Boundary Behavior Of Interior Point Algorithms In Linear Programming
Cited by 29 (2 self)
This paper studies the boundary behavior of some interior point algorithms for linear programming. The algorithms considered are Karmarkar's projective rescaling algorithm, the linear rescaling algorithm which was proposed as a variation on Karmarkar's algorithm, and the logarithmic barrier technique. The study includes both the continuous trajectories of the vector fields induced by these algorithms and also the discrete orbits. It is shown that, although the algorithms are defined on the interior of the feasible polyhedron, they actually determine differentiable vector fields on the closed polyhedron. Conditions are given under which a vector field gives rise to trajectories that each visit the neighborhoods of all the vertices of the Klee-Minty cube. The linear rescaling algorithm satisfies these conditions. Thus, limits of such trajectories, obtained when a starting point is pushed to the boundary, may have an exponential number of breakpoints. It is shown that limits of projective rescaling trajectories may have only a linear number of such breakpoints. It is, however, shown that projective rescaling trajectories may visit the neighborhoods of linearly many vertices. The behavior of the linear rescaling algorithm near vertices is analyzed. It is shown that all the trajectories have a unique asymptotic direction of convergence to the optimum.
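The logarithmic barrier technique mentioned above induces a vector field by descending the barrier objective. A minimal sketch of that direction for an LP in the form min c·x subject to Ax ≥ b (an illustrative discretization, not the paper's trajectory analysis; the function name and toy instance are hypothetical):

```python
import numpy as np

def barrier_direction(A, b, c, x, mu):
    """Steepest-descent direction of the log-barrier objective
    f_mu(x) = c.x - mu * sum(log(A x - b))  for  min c.x s.t. A x >= b.
    Sketch only: the paper studies the induced vector field and its
    boundary limits, not this particular discretization."""
    s = A @ x - b                     # slacks; must be positive (interior point)
    assert np.all(s > 0), "x must be strictly feasible"
    grad = c - mu * (A.T @ (1.0 / s))  # gradient of the barrier objective
    return -grad                       # move against the gradient

# toy example: minimize x1 + x2 over the unit box [0, 1]^2
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([0.0, 0.0, -1.0, -1.0])
c = np.array([1.0, 1.0])
d = barrier_direction(A, b, c, np.array([0.5, 0.5]), mu=0.1)  # -> [-1., -1.]
```

At the center of the box the barrier gradient contributions cancel, so the direction is simply -c, pointing toward the optimal vertex at the origin.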
Primal-Dual Target-Following Algorithms for Linear Programming
 ANNALS OF OPERATIONS RESEARCH
, 1993
Cited by 26 (1 self)
In this paper we propose a method for linear programming with the property that, starting from an initial noncentral point, it generates iterates that simultaneously get closer to optimality and closer to centrality. The iterates follow paths that in the limit are tangential to the central path. Along with the convergence analysis, we provide a general framework which enables us to analyze various primal-dual algorithms in the literature in a short and uniform way.
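"Closer to centrality" can be made concrete: on the central path all complementarity products x_i·s_i of a primal-dual pair are equal (to the barrier parameter μ). A small hypothetical helper illustrating that notion (not the paper's exact proximity measure):

```python
import numpy as np

def centrality_gap(x, s):
    """For a primal-dual pair (x, s) with x, s > 0:
    - mu: the average complementarity product (normalized duality gap)
    - spread: max/min ratio of the products, equal to 1.0 exactly on
      the central path and larger the less central the point is.
    Illustrative helper; the paper uses its own proximity measure."""
    v = x * s
    return v.mean(), v.max() / v.min()

# a noncentral point: products are [2, 2] here, so this one is central
mu, spread = centrality_gap(np.array([1.0, 2.0]), np.array([2.0, 1.0]))
```

Target-following methods choose targets for the vector of products v that shrink toward zero while the spread is driven toward 1.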
Polynomial interior point cutting plane methods
 Optimization Methods and Software
, 2003
Cited by 25 (9 self)
Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the relaxation. Typically, these cutting plane methods can be developed so as to exhibit polynomial convergence. The volumetric cutting plane algorithm achieves the theoretical minimum number of calls to a separation oracle. Long-step versions of the algorithms for solving convex optimization problems are presented.
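The relax / center / query-oracle / cut loop is easiest to see in one dimension, where the relaxation is an interval, the "approximate center" is its midpoint, and the oracle is a subgradient sign. A sketch of that generic scheme (the function name is hypothetical; the surveyed algorithms use barrier and volumetric centers, not midpoints):

```python
def cutting_plane_1d(subgrad, lo, hi, tol=1e-8):
    """Generic cutting-plane loop, 1-D version: query the oracle at the
    center of the current relaxation [lo, hi] and cut away the half that
    cannot contain the minimizer."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0   # approximate center of the relaxation
        g = subgrad(mid)        # separation oracle: returns a cut
        if g > 0:               # minimizer lies to the left of mid
            hi = mid
        elif g < 0:             # minimizer lies to the right of mid
            lo = mid
        else:
            return mid
    return (lo + hi) / 2.0

# minimize (x - 3)^2 on [0, 10]; the subgradient is 2(x - 3)
xstar = cutting_plane_1d(lambda x: 2.0 * (x - 3.0), 0.0, 10.0)
```

In higher dimensions the oracle returns a separating hyperplane rather than a sign, and the choice of center (analytic vs. volumetric) is what drives the complexity bounds.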
Smoothed Analysis of Termination of Linear Programming Algorithms
Cited by 23 (3 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng
Advances in convex optimization: Conic programming
 In Proceedings of International Congress of Mathematicians
, 2007
Cited by 23 (0 self)
Abstract. During the last two decades, major developments in convex optimization have focused on conic programming, primarily on linear, conic quadratic and semidefinite optimization. Conic programming allows one to reveal the rich structure which is usually possessed by a convex program and to exploit this structure in order to process the program efficiently. In the paper, we overview the major components of the resulting theory (conic duality and primal-dual interior point polynomial time algorithms), outline the extremely rich “expressive abilities” of conic quadratic and semidefinite programming, and discuss a number of instructive applications.
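A tiny instance of those "expressive abilities": the hyperbolic constraint x·y ≥ z², x, y ≥ 0 is exactly the standard conic quadratic constraint ‖(2z, x − y)‖₂ ≤ x + y. A numeric check of that equivalence (function name hypothetical, chosen for illustration):

```python
import numpy as np

def in_rotated_cone(x, y, z):
    """Check x*y >= z^2 with x, y >= 0 via its standard conic quadratic
    (second-order cone) reformulation ||(2z, x - y)||_2 <= x + y."""
    return np.hypot(2.0 * z, x - y) <= x + y

ok  = in_rotated_cone(2.0, 2.0, 1.9)   # 2*2 = 4.00 >= 1.9^2 = 3.61 -> True
bad = in_rotated_cone(1.0, 1.0, 1.1)   # 1*1 = 1.00 <  1.1^2 = 1.21 -> False
```

Squaring ‖(2z, x − y)‖₂ ≤ x + y gives 4z² + (x − y)² ≤ (x + y)², i.e. 4z² ≤ 4xy, which is the original hyperbolic constraint.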
Solving Simple Stochastic Games with Few Random Vertices
Cited by 22 (4 self)
Abstract. We present a new algorithm for solving Simple Stochastic Games (SSGs). This algorithm is based on an exhaustive search of a special kind of positional optimal strategies, the f-strategies. The running time is O(|V_R|! · (|V||E| + p)), where |V|, |V_R|, |E| and p are, respectively, the number of vertices, the number of random vertices, the number of edges, and the maximum bit-length of a transition probability. Our algorithm improves on existing algorithms for solving SSGs in three respects: first, it performs well on SSGs with few random vertices; second, it does not rely on linear or quadratic programming; third, it applies to all SSGs, not only stopping SSGs.
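The |V_R|! factor in the bound comes from exhaustively enumerating orderings of the random vertices, each of which induces one candidate f-strategy. An enumeration skeleton only (the construction and evaluation of each candidate is elided; this is not the paper's algorithm):

```python
from itertools import permutations

def candidate_orders(random_vertices):
    """Yield every ordering of the random vertices; the exhaustive
    search tests the f-strategy induced by each ordering.  Skeleton
    only -- building and testing the strategy is omitted."""
    for order in permutations(random_vertices):
        yield order

# with 3 random vertices there are 3! = 6 candidate orderings
count = sum(1 for _ in candidate_orders(["r1", "r2", "r3"]))
```

Each candidate is then checked in time O(|V||E| + p), which is why the method is attractive when |V_R| is small.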
PAC Learning Intersections of Halfspaces with Membership Queries
 ALGORITHMICA
, 1998
Cited by 21 (1 self)
A randomized learning algorithm Polly is presented that efficiently learns intersections of s halfspaces in n dimensions, in time polynomial in both s and n. The learning protocol is the "PAC" (probably approximately correct) model of Valiant, augmented with membership queries. In particular, Polly receives a set S of m = poly(n, s, 1/ε, 1/δ) randomly generated points from an arbitrary distribution over the unit hypercube, and is told exactly which points are contained in, and which points are not contained in, the convex polyhedron P defined by the halfspaces. Polly may also obtain the same information about points of its own choosing. It is shown that after poly(n, s, 1/ε, 1/δ, log(1/d)) time, the probability that Polly fails to output a collection of s halfspaces with classification error at most ε is at most δ. Here, d is the minimum distance between the boundary of the target and those examples in S that are not lying on the boundary. The parameter log(1/d) can be ...
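The membership-query interface the protocol grants Polly is simple to state: given any point of the learner's choosing, report whether it lies in the target polyhedron P. An interface sketch only (the learning algorithm itself is elided; the representation of halfspaces here is hypothetical):

```python
import numpy as np

def membership_oracle(halfspaces, x):
    """Membership query for P = intersection of halfspaces {w.x <= b}:
    report whether x lies in P.  This is the oracle the learner may
    call on points of its own choosing; the learner never sees the
    halfspaces (w, b) directly."""
    return all(np.dot(w, x) <= b for (w, b) in halfspaces)

# target: the unit square as an intersection of 4 halfspaces
H = [((1.0, 0.0), 1.0), ((-1.0, 0.0), 0.0),
     ((0.0, 1.0), 1.0), ((0.0, -1.0), 0.0)]
inside = membership_oracle(H, np.array([0.5, 0.5]))   # True
```

Queries of this form let the learner probe near the boundary of P, which is where the distance parameter d in the time bound comes into play.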
A polynomial primal-dual Dikin-type algorithm for linear programming
 FACULTY OF TECHNICAL MATHEMATICS AND COMPUTER SCIENCE, DELFT UNIVERSITY OF TECHNOLOGY
, 1993
Cited by 20 (9 self)
In this paper we present a new primal-dual affine scaling method for linear programming. The method yields a strictly complementary optimal solution pair, and also allows a polynomial-time convergence proof. The search direction is obtained by using the original idea of Dikin, namely by minimizing the objective function (which is the duality gap in the primal-dual case) over some suitable ellipsoid. This gives rise to completely new primal-dual affine scaling directions, having no obvious relation with the search directions proposed in the literature so far. The new directions guarantee a significant decrease in the duality gap in each iteration, and at the same time they drive the iterates to the central path. In the analysis of our algorithm we use a barrier function which is the natural primal-dual generalization of Karmarkar's potential function. The iteration bound is O(nL), which is a factor O(L) better than the iteration bound of an earlier primal-dual affine scaling meth...
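Dikin's original idea, in its simplest primal form for min c·x, x ≥ 0 (equality constraints omitted for brevity): minimize the linear objective over the ellipsoid ‖X⁻¹d‖ ≤ 1 with X = diag(x), which has the closed-form solution d = −X²c / ‖Xc‖. A sketch of that direction (illustrative of the "minimize over a suitable ellipsoid" idea only, not the paper's primal-dual direction):

```python
import numpy as np

def dikin_direction(c, x):
    """Dikin's affine-scaling direction: the minimizer of c.d over the
    ellipsoid ||X^{-1} d|| <= 1, X = diag(x), is d = -X^2 c / ||X c||.
    The ellipsoid shrinks near the boundary x_i -> 0, so steps stay
    inside the positive orthant."""
    Xc = x * c                               # X c, componentwise
    return -(x * Xc) / np.linalg.norm(Xc)    # -X^2 c / ||X c||

d = dikin_direction(np.array([1.0, 0.0]), np.array([0.5, 0.5]))  # -> [-0.5, 0.]
```

The paper replaces the objective by the duality gap and the ellipsoid by a primal-dual one, which is what produces the new directions described above.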
An Accelerated Interior Point Method Whose Running Time Depends Only on A
 IN PROCEEDINGS OF 26TH ANNUAL ACM SYMPOSIUM ON THE THEORY OF COMPUTING
, 1993
Cited by 19 (2 self)
We propose a "layered-step" interior point (LIP) algorithm for linear programming. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns the exact global minimum after a finite number of steps; in particular, after O(n^3.5 c(A)) iterations, where c(A) is a function of the coefficient matrix. The LLS steps can be thought of as accelerating a path-following interior point method whenever near-degeneracies occur. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved ...