Results 11 - 20 of 93
Boundary Behavior Of Interior Point Algorithms In Linear Programming
Abstract

Cited by 22 (2 self)
This paper studies the boundary behavior of some interior point algorithms for linear programming. The algorithms considered are Karmarkar's projective rescaling algorithm, the linear rescaling algorithm which was proposed as a variation on Karmarkar's algorithm, and the logarithmic barrier technique. The study includes both the continuous trajectories of the vector fields induced by these algorithms and also the discrete orbits. It is shown that, although the algorithms are defined on the interior of the feasible polyhedron, they actually determine differentiable vector fields on the closed polyhedron. Conditions are given under which a vector field gives rise to trajectories that each visit the neighborhoods of all the vertices of the Klee-Minty cube. The linear rescaling algorithm satisfies these conditions. Thus, limits of such trajectories, obtained when a starting point is pushed to the boundary, may have an exponential number of breakpoints. It is shown that limits of projective rescaling trajectories may have only a linear number of such breakpoints. It is, however, shown that projective rescaling trajectories may visit the neighborhoods of linearly many vertices. The behavior of the linear rescaling algorithm near vertices is analyzed. It is shown that all the trajectories have a unique asymptotic direction of convergence to the optimum.
Primal-Dual Target-Following Algorithms for Linear Programming
 ANNALS OF OPERATIONS RESEARCH
, 1993
Abstract

Cited by 22 (1 self)
In this paper we propose a method for linear programming with the property that, starting from an initial non-central point, it generates iterates that simultaneously get closer to optimality and closer to centrality. The iterates follow paths that in the limit are tangential to the central path. Along with the convergence analysis we provide a general framework which enables us to analyze various primal-dual algorithms in the literature in a short and uniform way.
PAC Learning Intersections of Halfspaces with Membership Queries
 ALGORITHMICA
, 1998
Abstract

Cited by 21 (1 self)
A randomized learning algorithm Polly is presented that efficiently learns intersections of s halfspaces in n dimensions, in time polynomial in both s and n. The learning protocol is the "PAC" (probably approximately correct) model of Valiant, augmented with membership queries. In particular, Polly receives a set S of m = poly(n, s, 1/ε, 1/δ) randomly generated points from an arbitrary distribution over the unit hypercube, and is told exactly which points are contained in, and which points are not contained in, the convex polyhedron P defined by the halfspaces. Polly may also obtain the same information about points of its own choosing. It is shown that after poly(n, s, 1/ε, 1/δ, log(1/d)) time, the probability that Polly fails to output a collection of s halfspaces with classification error at most ε is at most δ. Here, d is the minimum distance between the boundary of the target and those examples in S that do not lie on the boundary. The parameter log(1/d) can be ...
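The core use of membership queries in algorithms of this kind is to locate points near the boundary of the target by bisecting the segment between a positive and a negative example; each query halves the uncertainty, so ~log(1/tol) queries suffice. A minimal sketch with a hypothetical 2-D target (not the actual Polly algorithm):

```python
# Hypothetical target: intersection of two halfspaces inside the unit square.
def member(x):
    return x[0] + x[1] <= 1.2 and x[0] - x[1] <= 0.5

def boundary_point(inside, outside, oracle, tol=1e-9):
    """Bisect the segment [inside, outside]; each membership query halves
    the remaining interval until a near-boundary point is isolated."""
    lo, hi = inside, outside
    while max(abs(a - b) for a, b in zip(lo, hi)) > tol:
        mid = tuple((a + b) / 2 for a, b in zip(lo, hi))
        if oracle(mid):
            lo = mid          # mid is still inside the target
        else:
            hi = mid          # mid is outside the target
    return lo

p = boundary_point((0.0, 0.0), (1.0, 1.0), member)
# p lies (within tol) where the segment crosses the facet x + y = 1.2
```

The returned endpoint is always a positive example, so it sits just inside the facet it has located.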
Solving Simple Stochastic Games with Few Random Vertices
Abstract

Cited by 18 (6 self)
Abstract. We present a new algorithm for solving Simple Stochastic Games (SSGs). This algorithm is based on an exhaustive search over a special kind of positional optimal strategies, the f-strategies. The running time is O(|V_R|! · (|V||E| + |p|)), where |V|, |V_R|, |E| and |p| are respectively the number of vertices, the number of random vertices, the number of edges, and the maximum bit-length of a transition probability. Our algorithm improves on existing algorithms for solving SSGs in three respects: first, it performs well on SSGs with few random vertices; second, it does not rely on linear or quadratic programming; third, it applies to all SSGs, not only stopping SSGs.
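To make concrete what "solving" an SSG computes — the optimal reachability value at every vertex under max/min/random moves — here is a plain value-iteration sketch on a hypothetical toy game. This is a standard baseline, not the paper's f-strategy enumeration:

```python
# Hypothetical toy SSG: sinks carry payoffs, "avg" vertices move uniformly
# at random, "max"/"min" vertices are controlled by the two players.
GRAPH = {
    "s":  ("sink", 0.0),          # losing sink
    "t":  ("sink", 1.0),          # winning sink
    "r":  ("avg", ["s", "t"]),    # random vertex: each successor with prob 1/2
    "mx": ("max", ["r", "s"]),    # max player picks the best successor
    "mn": ("min", ["mx", "t"]),   # min player picks the worst successor
}

def solve(graph, iters=200):
    """Value iteration: repeatedly back up values until they stabilize."""
    v = {u: 0.0 for u in graph}
    for _ in range(iters):
        for u, (kind, succ) in graph.items():
            if kind == "sink":
                v[u] = succ
            elif kind == "max":
                v[u] = max(v[w] for w in succ)
            elif kind == "min":
                v[u] = min(v[w] for w in succ)
            else:  # "avg": uniform random move
                v[u] = sum(v[w] for w in succ) / len(succ)
    return v

vals = solve(GRAPH)
# vals["r"] is 0.5: from the random vertex the play wins with probability 1/2
```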
A polynomial primal-dual Dikin-type algorithm for linear programming
 FACULTY OF TECHNICAL MATHEMATICS AND COMPUTER SCIENCE, DELFT UNIVERSITY OF TECHNOLOGY
, 1993
Abstract

Cited by 16 (9 self)
In this paper we present a new primal-dual affine scaling method for linear programming. The method yields a strictly complementary optimal solution pair, and also allows a polynomial-time convergence proof. The search direction is obtained by using the original idea of Dikin, namely by minimizing the objective function (which is the duality gap in the primal-dual case) over some suitable ellipsoid. This gives rise to completely new primal-dual affine scaling directions, having no obvious relation to the search directions proposed in the literature so far. The new directions guarantee a significant decrease in the duality gap in each iteration, and at the same time they drive the iterates to the central path. In the analysis of our algorithm we use a barrier function which is the natural primal-dual generalization of Karmarkar's potential function. The iteration bound is O(nL), which is a factor O(L) better than the iteration bound of an earlier primal-dual affine scaling method ...
Polynomial interior point cutting plane methods
 Optimization Methods and Software
, 2003
Abstract

Cited by 15 (8 self)
Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the relaxation. Typically, these cutting plane methods can be developed so as to exhibit polynomial convergence. The volumetric cutting plane algorithm achieves the theoretical minimum number of calls to a separation oracle. Long-step versions of the algorithms for solving convex optimization problems are presented.
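The relaxation / approximate-center / separation-oracle loop is easiest to see in one dimension, where the relaxation is an interval, the "approximate center" is its midpoint, and each cut discards half the interval. An illustrative sketch (a caricature of the generic loop, not any of the surveyed algorithms):

```python
def cutting_plane_min(subgrad, lo, hi, tol=1e-9):
    """Minimize a convex function on [lo, hi] using only a subgradient
    (separation) oracle queried at the center of the current relaxation."""
    while hi - lo > tol:
        center = (lo + hi) / 2          # approximate center of the relaxation
        g = subgrad(center)             # oracle call at the center
        if g > 0:                       # cut: the minimizer lies in [lo, center]
            hi = center
        elif g < 0:                     # cut: the minimizer lies in [center, hi]
            lo = center
        else:
            return center               # zero subgradient: done
    return (lo + hi) / 2

# f(x) = (x - 2)^2 has subgradient 2*(x - 2); minimizer x* = 2
xstar = cutting_plane_min(lambda x: 2 * (x - 2), 0.0, 5.0)
```

In higher dimensions the interval becomes a polyhedron and the midpoint becomes an analytic or volumetric center, but the loop structure is the same.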
A General Framework of Continuation Methods for Complementarity Problems
 MATH. OF OPER. RES
, 1994
Abstract

Cited by 14 (2 self)
A new class of continuation methods is presented which, in particular, solve linear complementarity problems with copositive-plus and L-matrices. Let a, b ∈ R^n be nonnegative vectors. We embed the complementarity problem with a continuously differentiable mapping f : R^n → R^n in an artificial system of F(x, y) = (θa, μb) and (x, y) ≥ 0, (*) where F : R^{2n} → R^{2n} is defined by F(x, y) = (x_1 y_1, ..., x_n y_n, y − f(x)) and θ ≥ 0 and μ ≥ 0 are parameters. A pair (x, y) is a solution of the complementarity problem if and only if it solves (*) for θ = 0 and μ = 0. A general idea of continuation methods founded on the system (*) is as follows.
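A toy illustration of the embedding for n = 1, with a = b = 1 and the hypothetical mapping f(x) = x − 1 (whose complementarity solution is x = 1, y = 0): shrink θ = μ toward zero and take one Newton step on F(x, y) = (μ, θ) after each reduction. This is only a caricature of a continuation method under these assumptions:

```python
def f(x):
    return x - 1.0          # toy complementarity mapping (assumption)

def fprime(x):
    return 1.0

def follow_path(x=2.0, y=2.0, shrink=0.5, iters=60):
    mu, theta = x * y, y - f(x)   # start exactly on the artificial system
    for _ in range(iters):
        mu *= shrink
        theta *= shrink
        r1 = x * y - mu           # residual of x * y = mu
        r2 = y - f(x) - theta     # residual of y - f(x) = theta
        det = y + x * fprime(x)   # det of Jacobian [[y, x], [-f'(x), 1]]
        dx = (-r1 + x * r2) / det
        dy = (-y * r2 - fprime(x) * r1) / det
        x, y = x + dx, y + dy
    return x, y

x, y = follow_path()
# (x, y) approaches the complementarity solution (1, 0) as mu, theta -> 0
```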
Interior Point Algorithms For Linear Complementarity Problems Based On Large Neighborhoods Of The Central Path
 SIAM J. on Optimization
, 1998
Abstract

Cited by 14 (3 self)
In this paper we study a first-order and a high-order algorithm for solving linear complementarity problems. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems. The complexity of these algorithms depends on the size of the neighborhood. For the first-order algorithm, we achieve the complexity bound that typical large-step algorithms possess. It is well known that the complexity of large-step algorithms is greater than that of short-step ones. By using high-order power series (hence the name high-order algorithm), the iteration complexity can be reduced. We show that the complexity upper bound for our high-order algorithms is equal to that for short-step algorithms. Key Words: Interior point algorithm, High-order power series, Large neighborhood, Large step, Complexity, Linear complementarity problem. Abbreviated Title: Interior point algorithms based on large neighborhoods. AMS(MOS) subject classifications: 90...
On the Comparative Behavior of Kelley's Cutting Plane Method and the Analytic Center Cutting Plane Method
, 1996
Abstract

Cited by 12 (8 self)
In this paper, we explore a weakness of a specific implementation of the analytic center cutting plane method applied to convex optimization problems, which may lead to weaker results than Kelley's cutting plane method. Improvements to the analytic center cutting plane method are suggested.

1 Introduction

In this paper, we explore a weakness of a specific implementation of the analytic center cutting plane method, and propose improvements. Cutting plane algorithms are designed to solve general convex optimization problems. They assume that the only information available around the current iterate takes the form of cutting planes: either supporting hyperplanes to the epigraph of the objective function, or separating hyperplanes from the feasible set. The two types of hyperplanes jointly define a polyhedral (linear programming) relaxation of the original convex optimization problem. The key issue in designing a specific cutting plane algorithm is the choice of a point in the current poly...
A build-up variant of the path-following method for LP
 OR Letters
, 1991
Abstract

Cited by 12 (1 self)
We propose a strategy for building up the linear program while using a logarithmic barrier method. The method starts with a (small) subset of the dual constraints, and follows the corresponding central path until the iterate is close to (or violates) one of the constraints, which is in turn added to the current system. This process is repeated until an optimal solution is reached. If a constraint is added to the current system, the central path will, of course, change. We analyze the effect on the barrier function value if a constraint is added. More importantly, we give an upper bound for the number of iterations needed to return to the new path. We prove that in the worst case the complexity is the same as that of the standard logarithmic barrier method. In practice this build-up scheme is likely to save a great deal of computation. Key Words: interior point method, linear programming, logarithmic barrier function, polynomial algorithm, build-up variant.

1 Introduction

Karmarkar ...
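A one-dimensional caricature of the build-up strategy: maximize x subject to x ≤ b_i, follow the log-barrier path using only an active subset of the constraints, and add an inactive constraint as soon as the iterate comes within `delta` of it. The instance, thresholds, and schedule below are all hypothetical, chosen only to show the loop structure:

```python
def barrier_max(active, mu, lo=-10.0):
    """Maximize x + mu * sum(log(b - x)) over the active constraints by
    bisection on the (monotone decreasing) derivative of the barrier."""
    hi = min(active) - 1e-15
    for _ in range(200):
        mid = (lo + hi) / 2
        d = 1.0 - mu * sum(1.0 / (b - mid) for b in active)
        if d > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def build_up(bs, mu=1.0, delta=0.1, shrink=0.5, iters=60):
    active = [max(bs)]                     # start with a small subset
    x = barrier_max(active, mu)
    for _ in range(iters):
        for b in bs:                       # add any constraint we got close to
            if b not in active and x > b - delta:
                active.append(b)
        mu *= shrink                       # continue down the (new) path
        x = barrier_max(active, mu)
    return x

x = build_up([3.0, 2.0, 1.5])
# x approaches the optimum min(b_i) = 1.5
```

When a constraint enters the active set the barrier maximizer jumps to the new central path, mirroring the "return to the new path" cost analyzed in the abstract.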