Results 11–20 of 126
Disciplined convex programming
 Global Optimization: From Theory to Implementation, Nonconvex Optimization and Its Applications series
, 2006
"... ..."
Smoothed Analysis of Termination of Linear Programming Algorithms
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng ..."
Abstract

Cited by 21 (3 self)
 Add to MetaCart
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng ...
PAC Learning Intersections of Halfspaces with Membership Queries
 ALGORITHMICA
, 1998
"... A randomized learning algorithm Polly is presented that efficiently learns intersections of s halfspaces in n dimensions, in time polynomial in both s and n. The learning protocol is the "PAC" (probably approximately correct) model of Valiant, augmented with membership queries. In particul ..."
Abstract

Cited by 21 (1 self)
 Add to MetaCart
A randomized learning algorithm Polly is presented that efficiently learns intersections of s halfspaces in n dimensions, in time polynomial in both s and n. The learning protocol is the "PAC" (probably approximately correct) model of Valiant, augmented with membership queries. In particular, Polly receives a set S of m = poly(n, s, 1/ε, 1/δ) randomly generated points from an arbitrary distribution over the unit hypercube, and is told exactly which points are contained in, and which points are not contained in, the convex polyhedron P defined by the halfspaces. Polly may also obtain the same information about points of its own choosing. It is shown that after poly(n, s, 1/ε, 1/δ, log(1/d)) time, the probability that Polly fails to output a collection of s halfspaces with classification error at most ε is at most δ. Here, d is the minimum distance between the boundary of the target and those examples in S that are not lying on the boundary. The parameter log(1/d) can be ...
A polynomial primal-dual Dikin-type algorithm for linear programming
 FACULTY OF TECHNICAL MATHEMATICS AND COMPUTER SCIENCE, DELFT UNIVERSITY OF TECHNOLOGY
, 1993
"... In this paper we present a new primaldual affine scaling method for linear programming. The method yields a strictly complementary optimal solution pair, and also allows a polynomialtime convergence proof. The search direction is obtained by using the original idea of Dikin, namely by minimizing t ..."
Abstract

Cited by 16 (9 self)
 Add to MetaCart
In this paper we present a new primal-dual affine scaling method for linear programming. The method yields a strictly complementary optimal solution pair, and also allows a polynomial-time convergence proof. The search direction is obtained by using the original idea of Dikin, namely by minimizing the objective function (which is the duality gap in the primal-dual case) over some suitable ellipsoid. This gives rise to completely new primal-dual affine scaling directions, having no obvious relation with the search directions proposed in the literature so far. The new directions guarantee a significant decrease in the duality gap in each iteration, and at the same time they drive the iterates to the central path. In the analysis of our algorithm we use a barrier function which is the natural primal-dual generalization of Karmarkar's potential function. The iteration bound is O(nL), which is a factor O(L) better than the iteration bound of an earlier primal-dual affine scaling meth...
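As an illustration of the classical idea this paper generalizes, a single primal affine scaling step in Dikin's original spirit (minimize the objective over an ellipsoid around the current interior point) can be sketched as follows. This is the textbook primal variant, not the paper's new primal-dual directions, and all names are illustrative:

```python
import numpy as np

def affine_scaling_step(A, c, x, gamma=0.5):
    """One Dikin-style primal affine scaling step for
    min c.T x  subject to  A x = b, x > 0,
    starting from a strictly feasible interior point x.
    Textbook primal variant, sketched for illustration only."""
    X2 = np.diag(x ** 2)                              # ellipsoid scaling X^2
    y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)     # dual estimate
    s = c - A.T @ y                                   # reduced costs
    d = -X2 @ s                                       # descent direction; A d = 0
    neg = d < 0
    # damped step length that keeps x strictly positive
    alpha = gamma * np.min(-x[neg] / d[neg]) if neg.any() else 1.0
    return x + alpha * d
```

Because the direction satisfies A d = 0 and cᵀd = -sᵀX²s ≤ 0, each step preserves feasibility and cannot increase the objective.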
Solving Simple Stochastic Games with Few Random Vertices
"... Abstract. We present a new algorithm for solving Simple Stochastic Games (SSGs). This algorithm is based on an exhaustive search of a special kind of positional optimal strategies, the fstrategies. The running time is O ( VR! · (V E  + p)), where V , VR, E  and p  are respectively ..."
Abstract

Cited by 16 (4 self)
 Add to MetaCart
We present a new algorithm for solving Simple Stochastic Games (SSGs). This algorithm is based on an exhaustive search of a special kind of positional optimal strategies, the f-strategies. The running time is O(|VR|! · (|V||E| + p)), where |V|, |VR|, |E| and p are respectively the number of vertices, random vertices and edges, and the maximum bit-length of a transition probability. Our algorithm improves existing algorithms for solving SSGs in three aspects. First, our algorithm performs well on SSGs with few random vertices; second, it does not rely on linear or quadratic programming; third, it applies to all SSGs, not only stopping SSGs.
Polynomial interior point cutting plane methods
 Optimization Methods and Software
, 2003
"... Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approxim ..."
Abstract

Cited by 16 (8 self)
 Add to MetaCart
Polynomial cutting plane methods based on the logarithmic barrier function and on the volumetric center are surveyed. These algorithms construct a linear programming relaxation of the feasible region, find an appropriate approximate center of the region, and call a separation oracle at this approximate center to determine whether additional constraints should be added to the relaxation. Typically, these cutting plane methods can be developed so as to exhibit polynomial convergence. The volumetric cutting plane algorithm achieves the theoretical minimum number of calls to a separation oracle. Long-step versions of the algorithms for solving convex optimization problems are presented.
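The loop structure described in this abstract (relax, center, query oracle, cut) is easiest to see in one dimension, where the relaxation is an interval, its center is the midpoint, and the oracle discards one side; bisection is the simplest instance of a cutting plane method. The sketch below is illustrative and not the surveyed barrier or volumetric variants:

```python
def cutting_plane_1d(oracle, lo, hi, tol=1e-9):
    """Cutting plane loop on an interval relaxation [lo, hi].
    oracle(m) > 0 means the set of interest lies left of m,
    oracle(m) < 0 means it lies to the right, 0 means m is accepted."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)   # "approximate center" of the relaxation
        cut = oracle(mid)       # separation oracle call
        if cut > 0:
            hi = mid            # add the cut x <= mid
        elif cut < 0:
            lo = mid            # add the cut x >= mid
        else:
            return mid          # oracle accepts the center
    return 0.5 * (lo + hi)
```

For example, minimizing a smooth convex f over [lo, hi] with `oracle` returning f'(m) halves the relaxation at every oracle call.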
A General Framework of Continuation Methods for Complementarity Problems
 MATH. OF OPER. RES
, 1994
"... A new class of continuation methods is presented which, in particular, solve linear complementarity problems with copositiveplus and L matrices. Let a# b 2 R be nonnegativevectors. Weembed the complementarity problem with a continuously differentiable mapping f : R in an artificial system o ..."
Abstract

Cited by 15 (2 self)
 Add to MetaCart
A new class of continuation methods is presented which, in particular, solve linear complementarity problems with copositive-plus and L matrices. Let a, b ∈ R^n be nonnegative vectors. We embed the complementarity problem with a continuously differentiable mapping f : R^n → R^n in an artificial system F(x, y) = (θa, μb) and (x, y) ≥ 0, (*) where F : R^{2n} → R^{2n} is defined by F(x, y) = (x_1 y_1, ..., x_n y_n, y − f(x)) and θ ≥ 0 and μ ≥ 0 are parameters. A pair (x, y) is a solution of the complementarity problem if and only if it solves (*) for θ = 0 and μ = 0. A general idea of continuation methods founded on the system (*) is as follows.
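For the linear case f(x) = Mx + q, the continuation idea can be sketched as damped Newton steps on the perturbed system x_i y_i = μ, y = Mx + q, driving the parameter μ to zero. This is an illustrative simplification (one parameter, a = the all-ones vector), not the paper's general framework:

```python
import numpy as np

def lcp_continuation(M, q, tol=1e-8):
    """Sketch of a continuation method for the linear complementarity
    problem  y = M x + q,  x >= 0,  y >= 0,  x_i y_i = 0:
    trace solutions of the perturbed system x_i y_i = mu, y = M x + q
    as mu -> 0, using damped Newton steps. Illustrative only."""
    n = len(q)
    x = np.ones(n)
    y = np.ones(n)
    mu = 1.0
    for _ in range(200):
        # residual of the perturbed system at the current (x, y, mu)
        r = np.concatenate([x * y - mu, y - M @ x - q])
        if mu < tol and np.linalg.norm(r) < tol:
            break
        J = np.block([[np.diag(y), np.diag(x)],
                      [-M,         np.eye(n)]])
        d = np.linalg.solve(J, -r)
        dx, dy = d[:n], d[n:]
        # damp the step so that (x, y) stays strictly positive
        alpha = 1.0
        while ((x + alpha * dx) <= 0).any() or ((y + alpha * dy) <= 0).any():
            alpha *= 0.5
        x, y = x + alpha * dx, y + alpha * dy
        mu *= 0.7                     # shrink the continuation parameter
    return x, y
```

At μ = 0 the perturbed system reduces to the complementarity conditions themselves, mirroring the equivalence stated in the abstract.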
Interior Point Algorithms For Linear Complementarity Problems Based On Large Neighborhoods Of The Central Path
 SIAM J. on Optimization
, 1998
"... In this paper we study a firstorder and a highorder algorithm for solving linear complementarity problems. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems. The complexity of these algorithms depends on the size of the neig ..."
Abstract

Cited by 14 (3 self)
 Add to MetaCart
In this paper we study a first-order and a high-order algorithm for solving linear complementarity problems. These algorithms are implicitly associated with a large neighborhood whose size may depend on the dimension of the problems. The complexity of these algorithms depends on the size of the neighborhood. For the first-order algorithm, we achieve the complexity bound which the typical large-step algorithms possess. It is well-known that the complexity of large-step algorithms is greater than that of short-step ones. By using high-order power series (hence the name high-order algorithm), the iteration complexity can be reduced. We show that the complexity upper bound for our high-order algorithms is equal to that for short-step algorithms. Key Words: Interior point algorithm, High-order power series, Large neighborhood, Large step, Complexity, Linear complementarity problem. Abbreviated Title: Interior point algorithms based on large neighborhoods. AMS(MOS) subject classifications: 90...
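A common way to define such a large (wide) neighborhood of the central path is N_inf^-(γ) = {(x, y) > 0 : x_i y_i ≥ γμ for all i}, where μ = xᵀy/n is the average complementarity gap. A membership test for this textbook parameterization (which may differ from the paper's exact definition) is straightforward:

```python
import numpy as np

def in_wide_neighborhood(x, y, gamma=0.1):
    """Test whether (x, y) lies in the wide neighborhood
    N_inf^-(gamma) = {(x, y) > 0 : x_i * y_i >= gamma * mu},
    where mu = x.y / n is the average complementarity gap.
    Textbook parameterization, assumed for illustration."""
    mu = (x @ y) / len(x)
    return bool((x > 0).all() and (y > 0).all()
                and (x * y >= gamma * mu).all())
```

Smaller γ gives a wider neighborhood, which permits longer steps at the cost of the weaker iteration bounds the abstract alludes to.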
An Accelerated Interior Point Method Whose Running Time Depends Only on A
 IN PROCEEDINGS OF 26TH ANNUAL ACM SYMPOSIUM ON THE THEORY OF COMPUTING
, 1993
"... We propose a "layeredstep" interior point (LIP) algorithm for linear programming. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns the exact global minimum after a finit ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
We propose a "layered-step" interior point (LIP) algorithm for linear programming. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns the exact global minimum after a finite number of steps; in particular, after O(n^{3.5} c(A)) iterations, where c(A) is a function of the coefficient matrix. The LLS steps can be thought of as accelerating a path-following interior point method whenever near-degeneracies occur. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved ...
A build-up variant of the path-following method for LP
 Operations Research Letters
, 1992
"... ..."