Results 1–6 of 6
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
Abstract

Cited by 84 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
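The inexact inner solve described above can be sketched as a conjugate-gradient iteration that is stopped early once the residual is small. This is an illustrative sketch under the assumption of a symmetric positive-definite system (as arises from least-squares normal equations), not the authors' implementation; all names are hypothetical.

```python
import numpy as np

def truncated_cg(A, b, tol=1e-8, max_iter=None):
    """Conjugate gradient on a symmetric positive-definite system A x = b,
    terminated early once the residual norm drops below tol -- the kind of
    inexact inner solve the abstract says still preserves the polynomial
    iteration bound.  Illustrative sketch only."""
    n = b.size
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break          # "not run until completion": stop at the tolerance
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage: solve the SPD normal-equations system (M^T M) x = M^T b.
M = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
A, rhs = M.T @ M, M.T @ b
x = truncated_cg(A, rhs)
```

In practice the tolerance (or iteration cap) trades accuracy of the search direction against per-iteration cost, which is the saving the abstract refers to.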
Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming
 NUMERICAL ALGORITHMS
, 1999
Adaptive Use Of Iterative Methods In Interior Point Methods For Linear Programming
, 1995
Abstract

Cited by 15 (3 self)
In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given the costs of determining earlier directions. These ideas are tested by applying a modified version of the OB1R code of Lustig, Marsten, and Shanno to a variety of problems from the NETLIB and other collections. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained.
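The adaptive decision described above can be illustrated by a toy cost-prediction rule: estimate the cost of each option from the costs observed for earlier search directions and pick the cheapest. Everything here (function names, the degradation model, the cost estimates) is an assumption for illustration, not the actual logic of the OB1R-based code.

```python
def pick_strategy(prev_direct_cost, prev_iter_cost, preconditioner_age,
                  degradation=1.3):
    """Toy version of the adaptive solver choice: predict the cost of each
    option from costs observed for earlier search directions and choose the
    cheapest.  The degradation factor models an aging (merely updated)
    preconditioner making each iterative solve slower.  Hypothetical sketch."""
    predicted = {
        # A direct factorization costs about what it did last time.
        "direct": prev_direct_cost,
        # Keeping the old preconditioner: cheap, but degrades with age.
        "iterative-update": prev_iter_cost * degradation ** preconditioner_age,
        # Recomputing the preconditioner: assumed to cost an extra fraction
        # of a direct solve before iterating.
        "iterative-reinit": prev_iter_cost + 0.5 * prev_direct_cost,
    }
    return min(predicted, key=predicted.get)
```

For example, with a cheap recent iterative solve and a fresh preconditioner the rule keeps iterating, while a problem whose factorizations are cheap falls back to the direct solver, mirroring the behavior the abstract reports.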
Further Development on the Interior Algorithm for Convex Quadratic Programming
 Dept. of Engineering-Economic Systems, Stanford University
, 1987
Abstract

Cited by 8 (1 self)
The interior trust region algorithm for convex quadratic programming is further developed. This development is motivated by the barrier function and the "center" path-following methods, which create a sequence of primal and dual interior feasible points converging to the optimal solution. At each iteration, the gap between the primal and dual objective values (or the complementary slackness value) is reduced at a global convergence ratio (1 − 1/(4√n)), where n is the number of variables in the convex QP problem. A safeguard line search technique is also developed to relax the small-step-size restriction in the original path-following algorithm.
Key words: Convex Quadratic Programming, Primal and Dual, Complementary Slackness, Polynomial Interior Algorithm.
Abbreviated title: Interior Algorithm for Convex Quadratic Programming
Since Karmarkar proposed the new polynomial algorithm (Karmarkar [19]), several developments have been made to the growing literature on interior a...
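A per-iteration gap-reduction ratio of 1 − 1/(4√n) implies roughly O(√n · log(1/ε)) iterations to shrink the duality gap by a factor ε. A small illustrative calculation (taking the ratio exactly as stated in the abstract):

```python
import math

def iterations_to_tolerance(n, eps):
    """Count iterations needed to shrink a unit primal-dual gap below eps
    when each iteration multiplies it by (1 - 1/(4*sqrt(n))), the global
    convergence ratio quoted in the abstract.  Illustrative arithmetic only."""
    ratio = 1.0 - 1.0 / (4.0 * math.sqrt(n))
    k, gap = 0, 1.0
    while gap > eps:
        gap *= ratio
        k += 1
    return k

# With n = 100 variables the ratio is 0.975 per iteration, so reaching a
# 1e-6 gap takes on the order of a few hundred iterations; quadrupling n
# roughly doubles the count, consistent with the O(sqrt(n)) dependence.
k100 = iterations_to_tolerance(100, 1e-6)
k400 = iterations_to_tolerance(400, 1e-6)
```
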
A POLYNOMIAL-TIME INTERIOR-POINT METHOD FOR CONIC OPTIMIZATION, WITH INEXACT BARRIER EVALUATIONS ∗
Abstract

Cited by 1 (0 self)
Abstract. We consider a primal-dual short-step interior-point method for conic convex optimization problems for which exact evaluation of the gradient and Hessian of the primal and dual barrier functions is either impossible or prohibitively expensive. As our main contribution, we show that if approximate gradients and Hessians of the primal barrier function can be computed, and the relative errors in such quantities are not too large, then the method has polynomial worst-case iteration complexity. (In particular, polynomial iteration complexity ensues when the gradient and Hessian are evaluated exactly.) In addition, the algorithm requires no evaluation—or even approximate evaluation—of quantities related to the barrier function for the dual cone, even for problems in which the underlying cone is not self-dual.
CONVERGENCE IN KARMARKAR’S ALGORITHM FOR LINEAR PROGRAMMING*
Abstract
Abstract. Karmarkar’s algorithm is formulated so as to avoid the possibility of failure because of unbounded solutions. A general inequality gives an easy proof of the convergence of the iterations. It is shown that the parameter value a = 0.5 more than doubles the originally predicted rate of convergence. To go from the last iterate to an exact optimal solution, an O(n^3) termination algorithm is prescribed. If the data have maximum bit length independent of n, the composite algorithm is shown to have complexity O(n^4.5 log n).
Key words: linear programming, Karmarkar’s algorithm, projective-iterative method
AMS(MOS) subject classifications: 90C05, 90C06