New Interior Point Algorithms in Linear Programming
, 2003
Abstract

Cited by 1 (0 self)
In this paper the abstract of the thesis "New Interior Point Algorithms in Linear Programming" is presented. The purpose of the thesis is to elaborate new interior point algorithms for solving linear optimization problems. The theoretical complexity of the new algorithms is calculated, and we prove that these algorithms are polynomial. The thesis is composed of seven chapters. In the first chapter a short history of interior point methods is discussed. In the following three chapters some variants of the affine scaling, the projective and the path-following algorithms are presented. In the last three chapters new path-following interior point algorithms are defined. In the fifth chapter a new method for constructing search directions for interior point algorithms is introduced, and a new primal-dual path-following algorithm is defined. Polynomial complexity of this algorithm is proved; we note that this complexity matches the best complexity known at present. In the sixth chapter, using an approach similar to the one in the previous chapter, a new class of search directions for the self-dual problem is introduced. A new primal-dual algorithm is defined for solving the self-dual linear optimization problem, and its polynomial complexity is proved. In the last chapter the method proposed in the fifth chapter is generalized to target-following methods. A conceptual target-following algorithm is defined, and this algorithm is particularized in order to obtain a new primal-dual weighted-path-following method. The complexity of this algorithm is computed.
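The path-following idea behind these algorithms can be illustrated on a toy instance. The sketch below is an illustrative assumption, not the thesis's algorithm: for the one-constraint LP min x1 subject to x1 + x2 = 1, x >= 0, the barrier minimizer x1(mu) on the central path has a closed form, and driving the parameter mu to zero follows the path to the optimum.

```python
import math

# Toy LP: minimize x1  subject to  x1 + x2 = 1,  x1, x2 >= 0.
# The central path minimizes x1 - mu*(ln x1 + ln x2) over the feasible segment.
# Setting the derivative to zero with x2 = 1 - x1 yields the quadratic
#   x1^2 - (1 + 2*mu)*x1 + mu = 0,
# whose smaller root lies in (0, 1) and is the central-path point x1(mu).

def central_path_x1(mu):
    """Closed-form central-path point for the toy LP above."""
    b = 1.0 + 2.0 * mu
    return (b - math.sqrt(b * b - 4.0 * mu)) / 2.0

# Path-following: as mu decreases toward 0, x1(mu) tends to 0, the optimum.
for mu in (1.0, 0.1, 0.01, 1e-6):
    print(mu, central_path_x1(mu))
```

A practical interior point method does not have this closed form; it takes damped Newton steps toward x(mu) and reduces mu geometrically, which is where the polynomial iteration bounds mentioned above come from.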
The Finite Criss-Cross Method for Hyperbolic Programming
 Informatica, Technische Universiteit Delft, The Netherlands
, 1996
Abstract
In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm. Key words: hyperbolic programming, pivoting, criss-cross method
1 Introduction
The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation-type functions (see for instance [12]). The objective function of the hyperbolic programming problem is neither linear nor convex; however, there are several ...
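The claim that the fractional objective is neither linear nor convex is easy to verify numerically. The following sketch uses a made-up one-dimensional instance (not from the paper), f(x) = (2x + 1)/(x + 1), and compares its value at a midpoint with the chord through the endpoints:

```python
# Hypothetical 1-D hyperbolic objective: a quotient of two linear functions.
def f(x):
    return (2.0 * x + 1.0) / (x + 1.0)

a, b = 0.0, 1.0
mid = f((a + b) / 2.0)          # objective value at the midpoint
chord = (f(a) + f(b)) / 2.0     # average of the endpoint values

# f is linear iff mid == chord; f is convex only if mid <= chord.
# Here mid = 4/3 > 5/4 = chord, so f is neither linear nor convex.
print(mid, chord)
```

Despite this nonconvexity, the objective is quasi-monotone along edges of the feasible polyhedron, which is what makes pivoting methods such as the criss-cross method applicable.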
Criss-Cross Pivoting Rules
Abstract
Assuming that the reader is familiar with both the primal and dual simplex methods, Zionts' criss-cross method can easily be explained.
• It can be initialized by any, possibly both primal and dual infeasible, basis. If the basis is optimal, we are done. If the basis is not optimal, then there are some primal or dual infeasible variables. One might choose any of these; it is advised to choose alternately a primal and a dual infeasible variable, if possible.
• If the selected variable is dual infeasible, then it enters the basis, and the leaving variable is chosen among the primal feasible variables in such a way that primal feasibility of the currently primal feasible variables is preserved. If no such basis exchange is possible, another infeasible variable is selected.
• If the selected variable is primal infeasible, then it leaves the basis and the entering variable is chosen among the ...
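The variable-selection step in the bullets above can be sketched in code. The function name, the dictionary-based bookkeeping, and the alternation preference are illustrative assumptions, not Zionts' exact rule:

```python
# Sketch of the criss-cross selection step: given the current basis, list the
# primal infeasible basic variables (negative basic values) and the dual
# infeasible nonbasic variables (negative reduced costs), then pick one,
# honoring a preference that lets the caller alternate between the two kinds.

def select_variable(basic_values, reduced_costs, prefer="primal"):
    """Return ('optimal', None), ('primal', i), or ('dual', j)."""
    primal_infeasible = [i for i, v in basic_values.items() if v < 0]
    dual_infeasible = [j for j, c in reduced_costs.items() if c < 0]
    if not primal_infeasible and not dual_infeasible:
        return ("optimal", None)                   # primal and dual feasible
    if prefer == "primal" and primal_infeasible:
        return ("primal", min(primal_infeasible))  # this variable leaves
    if dual_infeasible:
        return ("dual", min(dual_infeasible))      # this variable enters
    return ("primal", min(primal_infeasible))

# Example: basic variable 2 has value -1 (primal infeasible), nonbasic
# variable 5 has reduced cost -3 (dual infeasible).
print(select_variable({1: 4.0, 2: -1.0}, {4: 0.5, 5: -3.0}, prefer="dual"))
```

The subsequent ratio test that keeps the currently feasible variables feasible, and the fallback when no admissible exchange exists, would wrap around this selection in a full implementation.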
K. Fukuda and T. Terlaky, Reports of the Faculty of Technical Mathematics and Informatics 99??, ISSN 0922-5641, Delft, January 1999
Abstract
In this paper the nondegeneracy assumption is removed. Our constructive proof relies on ideas similar to those developed for strongly polynomial basis identification techniques in interior point methods [10,11]. In the rest of this section we fix our notation and give formal definitions. In Section 2.1 for the primal and in Section 2.2 for the dual feasibility problem, it is shown that from any basis there exists an admissible pivot sequence to a feasible basis whose length is bounded by n and m, respectively. Our main result, in Section 3, shows that the answer to Question 1 is positive: there is always an admissible pivot sequence consisting of not more than m + n ...