Results 1–10 of 12
Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
1998
Cited by 734 (3 self)
Abstract
SeDuMi is an add-on for MATLAB that lets you solve optimization problems with linear, quadratic, and semidefiniteness constraints. It is possible to have complex-valued data and variables in SeDuMi. Moreover, large-scale optimization problems are solved efficiently by exploiting sparsity. This paper describes how to work with this toolbox.
A simplified homogeneous and self-dual linear programming algorithm and its implementation
Annals of Operations Research, 1996
Cited by 56 (5 self)
Abstract
1 Introduction. Consider the linear programming (LP) problem in the standard form: (LP) minimize cᵀx …
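The standard form referred to above can be illustrated on a small hypothetical instance (toy data, solved here with SciPy's general-purpose `linprog`, not with any solver from the paper):

```python
from scipy.optimize import linprog

# Standard-form LP:  minimize c^T x  subject to  A x = b,  x >= 0.
# Hypothetical toy data: put all mass on the cheapest coordinate.
c = [1.0, 2.0, 0.0]
A = [[1.0, 1.0, 1.0]]
b = [1.0]

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)   # optimal vertex x = (0, 0, 1), objective value 0
```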
A Path to the Arrow-Debreu Competitive Market Equilibrium
Math. Programming, 2004
Cited by 37 (7 self)
Abstract
We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have an arithmetic operation complexity bound of O(n^4 log(1/ε)) for computing an ε-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an exact solution is O(n^4 L), which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound O(n^8 log(1/ε)) for approximating the two problems using other methods. The key ingredient in deriving these results is to show that these problems admit convex optimization formulations, efficient barrier functions, and fast rounding techniques. We also present a continuous path leading to the set of Arrow-Debreu equilibria, similar to the central path developed for linear programming interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some primal-dual structure for the application of the Newton-based path-following method.
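As a concrete illustration of the convex-optimization route the abstract mentions: the Fisher market with linear utilities admits the classical Eisenberg–Gale program (maximize budget-weighted log utilities under unit-supply constraints). The sketch below solves a hypothetical two-buyer, two-good instance with a general-purpose solver; it is not the paper's interior-point algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy market: 2 buyers, 2 goods, unit supply of each good.
w = np.array([1.0, 1.0])           # buyer budgets
U = np.array([[1.0, 0.0],          # U[i, j]: buyer i's value per unit of good j
              [0.0, 1.0]])
n, m = U.shape

def neg_eg(xflat):
    """Negative Eisenberg-Gale objective: -sum_i w_i log(u_i(x))."""
    x = xflat.reshape(n, m)
    util = (U * x).sum(axis=1)
    return -(w * np.log(util + 1e-12)).sum()

# Supply constraints: sum over buyers of x_ij <= 1 for each good j.
cons = [{"type": "ineq",
         "fun": lambda xf, j=j: 1.0 - xf.reshape(n, m)[:, j].sum()}
        for j in range(m)]

res = minimize(neg_eg, x0=np.full(n * m, 0.5), method="SLSQP",
               bounds=[(0.0, None)] * (n * m), constraints=cons)
x = res.x.reshape(n, m)
print(x.round(3))   # each buyer ends up with the good only they value
```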
Probabilistic Analysis of an Infeasible-Interior-Point Algorithm for Linear Programming
1998
Cited by 11 (3 self)
Abstract
We consider an infeasible-interior-point algorithm, endowed with a finite termination scheme, applied to random linear programs generated according to a model of Todd. Such problems have degenerate optimal solutions, and possess no feasible starting point. We use no information regarding an optimal solution in the initialization of the algorithm. Our main result is that the expected number of iterations before termination with an exact optimal solution is O(n ln(n)). Keywords: linear programming, average-case behavior, infeasible-interior-point algorithm.
On generalized branching methods for mixed integer programming
2004
Cited by 9 (1 self)
Abstract
In this paper we present a restructuring of the computations in Lenstra’s methods for solving mixed integer linear programs. We show that the problem of finding a good branching hyperplane can be formulated on an adjoint lattice of the kernel lattice of the equality constraints without requiring any dimension reduction. As a consequence, the short-lattice-vector finding algorithms, such as Lenstra, Lenstra, and Lovász (LLL) [15] or the generalized basis reduction algorithm of Lovász and Scarf [18], are described in the space of the original variables. Based on these results we give a new, natural heuristic way of generating branching hyperplanes, and discuss its relationship with recent reformulation techniques of Aardal and Lenstra [1]. We show that the reduced basis available at the root node has useful information on the branching hyperplanes for the generalized branch-and-bound tree. Based on these results, algorithms are also given for solving mixed convex integer programs.
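For intuition about the lattice-reduction ingredient, here is a minimal sketch of the two-dimensional case, the Lagrange–Gauss reduction that LLL generalizes to higher dimensions; the basis below is hypothetical toy data, not an instance from the paper.

```python
import numpy as np

def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2-D lattice basis.
    Returns a reduced basis; b1 ends up a shortest nonzero lattice vector."""
    b1, b2 = np.array(b1, float), np.array(b2, float)
    while True:
        if b1 @ b1 > b2 @ b2:              # keep b1 the shorter vector
            b1, b2 = b2, b1
        m = round((b1 @ b2) / (b1 @ b1))   # nearest-integer projection coefficient
        if m == 0:
            return b1, b2
        b2 = b2 - m * b1                   # shorten b2 against b1

r1, r2 = gauss_reduce([1, 0], [4, 1])
print(r1, r2)   # reduced basis of Z^2: (1, 0) and (0, 1)
```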
A cutting surface method for uncertain linear programs with polyhedral stochastic dominance constraints
 SIAM Journal on Optimization
Cited by 3 (1 self)
Abstract
In this paper we study linear optimization problems with multidimensional linear positive second-order stochastic dominance constraints. By using the polyhedral properties of the second-order linear dominance condition, we present a cutting-surface algorithm and show its finite convergence. The cut generation problem is a difference-of-convex-functions (DC) optimization problem. We exploit the polyhedral structure of this problem to present a novel branch-and-cut algorithm that incorporates concepts from concave minimization and binary integer programming. A linear programming problem is formulated for generating concavity cuts in our case, where the polyhedron is unbounded. We also present duality results for this problem, relating the dual multipliers to utility functions, without the need to impose constraint qualifications, which again is possible because of the polyhedral nature of the problem. Numerical examples are presented showing the nature of solutions of our model.
On Free Variables In Interior Point Methods
1997
Cited by 2 (0 self)
Abstract
In this paper we have selected the primal-dual logarithmic barrier algorithm to present our ideas, because it and its modified versions are considered, in general, to be the most efficient in practice. The computational results presented in this paper were obtained using implementations of this algorithm. It is to be noted, however, that this choice has notational consequences only. Practically, any interior-point method, even nonlinear ones, can be discussed in a similar linear algebra framework. Let us consider the linear programming problem …
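The linear-algebra framework alluded to above can be sketched numerically. Below is a minimal, hypothetical implementation of the central ingredient, a damped Newton step of a primal-dual logarithmic barrier method, applied to a toy LP; it illustrates the general scheme only, not the paper's implementation.

```python
import numpy as np

# Toy LP (hypothetical data):  min c^T x  s.t.  A x = b,  x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

x = np.array([1/3, 1/3, 1/3])   # strictly feasible primal start
y = np.zeros(1)
z = c - A.T @ y                 # positive dual slacks

def newton_step(x, y, z, mu):
    m, n = A.shape
    # residuals of the perturbed primal-dual optimality conditions
    r = np.concatenate([A @ x - b, A.T @ y + z - c, x * z - mu])
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(z),       np.zeros((n, m)), np.diag(x)],
    ])
    d = np.linalg.solve(J, -r)
    dx, dy, dz = d[:n], d[n:n+m], d[n+m:]
    # fraction-to-boundary damping keeps x and z strictly positive
    alpha = 1.0
    for v, dv in ((x, dx), (z, dz)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * float(np.min(-v[neg] / dv[neg])))
    return x + alpha * dx, y + alpha * dy, z + alpha * dz

mu = (x @ z) / len(x)
for _ in range(30):
    mu *= 0.5                   # follow the central path toward mu = 0
    x, y, z = newton_step(x, y, z, mu)

print(x.round(4), x @ z)        # x approaches the vertex (1, 0, 0); the gap shrinks
```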
Fortran subroutines for network flow optimization using an interior point algorithm
2004
Cited by 2 (1 self)
Abstract
We describe Fortran subroutines for network flow optimization using an interior-point network flow algorithm that, together with a Fortran language driver, make up PDNET. The algorithm is described in detail and its implementation is outlined. Usage of the package is described and some computational experiments are reported. Source code for the software can be downloaded at …
A Stable Primal-Dual Approach for Linear Programming
Cited by 2 (1 self)
Abstract
This paper studies a primal-dual interior/exterior-point path-following approach for linear programming that is motivated by using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal-dual optimality equations Fµ(x, y, z) = 0. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e., it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. We use a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property. We then apply both a direct solution technique and a preconditioned conjugate gradient method (PCG), within an inexact Newton framework, directly on the linearized equations. This is done without forming the usual normal equations (NEQ) or the augmented system. Sparsity is maintained. The work of an iteration for the PCG approach consists almost entirely in the (approximate) solution of this well-posed linearized system. Therefore, improvements depend on efficient preconditioning.
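The well-posedness claim can be checked numerically on a toy instance (hypothetical data, not from the paper): at a nondegenerate optimum with strict complementarity, the Jacobian of the perturbed optimality equations stays nonsingular even at µ = 0.

```python
import numpy as np

# Toy LP (hypothetical data):  min c^T x  s.t.  A x = b,  x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

def F(x, y, z, mu):
    """Perturbed primal-dual optimality equations F_mu(x, y, z)."""
    return np.concatenate([A @ x - b,        # primal feasibility
                           A.T @ y + z - c,  # dual feasibility
                           x * z - mu])      # perturbed complementarity

def jacobian(x, z):
    m, n = A.shape
    return np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(z),       np.zeros((n, m)), np.diag(x)],
    ])

# Nondegenerate optimum: strict complementarity, x_i + z_i > 0 for every i.
x_opt = np.array([1.0, 0.0, 0.0])
y_opt = np.array([1.0])
z_opt = np.array([0.0, 1.0, 2.0])

print(np.abs(F(x_opt, y_opt, z_opt, 0.0)).max())   # 0.0: optimality conditions hold
print(np.linalg.det(jacobian(x_opt, z_opt)))       # nonzero: well-posed at mu = 0
```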