Results 1 – 8 of 8
On Exploiting Problem Structure in a Basis Identification Procedure for Linear Programming
In: INFORMS Journal on Computing, 1997
Abstract

Cited by 5 (0 self)
During the last decade interior-point methods have become an efficient alternative to the simplex algorithm for the solution of large-scale linear programming (LP) problems. However, in many practical applications of LP, interior-point methods have the drawback that they do not generate an optimal basic and nonbasic partition of the variables. This partition is required in traditional sensitivity analysis and is highly useful when a sequence of related LP problems is solved. Therefore, in this paper we discuss how an optimal basic solution can be generated from the interior-point solution. The emphasis of the paper is on how problem structure can be exploited to reduce the computational cost associated with the basis identification. Computational results are presented which indicate that it is highly advantageous to exploit problem structure. Key words: linear programming, interior-point methods, basis identification.
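The core idea behind basis identification can be sketched in a few lines: given a (near-optimal) interior-point iterate, guess the basic set from the variable magnitudes and solve for the corresponding basic solution. The Python sketch below is a toy illustration of that idea only; the paper's actual procedure additionally handles degeneracy, numerical rank, and the structure exploitation that is its main topic. The name `identify_basis` is invented here.

```python
import numpy as np

def identify_basis(A, b, x_interior, tol=1e-8):
    """Toy basis identification for  min c'x  s.t.  Ax = b, x >= 0:
    take the m largest components of an interior-point solution as the
    tentative basic set and solve B x_B = b.  (A real crossover, as in
    the paper, must treat degeneracy and numerical rank carefully.)"""
    m, n = A.shape
    basic = np.sort(np.argsort(x_interior)[-m:])   # tentative basic indices
    B = A[:, basic]
    x_B = np.linalg.solve(B, b)
    if np.any(x_B < -tol):
        raise ValueError("tentative basis is infeasible; a simplex "
                         "clean-up phase would be required")
    x = np.zeros(n)
    x[basic] = x_B
    return basic.tolist(), x
```

For instance, with A = [[1,1,1,0],[1,2,0,1]], b = [4,6] and an interior iterate close to (2, 2, 0, 0), the tentative basis {0, 1} yields the basic solution (2, 2, 0, 0).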
Analysis of Stochastic Problem Decomposition Algorithms in Computational Grids
Abstract

Cited by 4 (1 self)
Stochastic programming usually represents uncertainty discretely by means of a scenario tree. This representation leads to exponential growth in the size of stochastic mathematical problems when better accuracy is needed. Trying to solve the problem as a whole, considering all scenarios together, yields huge memory requirements that surpass the capabilities of current computers. Thus, decomposition algorithms are employed to divide the problem into several smaller subproblems and to coordinate their solution in order to obtain the global optimum. This paper analyzes several decomposition strategies based on the classical Benders decomposition algorithm and applies them in emerging computational grid environments. Most decomposition algorithms are not able to take full advantage of all the computing power available in a grid system because of unavoidable dependencies inherent to the algorithms. However, a special decomposition method presented in this paper aims at reducing the dependency among subproblems, to the point where all the subproblems can be sent simultaneously to the grid. All algorithms have been tested in a grid system, measuring the execution times required to solve standard optimization problems and a real-size hydrothermal coordination problem. Numerical results confirm that this new method outperforms the classical ones when used in grid computing environments.
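As a hedged illustration of the Benders (L-shaped) scheme this abstract builds on, the sketch below solves a toy two-stage problem invented for this example: one capacity variable x with per-scenario shortage recourse cost d_s·max(h_s − x, 0). The recourse subproblems are solved analytically (their duals give the optimality cuts), and `scipy.optimize.linprog` plays the role of the master solver; a real grid implementation would instead dispatch the per-scenario subproblems to remote workers.

```python
from scipy.optimize import linprog

def benders(c, cap, p, h, d, max_iters=100, tol=1e-7):
    """Single-cut Benders / L-shaped method for the toy problem
        min  c*x + sum_s p[s]*d[s]*max(h[s] - x, 0),   0 <= x <= cap.
    Subproblem duals pi_s = d[s] if h[s] > x else 0 yield the cut
        theta >= sum_s p[s]*pi_s*(h[s] - x)  =  G - H*x."""
    cuts = []                                    # list of (G, H)
    for _ in range(max_iters):
        # Master LP over variables [x, theta]: cut G - H*x - theta <= 0.
        A_ub = [[-H, -1.0] for (G, H) in cuts] or None
        b_ub = [-G for (G, _) in cuts] or None
        res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0.0, cap), (0.0, None)], method="highs")
        x, lb = res.x[0], res.fun
        # Exact expected recourse cost gives an upper bound.
        ub = c * x + sum(ps * ds * max(hs - x, 0.0)
                         for ps, hs, ds in zip(p, h, d))
        if ub - lb <= tol:
            break
        pi = [ds if hs > x else 0.0 for hs, ds in zip(h, d)]  # duals
        cuts.append((sum(ps * pis * hs for ps, pis, hs in zip(p, pi, h)),
                     sum(ps * pis for ps, pis in zip(p, pi))))
    return x, ub
```

The cut loop is inherently sequential (each master solve waits for the subproblems), which is exactly the dependency the paper's grid-oriented variant tries to reduce.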
Parallel Continuous Optimization
2000
Abstract

Cited by 1 (0 self)
Parallel continuous optimization methods are motivated here by applications in science and engineering. The key issues are addressed at different computational levels, including local and global optimization, as well as strategies for large, sparse problems versus small but expensive ones. Topics covered include global optimization, direct search with and without surrogates, optimization of linked subsystems, and variable and constraint distribution. Finally, there is a discussion of future research directions. Key words: parallel optimization, local and global optimization, large-scale optimization, direct search methods, surrogate optimization, optimization of linked subsystems, design optimization, cluster simulation, macromolecular modeling.
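Direct search is among the most naturally parallel of the methods surveyed: the poll points of each iteration are independent function evaluations. A minimal sketch of this idea follows (compass search with concurrently evaluated poll points); `ThreadPoolExecutor` merely stands in for real worker processes or grid nodes, and pays off only when evaluations are genuinely expensive or release the GIL.

```python
from concurrent.futures import ThreadPoolExecutor

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Compass (direct) search with parallel poll evaluation: at each
    iteration the 2n trial points x +/- step*e_i are evaluated
    concurrently; move to the best improving point, else halve the step."""
    x, fx, evals = list(x0), f(list(x0)), 1
    with ThreadPoolExecutor() as pool:
        while step > tol and evals < max_evals:
            polls = []
            for i in range(len(x)):
                for s in (step, -step):
                    y = list(x)
                    y[i] += s
                    polls.append(y)
            vals = list(pool.map(f, polls))   # independent, parallel evaluations
            evals += len(polls)
            best = min(range(len(polls)), key=vals.__getitem__)
            if vals[best] < fx:
                x, fx = polls[best], vals[best]
            else:
                step /= 2.0
    return x, fx
```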
Reliable Outer Bounds for the Dual Simplex Algorithm with Interval Right-Hand Side
Abstract
In this article, we describe the reliable computation of outer bounds for linear programming problems occurring in linear relaxations derived from Bernstein polynomials. The computation uses interval arithmetic for the Gauss-Jordan pivot steps on a simplex tableau. The resulting errors are stored as interval right-hand sides. Additionally, we show how to generate a starting basis for linear programs of this type. We give details of the implementation using OpenMP and comment on numerical experiments. Keywords: verified simplex algorithm; interval arithmetic; tableau form; OpenMP parallelization.
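The central mechanism, Gauss-Jordan pivoting with interval entries, can be sketched as follows. This is an illustration only: a truly verified implementation rounds interval endpoints outward with directed rounding, which the plain Python floats below do not do, and the paper keeps intervals only in the right-hand side column rather than in the whole tableau.

```python
def isub(a, b):  # interval subtraction: [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):  # interval multiplication: min/max over endpoint products
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def idiv(a, b):  # interval division (divisor must not contain zero)
    assert not (b[0] <= 0.0 <= b[1]), "divisor interval contains zero"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def pivot(T, r, c):
    """One Gauss-Jordan pivot at (r, c) on a tableau of interval entries;
    interval right-hand-side entries absorb the accumulated error."""
    piv = T[r][c]
    T[r] = [idiv(e, piv) for e in T[r]]
    for i in range(len(T)):
        if i != r:
            f = T[i][c]
            T[i] = [isub(e, imul(f, g)) for e, g in zip(T[i], T[r])]
    return T
```

Pivoting a tableau for 2x + y = [3.9, 4.1], x + 3y = [5.9, 6.1] at (0,0) and (1,1) yields interval solution columns that enclose the exact center solution x = 1.2, y = 1.6.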
What Could a Million Cores Do To Solve Integer Programs?
2012
Abstract
Given the steady increase in cores per CPU, it is only a matter of time until supercomputers will have a million or more cores. In this article, we investigate the opportunities and challenges that will arise when trying to utilize this vast computing power to solve a single integer linear optimization problem. We also raise the question of whether best practices in the sequential solution of ILPs will remain effective in massively parallel environments.
Cooperating Multi-Core and Multi-GPU in the Computation of the Multidimensional Voronoi Adjacency in Machine Learning Datasets
Abstract
A cooperative framework is presented in this paper in which multiple cores in the host and multiple GPUs cooperate to compute the Voronoi adjacency relationship in multidimensional machine learning datasets. Voronoi adjacency plays a very important role in neighbor-based procedures for classification and data condensation. The proposal includes a system of Polytope Inclusion Agents, which compute the Delaunay polytope containing a given point, and a coordination system among the agents that handles scheduling and load balancing. Each Polytope Inclusion Agent uses the dual simplex algorithm to solve a linear programming problem. The results show that for small datasets the use of GPUs is a drawback, while for larger ones the GPUs take advantage of their massive parallelism.
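The kind of inclusion test each agent performs can be illustrated, in simplified form, as an LP feasibility problem: a query point lies in the convex hull of a point set iff it is a convex combination of those points. The sketch below uses `scipy.optimize.linprog` rather than a hand-rolled dual simplex, and `in_convex_hull` is a name invented here; the paper's agents solve an analogous LP per Delaunay polytope.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, q):
    """LP feasibility test: q lies in conv(points) iff there exist
    lambda_i >= 0 with sum(lambda_i) = 1 and sum(lambda_i * p_i) = q."""
    P = np.asarray(points, dtype=float)            # n points, d dimensions
    q = np.asarray(q, dtype=float)
    n = P.shape[0]
    A_eq = np.vstack([P.T, np.ones((1, n))])       # coordinate rows + convexity row
    b_eq = np.concatenate([q, [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * n, method="highs")
    return res.status == 0                         # 0 = feasible optimum found
```

For the unit square, the point (0.5, 0.5) is inside and (2, 2) is outside; batches of such LPs are what the framework distributes across cores and GPUs.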