FATCOP 2.0: Advanced Features in an Opportunistic Mixed Integer Programming Solver
"... We describe FATCOP 2.0, a new parallel mixed integer program solver that works in an opportunistic computing environment provided by the Condor resource management system. We outline changes to the search strategy of FATCOP 1.0 that are necessary to improve resource utilization, together with new te ..."
Abstract

Cited by 28 (10 self)
 Add to MetaCart
We describe FATCOP 2.0, a new parallel mixed integer program solver that works in an opportunistic computing environment provided by the Condor resource management system. We outline changes to the search strategy of FATCOP 1.0 that are necessary to improve resource utilization, together with new techniques to exploit heterogeneous resources. We detail several advanced features in the code that are necessary for successful solution of a variety of mixed integer test problems, along with the different usage schemes that are pertinent to our particular computing environment. Computational results demonstrating the effects of the changes are provided and used to generate effective default strategies for the FATCOP solver.
Parallel Branch, Cut, and Price for Large-Scale Discrete Optimization
2003
Cited by 18 (5 self)
In discrete optimization, most exact solution approaches are based on branch and bound, which is conceptually easy to parallelize in its simplest forms. More sophisticated variants, such as the so-called branch, cut, and price algorithms, are more difficult to parallelize because of the need to share large amounts of knowledge discovered during the search process. In the first part of the paper, we survey the issues involved in parallelizing such algorithms. We then review the implementation of SYMPHONY and COIN/BCP, two existing frameworks for implementing parallel branch, cut, and price. These frameworks have limited scalability, but are effective on small numbers of processors. Finally, we briefly describe our next-generation framework, which improves scalability and further abstracts many of the notions inherent in parallel BCP, making it possible to implement and parallelize more general classes of algorithms.
A Library Hierarchy for Implementing Scalable Parallel Search Algorithms
The Journal of Supercomputing, 2001
Cited by 16 (5 self)
This report describes the design of the Abstract Library for Parallel Search (ALPS), a framework for implementing scalable, parallel algorithms based on tree search. ALPS is specifically designed to support data intensive algorithms, in which large amounts of data are required to describe each node in the search tree. Implementing such algorithms in a scalable manner is difficult due to data storage requirements. This report also describes the design of two other libraries built on top of ALPS, the first of which is the Branch, Constrain, and Price Software (BiCePS) library, a framework that supports the implementation of parallel branch and bound algorithms in which the bounding is based on some type of relaxation, usually Lagrangean. In this layer, the notion of global data objects associated with the variables and constraints is introduced. These global objects provide a connection between the various subproblems in the search tree and present further difficulties in designing scalable algorithms. Finally, we will discuss the BiCePS Linear Integer Solver (BLIS), a concretization of BiCePS, in which linear programming is used to obtain bounds in each search tree node.
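One standard device for the node-storage problem in data-intensive tree search is differencing: store each node only as a delta against its parent and rebuild the full description on demand. A minimal sketch of the idea (our own illustration; ALPS's actual scheme is more elaborate):

```python
class Node:
    """Search-tree node stored as a delta against its parent."""
    def __init__(self, parent=None, delta=None):
        self.parent = parent
        self.delta = delta or {}      # only the variable bounds this node changed

    def bounds(self):
        # reconstruct the full description by replaying deltas root-to-leaf
        full = {} if self.parent is None else self.parent.bounds()
        full.update(self.delta)
        return full

root = Node(delta={"x1": (0, 10), "x2": (0, 10)})
child = Node(root, {"x1": (0, 4)})        # branching tightened one bound
leaf = Node(child, {"x2": (5, 10)})
print(leaf.bounds())                      # {'x1': (0, 4), 'x2': (5, 10)}
```

Each node then costs storage proportional to what branching changed, not to the full problem description, at the price of a root-to-leaf walk when the description is needed.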
ALPS: A framework for implementing parallel search algorithms
In Proceedings of the Ninth INFORMS Computing Society Conference, 2005
Cited by 16 (3 self)
ALPS is a framework for implementing and parallelizing tree search algorithms. It employs a number of features to improve scalability and is designed specifically to support the implementation of data intensive algorithms, in which large amounts of knowledge are generated and must be maintained and shared during the search. Implementing such algorithms in a scalable manner is challenging both because of storage requirements and because of communications overhead incurred in the sharing of data. In this abstract, we describe the design of ALPS and how the design addresses these challenges. We present two sample applications built with ALPS and preliminary computational results.
Parallel Branch and Cut for Capacitated Vehicle Routing
2002
Cited by 12 (2 self)
Combinatorial optimization problems arise commonly in logistics applications. The most successful approaches to date for solving such problems involve modeling them as integer programs and then applying some variant of the branch and bound algorithm. Although branch and bound is conceptually easy to parallelize, achieving scalability can be a challenge. In more sophisticated variants, such as branch and cut, large amounts of data must be shared among the processors, resulting in increased parallel overhead. In this paper, we review the branch and cut algorithm for solving combinatorial optimization problems and describe the implementation of SYMPHONY, a library for implementing these algorithms in parallel. We then describe a solver for the vehicle routing problem that was implemented using SYMPHONY and analyze its parallel performance on a Beowulf cluster.
The SYMPHONY callable library for mixed integer programming
In The Proceedings of the Ninth Conference of the INFORMS Computing Society, 2005
Cited by 8 (1 self)
SYMPHONY is a customizable, open-source library for solving mixed-integer linear programs (MILP) by branch, cut, and price. With its large assortment of parameter settings, user callback functions, and compile-time options, SYMPHONY can be configured as a generic MILP solver or an engine for solving difficult MILPs by means of a fully customized algorithm. SYMPHONY can run on a variety of architectures, including single-processor, distributed-memory parallel, and shared-memory parallel architectures under MS Windows, Linux, and other Unix operating systems. The latest version is implemented as a callable library that can be accessed either through calls to the native C application program interface, or through a C++ interface class derived from the COIN-OR Open Solver Interface. Among its new features are the ability to solve bicriteria MILPs, the ability to stop and warm start MILP computations after modifying parameters or problem data, the ability to create persistent cut pools, and the ability to perform rudimentary sensitivity analysis on MILPs.
Reformulation and Sampling to Solve a Stochastic Network Interdiction Problem, to appear, Networks
2008
Cited by 6 (0 self)
The Network Interdiction Problem involves interrupting an adversary’s ability to maximize flow through a capacitated network by destroying portions of the network. A budget constraint limits the amount of the network that can be destroyed. In this paper, we study a stochastic version of the network interdiction problem in which the successful destruction of an arc of the network is a Bernoulli random variable, and the objective is to minimize the maximum expected flow of the adversary. Using duality and linearization techniques, an equivalent deterministic mixed integer program is formulated. The structure of the reformulation allows for the application of decomposition techniques for its solution. Using a parallel algorithm designed to run on a distributed computing platform known as a computational grid, we give computational results showing the efficacy of a sampling-based approach to solving the problem.
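For intuition, the stochastic objective can be evaluated exactly on a toy instance by brute force: enumerate interdiction sets within budget, then average the max flow over the Bernoulli success outcomes. This is our own illustration only (the paper solves the problem via reformulation, decomposition, and sampling); the network data and success probability are invented:

```python
from collections import defaultdict, deque
from itertools import combinations, product

def max_flow(arcs, s, t):
    """Edmonds-Karp max flow on a small directed graph."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in arcs:
        cap[(u, v)] += c
        adj[u].add(v); adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q:                        # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(cap[e] for e in path)
        for u, v in path:               # augment along the path
            cap[(u, v)] -= push; cap[(v, u)] += push
        flow += push

arcs = [(0, 1, 4), (1, 3, 3), (0, 2, 2), (2, 3, 2)]   # toy network, s=0, t=3
p, budget = 0.7, 1            # per-arc destruction success probability, attack budget

best = (float("inf"), ())
for k in range(budget + 1):
    for attack in combinations(range(len(arcs)), k):
        ev = 0.0
        for outcome in product((True, False), repeat=k):   # Bernoulli outcomes
            prob = 1.0
            removed = set()
            for idx, success in zip(attack, outcome):
                prob *= p if success else 1 - p
                if success:
                    removed.add(idx)
            survivors = [a for i, a in enumerate(arcs) if i not in removed]
            ev += prob * max_flow(survivors, 0, 3)
        best = min(best, (ev, attack))
# best interdiction: attack arc 0 (0->1); E[max flow] = 0.7*2 + 0.3*5 = 2.9
print(best)
```

The exponential enumeration over outcomes is exactly what makes sampling attractive at realistic scale, and the min-max structure is what the paper's duality-based reformulation removes.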
The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers
2000
Cited by 4 (0 self)
In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous and mixed-integer space. The constrained problems are general in the sense that they do not assume the differentiability or convexity of functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred due to such discretization and have proved that there exist upper bounds on the errors. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.
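The alternating first-order search can be sketched on a toy discrete problem: descend on the Lagrangian in the variable space, and raise the multiplier when stuck at an infeasible local minimum. This is our own illustration of the general idea, not the thesis's DLM code, and the problem data are invented:

```python
def dlm(f, viol, neighbors, x0, max_iter=1000):
    """Discrete Lagrangian search: alternate descent on L(x) = f(x) + lam*viol(x)."""
    x, lam = x0, 0.0
    L = lambda y: f(y) + lam * viol(y)
    for _ in range(max_iter):
        best = min(neighbors(x), key=L)
        if L(best) < L(x):
            x = best              # first-order descent in the variable space
        elif viol(x) > 0:
            lam += viol(x)        # ascent in the multiplier space
        else:
            return x, lam         # feasible local minimum of L: stop
    return x, lam

# toy problem: minimize (x1-4)^2 + (x2-4)^2 over {0..6}^2  s.t.  x1 + 2*x2 = 7
f = lambda x: (x[0] - 4) ** 2 + (x[1] - 4) ** 2
viol = lambda x: abs(x[0] + 2 * x[1] - 7)     # constraint violation measure

def neighbors(x):
    out = []
    for i in range(2):
        for d in (-1, 1):
            y = list(x)
            y[i] += d
            if 0 <= y[i] <= 6:
                out.append(tuple(y))
    return out

x, lam = dlm(f, viol, neighbors, (0, 0))
print(x)   # (3, 2), the constrained optimum
```

Note that neither differentiability nor convexity is used anywhere: the search only needs function values on a discrete neighborhood, which is the generality the abstract emphasizes.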
DryadOpt: Branch-and-Bound on Distributed Data-Parallel Execution Engines
"... Abstract—We introduce DryadOpt, a library that enables massively parallel and distributed execution of optimization algorithms for solving hard problems. DryadOpt performs an exhaustive search of the solution space using branchandbound, by recursively splitting the original problem into many simpl ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
We introduce DryadOpt, a library that enables massively parallel and distributed execution of optimization algorithms for solving hard problems. DryadOpt performs an exhaustive search of the solution space using branch-and-bound, by recursively splitting the original problem into many simpler subproblems. It uses both parallelism (at the core level) and distributed execution (at the machine level). DryadOpt provides a simple yet powerful interface to its users, who only need to implement sequential code to process individual subproblems (either by solving them in full or generating new subproblems). The parallelism and distribution are handled automatically by DryadOpt, and are invisible to the user. The distinctive feature of our system is that it is implemented on top of DryadLINQ, a distributed data-parallel execution engine similar to Hadoop and MapReduce. Despite the fact that these engines offer a constrained application model, with restricted communication patterns, our experiments show that careful design choices allow DryadOpt to scale linearly with the number of machines, with very little overhead.
Keywords: combinatorial optimization; branch-and-bound; distributed computation; Dryad; distributed data-parallel execution engines
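The user-facing contract described above, sequential split and solve callbacks with the engine fanning subproblems out, can be mimicked in a few lines. This is a sketch of the programming model only, with our own names and an n-queens example; it is not the DryadOpt API, and in the real system the final map is what DryadLINQ distributes across machines:

```python
def children(cols, n):
    """Split: extend a partial n-queens placement by one row."""
    ok = lambda c: all(c != cc and abs(c - cc) != len(cols) - rr
                       for rr, cc in enumerate(cols))
    return [cols + (c,) for c in range(n) if ok(c)]

def solve(cols, n):
    """Solve a subproblem in full: count completions by sequential DFS."""
    if len(cols) == n:
        return 1
    return sum(solve(ch, n) for ch in children(cols, n))

n, depth = 8, 2
frontier = [()]
for _ in range(depth):                    # engine: split recursively until the
    frontier = [ch for sub in frontier    # subproblems are simple enough
                for ch in children(sub, n)]
# engine: this map over independent subproblems is what gets distributed
total = sum(solve(sub, n) for sub in frontier)
print(total)   # 92 solutions for 8 queens
```

The user code is purely sequential; turning the final sum into a distributed reduction is the engine's job, which is why a constrained model like DryadLINQ suffices.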
Grid Enabled Optimization with GAMS
2007
Cited by 3 (2 self)
We describe a framework for modeling optimization problems for solution on a grid computer. The framework is easy to adapt to multiple grid engines, and can seamlessly integrate evolving mechanisms from particular computing platforms. It facilitates the widely used master/worker model of computing and is shown to be flexible and powerful enough for a large variety of optimization applications. In particular, we summarize a number of new features of the GAMS modeling system that provide a lightweight, portable and powerful framework for optimization on a grid. We provide downloadable examples of its use for embarrassingly parallel financial applications, decomposition and iterative algorithms and for solving very difficult mixed integer programs to optimality. Computational results are provided for a number of different grid engines, including multicore machines, a pool of machines controlled by the Condor resource manager and the grid engine from Sun Microsystems.
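The master/worker pattern itself is compact: a master farms independent scenario evaluations out to workers and gathers the results. The sketch below is generic and uses invented financial-style data; it is not the GAMS grid facility, and a real deployment would submit worker jobs through Condor or a similar resource manager rather than a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(rate):
    """One independent task: present value of a fixed cash-flow stream."""
    cash_flows = [100.0] * 5
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# master: one scenario per task, farmed out and collected in order
rates = [0.01 * k for k in range(1, 9)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(worker, rates))

best_pv, best_rate = max(zip(results, rates))
print(best_rate)   # 0.01: the lowest discount rate gives the highest PV
```

Because the tasks share nothing, this is the embarrassingly parallel case the paper's financial examples illustrate; decomposition and iterative algorithms add a coordination step at the master between rounds of worker calls.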