Results 1–10 of 17
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
Cited by 21 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
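The outer iteration this abstract describes can be illustrated with a minimal single-variable sketch. The subproblem solver below is a naive grid search standing in for the paper's εk-global minimization (performed there with the αBB method); the test problem, function names, and parameter values are illustrative assumptions, not taken from the paper.

```python
# Augmented Lagrangian outer loop for min f(x) s.t. h(x) = 0 on a line,
# with a crude grid search standing in for the epsilon_k-global subproblem solver.
import numpy as np

def solve_subproblem(L, center, radius=2.0, n=2001):
    # Stand-in global minimizer of L over [center - radius, center + radius].
    grid = np.linspace(center - radius, center + radius, n)
    return grid[int(np.argmin([L(g) for g in grid]))]

def augmented_lagrangian(f, h, x0, rho=10.0, iters=20):
    x, lam = x0, 0.0
    for _ in range(iters):
        def L(z, lam=lam):
            c = h(z)
            return f(z) + lam * c + 0.5 * rho * c * c  # augmented Lagrangian
        x = solve_subproblem(L, x)
        lam += rho * h(x)  # first-order multiplier update
    return x

# Toy problem: minimize x^2 subject to x = 1; the constrained optimum is x = 1.
print(augmented_lagrangian(lambda t: t * t, lambda t: t - 1.0, x0=0.0))
```

Each outer iteration globally minimizes the augmented Lagrangian for the current multiplier, then updates the multiplier from the constraint violation, mirroring the εk-global subproblem structure in the abstract.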
Evaluating ASP and commercial solvers on the CSPLib
 In Proceedings of the Seventeenth European Conference on Artificial Intelligence (ECAI 2006)
, 2006
Cited by 5 (2 self)
This paper deals with three solvers for combinatorial problems: the commercial state-of-the-art solver Ilog OPL, and the research ASP systems DLV and SMODELS. The first goal of this research is to evaluate the relative performance of such systems, using a reproducible and extensible experimental methodology. In particular, we consider a third-party problem library, i.e., the CSPLib, and uniform rules for modelling and selecting instances. The second goal is to analyze the effects of a popular reformulation technique, i.e., symmetry breaking, and the impact of other modelling aspects, like global constraints and auxiliary predicates. Results show that there is no single solver winning on all problems, and that reformulation is almost always beneficial: symmetry breaking may be a good choice, but its complexity has to be chosen carefully, taking into account the particular solver used. Global constraints often, but not always, help OPL, and the addition of auxiliary predicates is usually worthwhile, especially when dealing with ASP solvers. Moreover, interesting synergies among the various modelling techniques exist.
Software Quality Assurance for Mathematical Modeling Systems, forthcoming
 GAMS Development Corporation (2004), GAMS: The Solver Manuals, GAMS Development Corporation
, 2004
Cited by 3 (2 self)
With increasing importance placed on standard quality assurance methodologies by large companies and government organizations, many software companies have implemented rigorous quality assurance (QA) processes to ensure that these standards are met. The use of standard QA methodologies cuts maintenance costs, increases reliability, and reduces cycle time for new distributions. Modeling systems differ from most software systems in that a model may fail to solve to optimality without the modeling system being defective. This additional level of complexity requires specific QA activities. To make software quality assurance (SQA) more cost-effective, the focus is on reproducible and automated techniques. In this paper we describe some of the main SQA methodologies as applied to modeling systems. In particular, we focus on configuration management, quality control, and testing as they are handled in the GAMS build framework, emphasizing reproducibility, automation, and an open-source public-domain framework.
An interval partitioning approach for continuous constrained optimization
 Models and Algorithms in Global Optimization
, 2006
Cited by 3 (3 self)
Constrained Optimization Problems (COPs) are encountered in many scientific fields concerned with industrial applications such as kinematics, chemical process optimization, molecular design, etc. When nonlinear relationships among variables are defined by problem constraints, resulting in nonconvex feasible sets, the problem of identifying feasible solutions may become very hard. Consequently, locating the global optimum of a COP is more difficult than in bound-constrained global optimization problems. This chapter proposes a new interval partitioning method for solving the COP. The proposed approach involves a new subdivision direction selection method as well as an adaptive search tree framework where nodes (boxes defining different variable domains) are explored using a restricted hybrid depth-first and best-first branching strategy. This hybrid approach is also used for activating local search in boxes with the aim of identifying different feasible stationary points. The proposed search tree management approach improves the convergence speed of the interval partitioning method, which is also supported by the new parallel subdivision direction selection rule.
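A minimal sketch of the best-first half of such a branching strategy, on the toy problem of minimizing x² + y² over a box. The interval lower bound, bisection rule, and tolerance here are simplified assumptions for illustration, not the chapter's actual method (which hybridizes depth-first and best-first exploration and adds local search).

```python
# A toy best-first loop: minimize f(x, y) = x^2 + y^2 over a box, bisecting
# the widest coordinate of the most promising box (smallest lower bound).
import heapq

def lower_bound(box):
    # Naive interval lower bound of x^2 + y^2 over the box.
    return sum(0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
               for lo, hi in box)

def upper_bound(box):
    # f at the box midpoint: a valid upper bound on the global minimum.
    return sum(((lo + hi) / 2.0) ** 2 for lo, hi in box)

def best_first_min(box, tol=1e-6):
    heap = [(lower_bound(box), box)]
    best = upper_bound(box)
    while heap:
        lb, box = heapq.heappop(heap)
        if lb > best - tol:   # no remaining box can improve the incumbent
            break
        best = min(best, upper_bound(box))
        i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
        lo, hi = box[i]
        mid = (lo + hi) / 2.0
        for half in ((lo, mid), (mid, hi)):
            child = box[:i] + (half,) + box[i + 1:]
            heapq.heappush(heap, (lower_bound(child), child))
    return best

print(best_first_min(((0.5, 2.0), (0.5, 2.0))))  # close to 0.5, attained at (0.5, 0.5)
```

The priority queue always expands the box with the smallest lower bound; a depth-first component, as in the chapter, would instead keep a stack of recently created boxes and fall back to the queue periodically.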
The Optimization Test Environment
Cited by 3 (3 self)
Testing is a crucial part of software development in general, and hence also in mathematical programming. Unfortunately, it is often a time-consuming and unexciting activity. This naturally motivated us to increase the efficiency of testing solvers for optimization problems and to automate as much of the procedure as possible. Keywords: test environment, optimization, solver benchmarking, solver comparison. The testing procedure typically consists of three basic tasks: a) organize test problem sets, also called test libraries; b) solve selected test problems with selected solvers; c) analyze, check and compare the results. The Test Environment is a graphical user interface (GUI) that enables the user to manage tasks a) and b) interactively, and task c) automatically. The Test Environment is particularly designed for users who seek to 1. adjust solver parameters, or 2. compare solvers on single problems, or 3. evaluate solvers on suitable test sets.
Comparison and Automated Selection of Local Optimization Solvers for Interval Global Optimization Methods
Cited by 1 (1 self)
We compare six state-of-the-art local optimization solvers with a focus on their efficiency when invoked within an interval-based global optimization algorithm. For comparison purposes we design three special performance indicators: a solution check indicator (measuring whether the local minimizers found are good candidates for near-optimal verified feasible points), a function value indicator (measuring the contribution to the progress of the global search), and a running time indicator (estimating the computational cost of the local search within the global search). The solvers are compared on the COCONUT Environment test set consisting of 1307 problems. Our main target is to predict the behavior of the solvers in terms of the three performance indicators on a new problem. For this we introduce a k-nearest neighbor method applied over a feature space consisting of several categorical and numerical features of the optimization problems. The quality and robustness of the prediction is demonstrated by various quality measurements with detailed comparative tests. In particular, we found that on the test set we are able to pick a ‘best’ solver in 66–89% of the cases and avoid picking all ‘useless’ solvers in 95–99% of the cases (when a useful alternative exists). The resulting automated solver selection method is implemented as an inference engine of the COCONUT Environment.
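The k-nearest-neighbor idea behind this kind of solver selection can be sketched as follows. The feature vectors, solver names, and distance metric below are illustrative assumptions, not the paper's COCONUT feature set or its actual indicators.

```python
# k-nearest-neighbor solver recommendation: each problem is a feature vector,
# and the suggested solver is the one that won on most of the k most similar
# training problems.
import numpy as np
from collections import Counter

def recommend_solver(features, train_features, train_best, k=3):
    # Majority vote over the k training problems nearest in feature space.
    d = np.linalg.norm(train_features - features, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_best[i] for i in nearest).most_common(1)[0][0]

# Toy training data: (n_vars, n_constraints, nonlinearity) per problem,
# paired with the local solver that performed best on it (names made up).
train_X = np.array([[10, 5, 0.1], [12, 6, 0.2], [200, 80, 0.9],
                    [180, 90, 0.8], [15, 4, 0.15]], dtype=float)
train_y = ["solverA", "solverA", "solverB", "solverB", "solverA"]

print(recommend_solver(np.array([11.0, 5.0, 0.12]), train_X, train_y))  # solverA
```

In practice, categorical features (as the paper uses) need an encoding and numerical features a scaling before distances are meaningful; both are omitted here.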
The Optimization Test Environment: User Manual
, 2010
The Test Environment is an interface for efficiently testing different optimization solvers. It is designed as a tool both for developers of solver software and for practitioners who simply look for the best solver for their specific problem class. It enables users to:
• Choose and compare diverse solver routines;
• Organize and solve large test problem sets;
• Interactively select subsets of test problem sets;
• Perform a statistical analysis of the results, automatically produced as LaTeX and PDF output.
The Test Environment is free to use for research purposes.
Enhancing a Genetic Algorithm by a Complete Solution Archive Based on a Trie Data Structure (diploma thesis; Diplomarbeit for the academic degree of Diplom-Ingenieur/in)
Many parameters and improvements have been designed to solve special problems. However, it is difficult to find techniques that can be used universally. In my thesis, I describe a mechanism that should improve the ability of any genetic algorithm to find a better solution: a complete solution archive based on a trie data structure. The idea of the archive is to efficiently store all visited solutions, avoid revisits, and provide a good and intelligent mechanism for transforming an already visited solution into a similar unvisited one. The genetic algorithm can be seen as a separate module which generates solutions in a specific way. Every created solution is forwarded to the trie. When the trie receives a solution, it checks whether it is already included in the archive. If the solution is not in the archive, it is simply inserted into the trie. If, on the other hand, the solution is already in the trie, a revisit has occurred. A revisit can be handled in several ways; it is important to find a good balance between the quality of the changed solution and the effort needed to change it. After inserting or altering a solution, it is sent back to the genetic algorithm module and then handled as usual. This thesis presents the implemented algorithms and data structures. The archive is tested on three problems: the Royal Road function, the NK landscapes problem, and the MAXSAT problem. The results of the standard genetic algorithm are compared to the algorithms that use the archive. The results show that in many cases the archive contributes to the quality of the solutions.
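The revisit-detection core of such a trie archive can be sketched briefly. This assumes fixed-length bit-string genomes; the thesis's strategies for converting a revisited solution into a similar unvisited one are omitted here.

```python
class TrieArchive:
    """Binary trie over fixed-length bit-string genomes; detects revisits in
    O(genome length) time by checking whether insertion created any new node."""

    def __init__(self):
        self.root = {}

    def insert(self, bits):
        """Insert a bit string; return True if it was already archived."""
        node = self.root
        revisit = True
        for b in bits:
            if b not in node:
                node[b] = {}   # new branch: this solution was unvisited
                revisit = False
            node = node[b]
        return revisit

archive = TrieArchive()
print(archive.insert("0110"))  # False: first visit, solution archived
print(archive.insert("0110"))  # True: revisit detected
print(archive.insert("0111"))  # False: differs in the last bit
```

Because every visited solution shares its prefix path with similar solutions, the trie also makes it cheap to walk to a nearby unexplored branch when a revisit must be repaired.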
and
The optimum and at least one optimizing point for convex nonlinear programs can be approximated well by the solution to a linear program (a fact long used in branch and bound algorithms). In more general problems, we can identify subspaces of “nonconvex variables” such that, if these variables have sufficiently small ranges, the optimum and at least one optimizing point can be approximated well by the solution of a single linear program. If these subspaces are low-dimensional, this suggests subdividing the variables in the subspace a priori, then producing and solving a fixed, known number of linear programs to obtain an approximation to the solution. The total amount of computation is much more predictable than that required to complete a branch and bound algorithm, and the scheme is “embarrassingly parallel,” with little need for either communication or load balancing. We compare such a non-adaptive scheme experimentally to our GlobSol branch and bound implementation, on those problems from the COCONUT project Lib1 test set with nonconvex subspaces of dimension 4 or less, and we discuss potential alterations to both the non-adaptive scheme and our branch and bound process that might change the scope of applicability. AMS Subject Classification: 90C26; 49M20.
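The a priori subdivision step is easy to sketch: splitting each of d nonconvex-variable ranges into m pieces yields exactly m**d sub-boxes, all known before any linear program is solved. The helper name and the values below are illustrative assumptions; the LP relaxation solved on each box is left out.

```python
# Non-adaptive subdivision: the number of subproblems is fixed up front,
# and each sub-box is independent (hence "embarrassingly parallel").
from itertools import product

def a_priori_boxes(bounds, m):
    """Split each (lo, hi) range into m equal pieces; return all m**d sub-boxes."""
    pieces = []
    for lo, hi in bounds:
        step = (hi - lo) / m
        pieces.append([(lo + i * step, lo + (i + 1) * step) for i in range(m)])
    return list(product(*pieces))

# Two nonconvex variables, each split 4 ways: 4**2 = 16 sub-boxes, each of
# which could be handed to its own LP relaxation on a separate worker.
boxes = a_priori_boxes([(-1.0, 1.0), (0.0, 2.0)], m=4)
print(len(boxes))  # 16
```

This is what makes the total work predictable: unlike branch and bound, no box spawns further boxes during the run.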
A General Framework for Convexity Analysis and an Alternative to Branch and Bound in Deterministic Global Optimization
To date, complete search in deterministic global optimization has been based on branch and bound techniques, with the bounding often done with linear or convex relaxations of the original nonconvex problem. Here, we present an alternative, inspired by talks of Ch. Floudas. In this alternative, a set of nonconvex variables, chosen from the intermediate variables in the expressions for the objective and constraints, is first identified. The intervals corresponding to these variables are then subdivided a priori, and the total number of subregions to be examined is known beforehand. The algorithm is designed to provide bounds on the global optimum and at least one global optimizer, with an accuracy determined a posteriori. Advantages include simplicity (less overhead) as well as easy parallelization (since the subproblems to be solved are known beforehand and are independent). Furthermore, the number of nonconvex variables to be subdivided with the new techniques in this paper can be considerably smaller than the number identified with schemes from previous work. Identification of the set of nonconvex variables can be considered a preprocessing step. This preprocessing, done in a much smaller amount of time, reveals beforehand the practicality of using this method to solve a particular problem.