Results 1 - 10 of 102
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
2007
Cited by 39 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
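A minimal sketch of the outer Augmented Lagrangian loop described above, on an assumed toy problem (min (x0-1)² + (x1-2)² s.t. x0 + x1 = 2 over a box). The paper requires εk-global minimization of each subproblem, e.g. via αBB; a local solver stands in here purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: np.array([x[0] + x[1] - 2.0])          # equality constraint
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

def aug_lagrangian(x, lam, rho):
    """L_rho(x, lam) = f(x) + lam.h(x) + (rho/2)||h(x)||^2."""
    hx = h(x)
    return f(x) + lam @ hx + 0.5 * rho * (hx @ hx)

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for k in range(20):                      # outer iterations
    # Subproblem: minimize the Augmented Lagrangian under simple
    # (box) constraints only; the paper does this eps_k-globally.
    res = minimize(aug_lagrangian, x, args=(lam, rho), bounds=bounds)
    x = res.x
    if np.linalg.norm(h(x)) < 1e-8:
        break
    lam = lam + rho * h(x)               # first-order multiplier update
    rho *= 2.0                           # tighten penalty while infeasible
print(x)                                 # approaches the minimizer (0.5, 1.5)
```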
A Comparison of Complete Global Optimization Solvers
Cited by 28 (3 self)
Results are reported from testing a number of existing state-of-the-art solvers for global constrained optimization and constraint satisfaction on a set of over 1000 test problems in up to 1000 variables.
Efficient and safe global constraints for handling numerical constraint systems
- SIAM J. Numer. Anal., 2005
Cited by 25 (9 self)
Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver, a new branch and prune algorithm that combines Quad, local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers and it compares well with state-of-the-art optimization solvers.
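The branch-and-prune scheme the abstract describes can be sketched as follows, on an assumed toy system (x² + y² = 1, x = y). Real solvers such as QuadSolver add tight linear relaxations and local consistencies; here pruning is by naive interval evaluation only.

```python
def isq(lo, hi):
    """Interval square [lo, hi]^2."""
    if lo >= 0: return (lo * lo, hi * hi)
    if hi <= 0: return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def prune(box):
    """Return False if the box provably contains no solution."""
    (xl, xu), (yl, yu) = box
    sl = isq(xl, xu)[0] + isq(yl, yu)[0]
    su = isq(xl, xu)[1] + isq(yl, yu)[1]
    if su < 1.0 or sl > 1.0:      # x^2 + y^2 = 1 infeasible on the box
        return False
    if xu < yl or yu < xl:        # x = y infeasible on the box
        return False
    return True

def solve(box, eps=1e-6, out=None):
    if out is None: out = []
    if not prune(box): return out
    (xl, xu), (yl, yu) = box
    if xu - xl < eps and yu - yl < eps:
        out.append(box)           # tiny box: report as candidate solution
        return out
    if xu - xl >= yu - yl:        # branch: bisect the widest variable
        m = 0.5 * (xl + xu)
        solve(((xl, m), (yl, yu)), eps, out)
        solve(((m, xu), (yl, yu)), eps, out)
    else:
        m = 0.5 * (yl + yu)
        solve(((xl, xu), (yl, m)), eps, out)
        solve(((xl, xu), (m, yu)), eps, out)
    return out

boxes = solve(((-2.0, 2.0), (-2.0, 2.0)))
# Surviving boxes cluster around the two solutions (±1/√2, ±1/√2).
```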
Generalized conflict learning for hybrid discrete/linear optimization
- In CP-2005
Cited by 21 (11 self)
Conflict-directed search algorithms have formed the core of practical, model-based reasoning systems for the last three decades. At the core of many of these applications is a series of discrete constraint optimization problems and a conflict-directed search algorithm, which uses conflicts in the forward search step to focus search away from known infeasibilities and towards the optimal feasible solution. In the arena of model-based autonomy, deep space probes have given way to more agile vehicles, such as coordinated teams of air vehicles, which must robustly control their continuous dynamics. Controlling these systems requires optimizing over continuous as well as discrete variables, using linear as well as logical constraints. This paper explores the development of algorithms for solving hybrid discrete/linear optimization problems that use conflicts in the forward search direction, an idea carried over from conflict-directed search in model-based reasoning. We introduce a novel algorithm called Generalized Conflict-Directed Branch and Bound (GCD-BB). GCD-BB extends traditional Branch and Bound (B&B) by first constructing conflicts from nodes of the search tree that are found to be infeasible or suboptimal, and then using these conflicts to guide the forward search away from known infeasible and suboptimal states. Evaluated empirically on a range of test problems of coordinated air vehicle control, GCD-BB demonstrates a substantial improvement in performance compared to a traditional B&B algorithm applied to either disjunctive linear programs or an equivalent binary integer programming encoding.
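The conflict-learning idea in B&B can be sketched on an assumed toy 0/1 problem (maximize x0 + x1 + x2 subject to pairwise constraints 2xi + 2xj ≤ 3). GCD-BB itself targets general hybrid discrete/linear problems and extracts minimal conflicts; this sketch only shows learning infeasible assignment sets and skipping nodes that extend them.

```python
from scipy.optimize import linprog

c = [-1.0, -1.0, -1.0]                  # linprog minimizes, so negate
A = [[2, 2, 0], [0, 2, 2], [2, 0, 2]]
b = [3, 3, 3]
n = 3

conflicts = []                          # learned infeasible assignment sets
best = {"obj": 0.0, "x": None}

def search(fixed):
    items = set(fixed.items())
    if any(conf <= items for conf in conflicts):
        return                          # extends a known conflict: skip node
    box = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(n)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=box)
    if not res.success:                 # LP infeasible: learn a conflict
        conflicts.append(items)         # (a real solver would minimize it)
        return
    if -res.fun <= best["obj"] + 1e-9:  # relaxation bound: prune
        return
    frac = [i for i in range(n)
            if i not in fixed and min(res.x[i], 1 - res.x[i]) > 1e-6]
    if not frac:                        # integral solution: new incumbent
        best["obj"], best["x"] = -res.fun, [round(v) for v in res.x]
        return
    for v in (1, 0):                    # branch on a fractional variable
        search({**fixed, frac[0]: v})

search({})
print(best)                             # optimum sets exactly one variable to 1
```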
Evolutionary reinforcement learning of artificial neural networks
- International Journal of Hybrid Intelligent Systems, 2007
Cited by 19 (2 self)
In this article we describe EANT2, Evolutionary …
Global optimization in the 21st century: Advances and challenges
2005
Cited by 18 (3 self)
This paper presents an overview of the research progress in global optimization during the last 5 years (1998–2003), and a brief account of our recent research contributions. The review part covers the areas of (a) twice continuously differentiable nonlinear optimization, (b) mixed-integer nonlinear optimization, (c) optimization with differential-algebraic models, (d) optimization with grey-box/black-box/nonfactorable models, and (e) bilevel nonlinear optimization. Our research contributions part focuses on (i) improved convex underestimation approaches that include convex envelope results for multilinear functions, convex relaxation results for trigonometric functions, and a piecewise quadratic convex underestimator for twice continuously differentiable functions, and (ii) the recently proposed novel generalized αBB framework. Computational studies illustrate the potential of these advances.
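The αBB-style convex underestimation mentioned above can be illustrated on an assumed univariate example: L(x) = f(x) + α(xL − x)(xU − x) with α ≥ max(0, −min f″/2) on [xL, xU] is convex, underestimates f, and matches it at the endpoints.

```python
import numpy as np

f = lambda x: np.sin(3 * x)             # nonconvex on [0, 2]
xL, xU = 0.0, 2.0
# f''(x) = -9 sin(3x), so f'' >= -9 on the box; alpha = 9/2 suffices.
alpha = 4.5
L = lambda x: f(x) + alpha * (xL - x) * (xU - x)

xs = np.linspace(xL, xU, 1001)
assert np.all(L(xs) <= f(xs) + 1e-12)        # underestimates f everywhere
assert np.all(np.diff(L(xs), 2) >= -1e-9)    # L'' = f'' + 2*alpha >= 0: convex
```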
Aggregating risk capital, with an application to operational risk
- The Geneva Risk and Insurance Review, 2006
Cited by 16 (11 self)
We describe a numerical procedure to obtain bounds on the distribution function of a sum of n dependent risks having fixed marginals. With respect to the existing literature, our method provides improved bounds and can also be applied to large non-homogeneous portfolios of risks. As an application, we compute the VaR-based minimum capital requirement for a portfolio of operational risk losses.
Key words: risk aggregation, dependency bounds, operational risk, mass transportation duality theorem, global optimization
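Why bounds over all dependence structures matter can be seen in an assumed toy simulation: with the same Exp(1) marginals, the VaR of the sum changes sharply between comonotonic and countermonotonic couplings. (This only motivates the problem; the paper computes rigorous bounds over all couplings.)

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
x = -np.log(u)                 # Exp(1) marginal via inverse transform
y_co = -np.log(u)              # comonotonic coupling: same uniform driver
y_anti = -np.log(1 - u)        # countermonotonic coupling
var = lambda s, a=0.99: np.quantile(s, a)
print(var(x + y_co), var(x + y_anti))   # comonotonic VaR is much larger
```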
Rigorous error bounds for the optimal value in semidefinite programming
- SIAM J. Numer. Anal.
Cited by 13 (4 self)
A wide variety of problems in global optimization and combinatorial optimization, as well as in systems and control theory, can be solved by using linear and semidefinite programming. Sometimes, due to the use of floating-point arithmetic in combination with ill-conditioning and degeneracy, erroneous results may be produced. The purpose of this article is to show how rigorous error bounds for the optimal value can be computed by carefully postprocessing the output of a linear or semidefinite programming solver. It turns out that in many cases the computational costs for postprocessing are small compared to the effort required by the solver. Numerical results are presented, including problems from the SDPLIB and the NETLIB LP library; these libraries contain many ill-conditioned and real-life problems.
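The LP case of this postprocessing idea can be sketched on assumed data. For min cᵀx s.t. Ax ≥ b, 0 ≤ x ≤ u, weak duality gives, for any y ≥ 0, the rigorous lower bound bᵀy − Σⱼ max(0, (Aᵀy − c)ⱼ)·uⱼ, so an approximate dual solution from a floating-point solver can be turned into a certified bound. Exact rational arithmetic stands in here for the directed rounding used in the paper.

```python
from fractions import Fraction as F

A = [[2, 1], [1, 3]]                # min x0 + x1  s.t.  2x0+x1 >= 1, x0+3x1 >= 1
b, c, u = [1, 1], [1, 1], [1, 1]    # true optimum: x* = (0.4, 0.2), value 0.6

y_approx = [0.40000004, 0.20000003]          # as returned by an inexact solver
y = [max(F(v), F(0)) for v in y_approx]      # clip to y >= 0, exactly
resid = [sum(F(A[i][j]) * y[i] for i in range(2)) - F(c[j]) for j in range(2)]
lb = sum(F(b[i]) * y[i] for i in range(2)) - \
     sum(max(r, F(0)) * F(u[j]) for j, r in enumerate(resid))
print(float(lb))                    # certified lower bound, just below 0.6
```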
Automated hierarchy discovery for planning in partially observable domains
- Advances in Neural Information Processing Systems 19, 2006
Cited by 12 (2 self)