Complete search in continuous global optimization and constraint satisfaction. In: Acta Numerica (A. Iserles, ed.), 2004

by A Neumaier

Results 1 - 10 of 102

Global minimization using an Augmented Lagrangian method with variable lower-level constraints

by Ernesto G. Birgin et al., 2007
Cited by 39 (1 self)
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
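The outer loop this abstract describes can be sketched in a few lines. The test problem, the tolerances, and the dense grid search standing in for the εk-global subproblem solver (the paper uses αBB) are illustrative assumptions, not the authors' implementation:

```python
import itertools

def f(x, y):                        # objective (illustrative problem)
    return (x - 1.0) ** 2 + (y - 1.0) ** 2

def h(x, y):                        # equality constraint h(x, y) = 0
    return x + y - 1.0

def solve_subproblem(lam, rho, n=201):
    """Approximate eps-global minimizer of the Augmented Lagrangian over
    the box [-2, 2]^2, via dense grid search (stand-in for alphaBB)."""
    grid = [-2.0 + 4.0 * i / (n - 1) for i in range(n)]
    best = None
    for x, y in itertools.product(grid, grid):
        L = f(x, y) + lam * h(x, y) + 0.5 * rho * h(x, y) ** 2
        if best is None or L < best[0]:
            best = (L, x, y)
    return best[1], best[2]

lam, rho = 0.0, 10.0                # multiplier and penalty parameter
for k in range(20):                 # outer iterations
    x, y = solve_subproblem(lam, rho)
    if abs(h(x, y)) < 1e-6:         # feasible enough: stop
        break
    lam += rho * h(x, y)            # first-order multiplier update
    rho *= 2.0                      # tighten the penalty
# x, y now approximate the constrained minimizer (0.5, 0.5)
```

The global convergence result in the abstract rests on each subproblem being solved to εk-global optimality; replacing the grid search with a local solver would lose that guarantee.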

A Comparison of Complete Global Optimization Solvers

by Arnold Neumaier, Oleg Shcherbina, Waltraud Huyer, Tamás Vinkó
Cited by 28 (3 self)
Results are reported of testing a number of existing state-of-the-art solvers for global constrained optimization and constraint satisfaction on a set of over 1000 test problems in up to 1000 variables.

Efficient and safe global constraints for handling numerical constraint systems

by Yahia Lebbah, Claude Michel, Michel Rueher, David Daney, Jean-Pierre Merlet - SIAM J. Numer. Anal., 2005
Cited by 25 (9 self)
Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver, a new branch and prune algorithm that combines Quad, local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers and it compares well with state-of-the-art optimization solvers.
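A stripped-down branch-and-prune loop of the kind this abstract builds on can be sketched as follows. It uses only splitting plus a naive interval evaluation — the Quad relaxation and local consistencies are omitted — and the equation being solved is an illustrative assumption:

```python
def f_range(lo, hi):
    """Interval enclosure of f(x) = x**3 - x - 2 on [lo, hi], treating the
    two occurrences of x independently (a valid, if loose, enclosure)."""
    return (lo ** 3 - hi - 2.0, hi ** 3 - lo - 2.0)

def branch_and_prune(lo, hi, tol=1e-8):
    boxes, solutions = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = f_range(a, b)
        if flo > 0.0 or fhi < 0.0:     # prune: 0 is outside the enclosure
            continue
        if b - a < tol:                # box is small enough: report it
            solutions.append((a, b))
            continue
        m = 0.5 * (a + b)              # split and keep both halves
        boxes += [(a, m), (m, b)]
    return solutions

sols = branch_and_prune(0.0, 3.0)      # encloses the root near x = 1.5214
```

The paper's contribution sits on top of such a loop: a tight and safe linear relaxation (Quad) prunes far more aggressively than the naive enclosure used here.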

Citation Context

…quadratic objective function, fathoming node, and Lagrangian dual problem. Thus, these techniques can be considered as local consistencies for optimization problems (see also [59] and Neumaier's survey [41]). They have proposed a simple and cheap procedure to get a rigorous upper bound of the objective function. The incorporation of these procedu…

Generalized conflict learning for hybrid discrete/linear optimization

by Hui Li, Brian Williams - In CP-2005, 2005
Cited by 21 (11 self)
Abstract. Conflict-directed search algorithms have formed the core of practical, model-based reasoning systems for the last three decades. At the core of many of these applications is a series of discrete constraint optimization problems and a conflict-directed search algorithm, which uses conflicts in the forward search step to focus search away from known infeasibilities and towards the optimal feasible solution. In the arena of model-based autonomy, deep space probes have given way to more agile systems, such as coordinated air vehicles, which must robustly control their continuous dynamics. Controlling these systems requires optimizing over continuous as well as discrete variables, using linear as well as logical constraints. This paper explores the development of algorithms for solving hybrid discrete/linear optimization problems that use conflicts in the forward search direction, carried over from the conflict-directed search algorithms of model-based reasoning. We introduce a novel algorithm called Generalized Conflict-Directed Branch and Bound (GCD-BB). GCD-BB extends traditional Branch and Bound (B&B), first by constructing conflicts from nodes of the search tree that are found to be infeasible or suboptimal, and then by using these conflicts to guide the forward search away from known infeasible and suboptimal states. Evaluated empirically on a range of test problems of coordinated air vehicle control, GCD-BB demonstrates a substantial improvement in performance compared to a traditional B&B algorithm applied either to disjunctive linear programs or to an equivalent binary integer programming encoding.
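The forward use of learned conflicts can be sketched on a toy binary problem. Everything below — the instance, the conflict extraction, and the depth-first expansion — is an illustrative assumption, not the GCD-BB of the paper (in particular, bound-based "suboptimality conflicts" are omitted):

```python
COSTS = [3, 5, 4]                       # minimize the cost of chosen items

def infeasible(assign):
    """Toy constraint: x0 and x2 cannot both be 0. Returns the violating
    partial assignment (the 'conflict') if violated, else None."""
    if assign.get(0) == 0 and assign.get(2) == 0:
        return {0: 0, 2: 0}
    return None

def conflict_directed_bb():
    best_cost, best_sol = float("inf"), None
    conflicts, stack = [], [dict()]
    while stack:
        assign = stack.pop()
        # Forward pruning: skip any node that extends a recorded conflict.
        if any(all(assign.get(k) == v for k, v in conf.items())
               for conf in conflicts):
            continue
        conf = infeasible(assign)
        if conf is not None:
            conflicts.append(conf)      # learn the conflict for reuse
            continue
        if len(assign) == len(COSTS):   # complete assignment: evaluate it
            cost = sum(COSTS[i] for i, v in assign.items() if v == 1)
            if cost < best_cost:
                best_cost, best_sol = cost, assign
            continue
        i = len(assign)                 # branch on the next variable
        for v in (0, 1):
            stack.append({**assign, i: v})
    return best_cost, best_sol

cost, sol = conflict_directed_bb()      # cost 3, via x0 = 1, x1 = x2 = 0
```

Even in this toy run, the learned conflict {x0 = 0, x2 = 0} prunes a later subtree before the constraint is re-checked — the "focus search away from known infeasibilities" idea in miniature.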

Citation Context

…methods for non-linear optimization problems is related to our algorithm in that they guide search for solutions using consistency checking, constraint propagation and approximations. Neumaier's survey [22] studies the utility and complexity of sub-optimality pruning for non-linear optimization.

Evolutionary reinforcement learning of artificial neural networks

by Nils T. Siebel, Gerald Sommer - International Journal of Hybrid Intelligent Systems, 2007
Cited by 19 (2 self)
Abstract. In this article we describe EANT2, Evolutionary …

Citation Context

…optimisation algorithms like gradient descent-type methods is impracticable for large problems. It is known from mathematical optimisation theory that these algorithms tend to get stuck in local minima [20]. They only work well with very simple (e.g., convex) target functions or if an approximate solution is known beforehand. (ibid.) In short, these methods lack generality and can therefore only be used…

Global optimization in the 21st century: Advances and challenges

by C. A. Floudas, I. G. Akrotirianakis, S. Caratzoulas, C. A. Meyer, J. Kallrath, 2005
Cited by 18 (3 self)
This paper presents an overview of the research progress in global optimization during the last 5 years (1998–2003), and a brief account of our recent research contributions. The review part covers the areas of (a) twice continuously differentiable nonlinear optimization, (b) mixed-integer nonlinear optimization, (c) optimization with differential-algebraic models, (d) optimization with grey-box/black-box/nonfactorable models, and (e) bilevel nonlinear optimization. Our research contributions part focuses on (i) improved convex underestimation approaches that include convex envelope results for multilinear functions, convex relaxation results for trigonometric functions, and a piecewise quadratic convex underestimator for twice continuously differentiable functions, and (ii) the recently proposed novel generalized αBB framework. Computational studies illustrate the potential of these advances.

Aggregating risk capital, with an application to operational risk

by Paul Embrechts, Giovanni Puccetti - The Geneva Risk and Insurance Review, 2006
Cited by 16 (11 self)
Abstract. We describe a numerical procedure to obtain bounds on the distribution function of a sum of n dependent risks having fixed marginals. With respect to the existing literature, our method provides improved bounds and can also be applied to large non-homogeneous portfolios of risks. As an application, we compute the VaR-based minimum capital requirement for a portfolio of operational risk losses.
Key words: risk aggregation; dependency bounds; operational risk; mass transportation duality theorem; global optimization

An extended level set method for shape and topology optimization

by S. Y. Wang, K. M. Lim, B. C. Khoo, M. Y. Wang, 2007
Cited by 15 (0 self)
Abstract not found

Rigorous error bounds for the optimal value in semidefinite programming

by Christian Jansson, Denis Chaykin, Christian Keil - SIAM J. Numer. Anal.
Cited by 13 (4 self)
Abstract. A wide variety of problems in global optimization, combinatorial optimization as well as systems and control theory can be solved by using linear and semidefinite programming. Sometimes, due to the use of floating point arithmetic in combination with ill-conditioning and degeneracy, erroneous results may be produced. The purpose of this article is to show how rigorous error bounds for the optimal value can be computed by carefully postprocessing the output of a linear or semidefinite programming solver. It turns out that in many cases the computational costs for postprocessing are small compared to the effort required by the solver. Numerical results are presented including problems from the SDPLIB and the NETLIB LP library; these libraries contain many ill-conditioned and real life problems.
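The postprocessing idea can be illustrated on a tiny linear program: any dual feasible point certifies, by weak duality, a lower bound on the optimal value. Exact rational arithmetic stands in here for the directed-rounding interval arithmetic of the paper, and the LP data, the approximate dual point, and the repair heuristic are all illustrative assumptions:

```python
from fractions import Fraction as F

# Primal LP:  min c^T x  subject to  A x >= b,  x >= 0.  Illustrative data;
# A and c are nonnegative, so scaling y down preserves dual feasibility.
A = [[F(1), F(1)]]                 # one constraint: x1 + x2 >= 3
b = [F(3)]
c = [F(1), F(2)]
y_approx = [0.9999999731]          # dual estimate from a floating-point solver

def certified_lower_bound(A, b, c, y_approx):
    """Round y_approx to rationals, repair dual feasibility
    (A^T y <= c, y >= 0) exactly, and return the rigorous
    weak-duality bound b^T y <= optimal value."""
    y = [max(F(0), F(v).limit_denominator(10 ** 6)) for v in y_approx]
    for j in range(len(c)):        # crude but exact feasibility repair
        col = sum(A[i][j] * y[i] for i in range(len(y)))
        if col > c[j]:
            y = [yi * (c[j] / col) for yi in y]
    return sum(b[i] * y[i] for i in range(len(y)))

lb = certified_lower_bound(A, b, c, y_approx)   # rigorous lower bound, near 3
```

The true optimum here is 3 (at x = (3, 0)), so the certified bound is tight; the paper's contribution is the analogous certification for semidefinite programs, using floating-point arithmetic with rigorous error control instead of rationals.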

Citation Context

…In interval arithmetic several methods for computing rigorous bounds for all or some eigenvalues of interval matrices were developed. Some important references are Floudas [5], Mayer [17], Neumaier [22], and Rump [26, 27]. 3. Rigorous lower bound. In many applications some or all input data are uncertain. We model these uncertainties by intervals. In the case of semidefinite programming we assume th…

Automated hierarchy discovery for planning in partially observable domains

by Laurent Charlin - Advances in Neural Information Processing Systems 19, 2006
Cited by 12 (2 self)
author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Citation Context

…lly increasing would seem to be a natural next step. Having said this, there might also be merit in trying out a class of optimization solvers which focus on finding the global optimum, i.e. global optimizers [34, 35]. Other approximation methods should also be considered. In the spirit of previous work, future research could focus on finding a principled way to partition the state space as we discover part of the…


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2016 The Pennsylvania State University