## Validated linear relaxations and preprocessing: Some experiments

Venue: SIAM J. Optim. (accepted for publication, 2003)

Citations: 5 (3 self)

### BibTeX

@ARTICLE{Baker_validatedlinear,

author = {R. Baker Kearfott and Siriporn Hongthong},

title = {Validated linear relaxations and preprocessing: Some experiments},

journal = {SIAM J. Optim.},

year = {2003},

note = {Accepted for publication}

}

### Abstract

Based on work originating in the early 1970s, a number of recent global optimization algorithms have relied on replacing an original nonconvex nonlinear program by convex or linear relaxations. Such linear relaxations can be generated automatically through an automatic differentiation process. This process decomposes the objective and constraints (if any) into convex and nonconvex unary and binary operations. The convex operations can be approximated arbitrarily well by appending additional constraints, while the domain must somehow be subdivided (in an overall branch-and-bound process or in some other local process) to handle nonconvex constraints. In general, a problem can be hard if even a single nonconvex term appears. However, certain nonconvex terms lead to easier-to-solve problems than others. Recently, Neumaier, Lebbah, Michel, ourselves, and others have paved the way to utilizing such techniques in a validated context. In this paper, we present a symbolic preprocessing step that provides a measure of the intrinsic difficulty of a problem. Based on this step, one of two methods can be chosen to relax nonconvex terms. This preprocessing step is similar to a method previously proposed by Epperly and Pistikopoulos [J. Global Optim., 11 (1997), pp. 287–311] for determining subspaces in which to branch, but we present it from a different point of view that is amenable to simplification of the problem presented to the linear programming solver, and within a validated context. Besides an illustrative example, we have implemented general relaxations in a validated context, as well as the preprocessing technique, and we present experiments on a standard test set. Finally, we present conclusions.
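The decomposition the abstract describes can be made concrete. Below is a minimal sketch (our own illustration, not the authors' code) of the code list for the paper's Example 1, f(x1, x2) = (x1 + x2 − 1)² − (x1² + x2² − 1)², as reconstructed from the operation table excerpted in the citation contexts; counting the nonconvex operations gives the kind of intrinsic-difficulty measure the symbolic preprocessing step computes.

```python
# Hedged sketch of a "code list" decomposition into unary/binary operations,
# for Example 1 of the paper.  Here v1 = x1 and v2 = x2; each later v_k is
# one operation, tagged with the convexity class of the constraint it induces.

CODE_LIST = [
    # (result, expression, convexity class of the induced constraint)
    ("v3",  "v1 + v2",  "linear"),
    ("v4",  "v3 - 1",   "linear"),
    ("v5",  "v4**2",    "convex"),     # v4^2 - v5 <= 0 is a convex constraint
    ("v6",  "v1**2",    "both"),       # equality v1^2 - v6 = 0: convex and concave sides
    ("v7",  "v2**2",    "both"),
    ("v8",  "v6 + v7",  "linear"),
    ("v9",  "v8 - 1",   "linear"),
    ("v10", "-v9**2",   "nonconvex"),  # concave term: forces subdivision
    ("v11", "v5 + v10", "linear"),     # v11 is the objective value
]

def difficulty(code_list):
    """Number of genuinely nonconvex operations: a crude measure of how
    much branching a relaxation-based solver may need."""
    return sum(1 for _, _, kind in code_list if kind == "nonconvex")

def evaluate(code_list, x1, x2):
    """Execute the code list; the last intermediate is f(x1, x2)."""
    env = {"v1": x1, "v2": x2}
    for name, expr, _ in code_list:
        env[name] = eval(expr, {}, env)
    return env["v11"]

print(difficulty(CODE_LIST))          # → 1
print(evaluate(CODE_LIST, 1.0, 0.0))  # f(1, 0) = 0.0
```

Only one operation (v10) is intrinsically nonconvex, which is what makes this problem "easy" in the paper's sense: branching need only resolve that single term.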

### Citations

494 | Global Optimization Using Interval Analysis - Hansen - 1992

Citation Context: ...explanations in which convex underestimators are employed, see, for example, [5], [23]. For explanations focusing on validation but restricted to traditional interval arithmetic-based techniques, see [8] or [10, Chapter 5]. The effectiveness of the above technique depends on the quality of the upper bound ϕ̄ and the lower bound ϕ(x). The upper bound ϕ̄ may be obtained by various techniques, such as by ...

316 | Rigorous Global Search: Continuous Problems - Kearfott - 1996

136 | Automatic Differentiation: Techniques and Applications - Rall - 1981

Citation Context: ...The original idea for such an arithmetic predates the groundbreaking work of McCormick [15], [16]. Such an arithmetic may employ operator overloading or similar technology, such as explained, say, in [20], [10, section 1.4], or in the proceedings [1], [2], or [7]. A framework for such automatic computation is given in [23, section 4.1]. In such an arithmetic, given underestimators for expressions g1 a...

118 | Computability of global solutions to factorable nonconvex programs. I: Convex underestimating problems - McCormick - 1976

Citation Context: ...arithmetic can be used to automatically compute underestimators given by expressions or computer programs. The original idea for such an arithmetic predates the groundbreaking work of McCormick [15], [16]. Such an arithmetic may employ operator overloading or similar technology, such as explained, say, in [20], [10, section 1.4], or in the proceedings [1], [2], or [7]. A framework for such automatic c...
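The "formulas for computing underestimators" of composite expressions that these excerpts mention can be illustrated with the best-known case, McCormick's bilinear envelope. The sketch below is our own illustration of the classical rule, not code from the paper.

```python
# On a box x in [xl, xu], y in [yl, yu], the bilinear term w = x*y satisfies
# the two classical McCormick linear underestimates below; their pointwise
# maximum is the lower part of the convex envelope of x*y on the box.

def bilinear_under(x, y, xl, xu, yl, yu):
    """Best of McCormick's two linear underestimates of x*y at (x, y)."""
    return max(xl * y + yl * x - xl * yl,
               xu * y + yu * x - xu * yu)

# Valid: never above the true product on the box, and tight at the corners.
for (x, y) in [(0.0, 0.0), (0.25, 0.75), (1.0, 1.0)]:
    assert bilinear_under(x, y, 0.0, 1.0, 0.0, 1.0) <= x * y + 1e-12
assert bilinear_under(1.0, 1.0, 0.0, 1.0, 0.0, 1.0) == 1.0  # tight corner
```

In the recursive arithmetic, rules like this one are applied operation by operation, so an underestimator of a whole expression is assembled from underestimators of its parts.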

78 | Automatic Differentiation of Algorithms: Theory, Implementation, and Applications - Griewank, Corliss (Eds.)

Citation Context: ...breaking work of McCormick [15], [16]. Such an arithmetic may employ operator overloading or similar technology, such as explained, say, in [20], [10, section 1.4], or in the proceedings [1], [2], or [7]. A framework for such automatic computation is given in [23, section 4.1]. In such an arithmetic, given underestimators for expressions g1 and g2, formulas are implemented for computing underestimato...

55 | Global optimization of nonconvex NLPs and MINLPs with applications in process design. Computers Chem. Engng - Ryoo, Sahinidis - 1995

Citation Context: ...solver is used to solve the linear relaxations may have a significant effect on the practicality of the overall branch-and-bound algorithm. Also, the "probing" technique in BARON, first described in [21] and later in [23], may be effective; we are presently developing a validated version of this technique. Acknowledgments. We wish to thank Sven Leyffer and Jorge Moré for inviting the first author to ...

52 | Nonlinear Programming: Theory, Algorithms and Applications - McCormick

Citation Context: ...mputing underestimators of powers, exponentials, logarithms, and other such functions encountered in practice. Many of the ideas for such an arithmetic appear in the work of McCormick [15], [16], and [17]. Significant portions of the books [5] and [23] are devoted to techniques for deriving underestimators and overestimators as we have just described, and for implementing automatic computation of thes...

48 | An Interior Point Algorithm for Large-Scale Nonlinear Optimization with Applications in Process Engineering - Wächter - 2002

47 | An algorithm for separable nonconvex programming problems. Management Science 15: 550–569 - Falk, Soland - 1969

Citation Context: ...er the subregion x is then computed. If ϕ(x) > ϕ̄, then x is rejected; otherwise, other techniques are used to reduce, eliminate, or subdivide x. The original explanation for this technique appears in [4, 22], while a relatively early didactic explanation appears in [19]. For more recent explanations in which convex underestimators are employed, see, for example, [5], [23]. For explanations focusing on va...
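The reject-or-subdivide scheme this excerpt describes (discard a subregion x whenever its lower bound ϕ(x) exceeds the incumbent upper bound ϕ̄) can be sketched in a few lines. The bound function below is a crude hand-made stand-in for the interval or relaxation-based bounds the paper actually uses, and all names are ours.

```python
# Sketch of the reject/subdivide loop: keep an incumbent upper bound,
# discard any subinterval whose lower bound exceeds it, bisect the rest.

def branch_and_bound(f, f_lower, lo, hi, tol=1e-4):
    best = f((lo + hi) / 2)            # incumbent upper bound (phi-bar)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if f_lower(a, b) > best:       # lower bound beats incumbent: reject box
            continue
        best = min(best, f((a + b) / 2))   # try to improve the incumbent
        if b - a > tol:                # otherwise subdivide
            m = (a + b) / 2
            stack.extend([(a, m), (m, b)])
    return best

# Sample problem: minimize f(x) = x^2 - x on [-1, 2] (true minimum -0.25).
f = lambda x: x * x - x
def f_lower(a, b):                     # crude interval bound: min x^2 minus max x
    min_sq = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return min_sq - b

print(branch_and_bound(f, f_lower, -1.0, 2.0))   # → -0.25
```

As the excerpt notes, the quality of both bounds governs how quickly boxes get rejected; with the crude bound above, boxes far from the minimizer are pruned early while a shrinking cluster around x = 0.5 is bisected down to the tolerance.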

42 | Convexification and Global Optimization - Tawarmalani, Sahinidis - 2002

Citation Context: ...for this technique appears in [4, 22], while a relatively early didactic explanation appears in [19]. For more recent explanations in which convex underestimators are employed, see, for example, [5], [23]. For explanations focusing on validation but restricted to traditional interval arithmetic-based techniques, see [8] or [10, Chapter 5]. The effectiveness of the above technique depends on the qualit...

26 | Deterministic Global Optimization: Theory - Floudas - 1999

Citation Context: ...tion for this technique appears in [4, 22], while a relatively early didactic explanation appears in [19]. For more recent explanations in which convex underestimators are employed, see, for example, [5], [23]. For explanations focusing on validation but restricted to traditional interval arithmetic-based techniques, see [8] or [10, Chapter 5]. The effectiveness of the above technique depends on the ...

25 | A comparison of complete global optimization solvers - Neumaier, Shcherbina, et al. - 2005

Citation Context: ...corresponding to the identified subspaces were selected for further bisection. We used essentially the same test set as in [12], namely, the "tiny" problems from Library 1 in the Neumaier test set [18]. The results appear in Table 7.1. We carried out the experiments in Table 7.1 on a dual-processor 2.8 GHz AMD Opteron machine running Linux (SuSe distribution 9.1), with 4 gigabytes of memory. We co...

19 | An SQP Algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions - Zhou, Tits - 1996

Citation Context: ...testing our programs with Example 1 and various other small problems with certain properties. For a reasonably simple but realistic test problem, we have tried the following problem, originally from [25]. Example 2. Minimize max_{1≤i≤m} |f_i(x)|, where f_i(x) = x1 e^{x3 t_i} + x2 e^{x4 t_i} − 1/(1 + t_i), t_i = −0.5 + (i − 1)/(m − 1), 1 ≤ i ≤ m. We transformed this nonsmooth problem to a smooth problem with Lemaréchal's co...

14 | Efficient and safe global constraints for handling numerical constraint systems - Lebbah, Michel, et al. - 2005

Citation Context: ...ent of the space of reduced variables, and we are currently developing this idea. Evidence that an improved validated environment can be produced is in the successful validated codes of Lebbah et al. [13]. We have used the subspace analysis technique in GlobSol's validated branch-and-bound algorithm, testing the technique on a published low-dimensional test set. Those tests revealed that, for most prob...

9 | A reduced space branch and bound algorithm for global optimization - Epperly, Pistikopoulos - 1997

Citation Context: ...gives the dimension of the subspace in which tessellation must occur, and thus gives a measure of how much effort needs to be expended to accurately approximate a solution. Epperly and Pistikopoulos [3] proposed a method for subspace analysis that gives subspaces similar to ours; they also illustrated its effectiveness on various problems. However, their view of the process is different from the abo...

9 | Globsol: History, composition, and advice on use - Kearfott - 2003

Citation Context: ...8: v10 ← −v9², [−1, 0], −v9² − v10 ≤ 0, nonconvex; 9: v11 ← v5 + v10, [−1, 9], v5 + v10 − v11 ≤ 0, linear. Example 1, a small unconstrained problem except for bound constraints, is easily solved by GlobSol [11], a traditional interval branch-and-bound method. However, it is nonconvex, and can be used to illustrate underlying concepts in this work. To generate a linear relaxation of this problem, we first as...

8 | Empirical Comparisons of Linear Relaxations and Alternate Techniques in Validated Deterministic Global Optimization (preprint, http://interval.louisiana.edu/preprints/ validated global optimization search comparisons.pdf) - Kearfott - 2004

Citation Context: ...ithin our branch-and-bound algorithm. We implemented subdivision in the subspace, as we describe in section 7 below, within the search process that uses validated linear relaxations as we describe in [12]. 7. Some systematic comparisons. In [9], we detail some of the techniques we have used to provide machine-representable relaxations that are mathematically rigorous, while in [12] we describe our imp...

8 | Converting General Nonlinear Programming Problems to Separable Nonlinear Programming Problems - McCormick - 1972

Citation Context: ...s. An arithmetic can be used to automatically compute underestimators given by expressions or computer programs. The original idea for such an arithmetic predates the groundbreaking work of McCormick [15], [16]. Such an arithmetic may employ operator overloading or similar technology, such as explained, say, in [20], [10, section 1.4], or in the proceedings [1], [2], or [7]. A framework for such autom...

7 | Constrained Global Optimization: Algorithms and Applications - Pardalos, Rosen - 1987

Citation Context: ...ed; otherwise, other techniques are used to reduce, eliminate, or subdivide x. The original explanation for this technique appears in [4, 22], while a relatively early didactic explanation appears in [19]. For more recent explanations in which convex underestimators are employed, see, for example, [5], [23]. For explanations focusing on validation but restricted to traditional interval arithmetic-base...

5 | Automatic Differentiation of Algorithms: From Simulation to Optimization - Faure, Hascoët, et al. - 2002

4 | Rigorous linear overestimators and underestimators (submitted to Mathematical Programming B) - Hongthong, Kearfott - 2004

Citation Context: ...erval enclosures, and expanded NLP for Example 1.

| ♯ | Operation | Enclosure | Constraint | Convexity |
|---|-----------|-----------|------------|-----------|
| 1 | v3 ← x1 + x2 | [−2, 2] | x1 + x2 − v3 = 0 | linear |
| 2 | v4 ← v3 − 1 | [−3, 1] | v3 − 1 − v4 = 0 | linear |
| 3 | v5 ← v4² | [0, 9] | v4² − v5 ≤ 0 | convex |
| 4 | v6 ← x1² | [0, 1] | v1² − v6 = 0 | both |
| 5 | v7 ← x2² | [0, 1] | v2² − v7 = 0 | both |
| 6 | v8 ← v6 + v7 | [0, 2] | v6 + v7 − v8 = 0 | linear |
| 7 | v9 ← v8 − 1 | [−1, 1] | v8 − 1 − v9 = 0 | linear |
| 8 | v10 ← ... | | | |
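Row 3 of the excerpted table (v5 ← v4² on [−3, 1], relaxed by the convex constraint v4² − v5 ≤ 0) can itself be replaced by linear constraints: tangent lines of x² underestimate it, and the secant over the enclosure overestimates it. The sketch below is our own numerical illustration of that sandwich, not the paper's code.

```python
# Our illustration of linearly relaxing the convex operation v = x^2 on
# [a, b]: tangents 2c*x - c^2 underestimate x^2 (since (x - c)^2 >= 0),
# while the secant through the endpoints overestimates it on [a, b].

def square_relaxation(a, b, tangent_points):
    """Linear relaxation of v = x^2 on [a, b]:
    underestimators v >= 2c*x - c^2, one per tangent point c;
    overestimator  v <= (a + b)*x - a*b (secant through the endpoints)."""
    unders = [(2.0 * c, -c * c) for c in tangent_points]
    over = (a + b, -a * b)
    return unders, over

a, b = -3.0, 1.0                        # enclosure of v4 from the table
unders, over = square_relaxation(a, b, [a, 0.0, b])
for k in range(41):                     # sample the enclosure
    x = a + k * (b - a) / 40
    v = x * x
    assert all(m * x + c <= v + 1e-12 for m, c in unders)   # tangents below
    assert v <= over[0] * x + over[1] + 1e-12               # secant above
print("square relaxation valid on [-3, 1]")
```

Appending more tangent points tightens the underestimate arbitrarily well, which is the sense in which the abstract says convex operations "can be approximated arbitrarily well by appending additional constraints".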

3 | Construction of convex relaxations using automated code generation techniques - Gatzke, Tolsma, et al. - 2002

Citation Context: ...tion of underestimators, based on convex envelopes and linear underestimations. The techniques in [23] are embodied in the highly successful software package BARON. Lastly, Gatzke, Tolsma, and Barton [6] have implemented automated generation of both linear underestimating techniques as in [23] and convex underestimating techniques as in [5] in their DAEPACK system. 1.4. Our view of the process. In th...

2 | Nondifferentiable Optimization, Nonlinear Optimization - Lemaréchal - 1980

Citation Context: ...2. Minimize max_{1≤i≤m} |f_i(x)|, where f_i(x) = x1 e^{x3 t_i} + x2 e^{x4 t_i} − 1/(1 + t_i), t_i = −0.5 + (i − 1)/(m − 1), 1 ≤ i ≤ m. We transformed this nonsmooth problem to a smooth problem with Lemaréchal's conditions [14] to obtain: minimize v subject to f_i(x) − v ≤ 0, 1 ≤ i ≤ m, and −f_i(x) − v ≤ 0, 1 ≤ i ≤ m. To test the preprocessing, we took m = 21, x_i ∈ [−5, 5] for 1 ≤ i ≤ 4, and v ∈ [−100, 100]. The resulting output had 22...
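The Lemaréchal transformation quoted in this excerpt is a standard epigraph device: minimizing max_i |f_i(x)| is equivalent to minimizing a new variable v subject to f_i(x) − v ≤ 0 and −f_i(x) − v ≤ 0. A quick numerical check (our own sketch; the exact form of f_i is our reading of Example 2, so treat it as an assumption):

```python
# Sketch checking the epigraph reformulation at one sample point:
# with v* = max_i |f_i(x)|, every smooth constraint f_i(x) - v <= 0 and
# -f_i(x) - v <= 0 holds at v = v*, and fails for any smaller v.
import math

def f(i, x, m=21):
    # Our reading of Example 2's f_i: x1 e^{x3 t_i} + x2 e^{x4 t_i} - 1/(1+t_i)
    t = -0.5 + (i - 1) / (m - 1)
    return x[0] * math.exp(x[2] * t) + x[1] * math.exp(x[3] * t) - 1.0 / (1.0 + t)

m = 21
x = [1.0, -0.5, 0.3, -0.7]                  # arbitrary sample point (ours)
v_star = max(abs(f(i, x, m)) for i in range(1, m + 1))
assert all(f(i, x, m) <= v_star + 1e-12 and -f(i, x, m) <= v_star + 1e-12
           for i in range(1, m + 1))
# Any smaller v violates at least one of the 2m epigraph constraints:
v = v_star - 1e-6
assert any(f(i, x, m) > v or -f(i, x, m) > v for i in range(1, m + 1))
print(round(v_star, 6))
```

This is why the smooth problem has 2m constraints (44 for m = 21, plus bound constraints) but the same optimal value as the original minimax problem.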