
### ERRATUM: VALIDATED LINEAR RELAXATIONS AND PREPROCESSING: SOME EXPERIMENTS

SIAM J. Optim., Vol. 21, No. 1, pp. 415–416. © 2011 Society for Industrial and Applied Mathematics.


Abstract

There are errors in column 4 (entitled “Under/over estimators”) of rows 3, 4, and 5 of Table 4.1, on p. 426 of [1]. As a consequence, references to those entries in the table on lines 1 and 2 below the table (on page 426) and on lines 5, 6, 7, and 8 of section 4.1 are incorrect, and the last line of that paragraph should be deleted. The error occurred by using the wrong enclosure range: for example, the midpoint of the enclosure range [−3, 1] for v4, namely v4 = −1, should have been used in the tangent line for row 3, whereas the midpoint of the enclosure range for v5 = v4² had been used instead. Similar errors occurred in rows 4 and 5. Table 4.1 should therefore be corrected to read as follows:

| # | Operation | Enclosures | Under/over estimators | Convexity |
|---|-----------|------------|-----------------------|-----------|
| 1 | v3 ← x1 + x2 | [−2, 2] | x1 + x2 − v3 = 0 | linear |
| 2 | v4 ← v3 − 1 | [−3, 1] | v3 − 1 − v4 = 0 | linear |
| 3 | v5 ← v4² | [0, 9] | (−1)² + 2(−1)(v4 − (−1)) − v5 ≤ 0 | convex |
| 4 | v6 ← x1² | [0, 1] | (0)² + 2(0)(v1 − 0) − v6 ≤ 0; 1 − v6 ≥ 0 | convex; nonconvex |
| 5 | v7 ← x2² | [0, 1] | (0)² + 2(0)(v2 − 0) − v7 ≤ 0; 1 − v7 ≥ 0 | convex; nonconvex |
| 6 | v8 ← v6 + v7 | [0, 2] | v6 + v7 − v8 = 0 | linear |
| 7 | v9 ← v8 − 1 | [−1, 1] | v8 − 1 − v9 = 0 | linear |
| 8 | v10 ← −v9² | [−1, 0] | −1 − v10 ≤ 0 | nonconvex |
| 9 | v11 ← v5 + v10 | [−1, 9] | v5 + v10 − v11 ≤ 0 | linear |

Thus, lines 1 and 2 below Table 4.1 should read as follows:

> ... enclosure interval; for example, the expression (−1)² + 2(−1)(v4 − (−1)) in the third row corresponds to the tangent line to v4² at v4 = −1. The nonconvex operations (−v9²,

Since the data were no longer correct, the last sentence of the paragraph above section 4.1 should be deleted. (The reader may solve the corrected linear program with any method.) Similarly, the first paragraph of section 4.1 (in which lines 5, 6, 7, and 8 are changed) should read as follows:

**4.1. Refining convex constraints.**
As explained in [23, section 4.2] and elsewhere, the nonlinear convex operations can be approximated more closely in the linear relaxation by appending more constraints corresponding to additional tangent lines.
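The tangent-line construction the erratum corrects is easy to check numerically. The following sketch (illustrative only; the function names are not from the paper) verifies that the tangent to v4² at the enclosure midpoint v4 = −1, which is the corrected row-3 underestimator, indeed lies below v4² everywhere on the enclosure [−3, 1]:

```python
# Check that the tangent line to the convex function v**2 at the
# midpoint c of the enclosure underestimates v**2 on that enclosure.
# This is the corrected row-3 estimator: (-1)**2 + 2*(-1)*(v4 - (-1)).

def square(v):
    return v * v

def tangent_at(c, v):
    # Tangent line to v**2 at v = c: c**2 + 2*c*(v - c)
    return c * c + 2.0 * c * (v - c)

lo, hi = -3.0, 1.0           # enclosure of v4 from row 2 of Table 4.1
c = (lo + hi) / 2.0          # midpoint, c = -1

samples = [lo + (hi - lo) * k / 200 for k in range(201)]

# A tangent line never exceeds a convex function on the interval,
# and it touches the function exactly at the point of tangency.
assert all(tangent_at(c, v) <= square(v) + 1e-12 for v in samples)
assert abs(tangent_at(c, c) - square(c)) < 1e-12
print("tangent underestimator verified on [-3, 1]")
```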

### A General Framework for Convexity Analysis and an Alternative to Branch and Bound in Deterministic Global Optimization


Abstract

To date, complete search in deterministic global optimization has been based on branch and bound techniques, with the bounding often done with linear or convex relaxations of the original non-convex problem. Here, we present an alternative, inspired by talks of Ch. Floudas. In this alternative, a set of non-convex variables, chosen from the intermediate variables in the expressions for the objective and constraints, is first identified. The intervals corresponding to these variables are then subdivided a priori, and the total number of subregions to be examined is known beforehand. The algorithm is designed to provide bounds on the global optimum and at least one global optimizer, with an accuracy determined a posteriori. Advantages include simplicity (less overhead), as well as easy parallelization (since subproblems to be solved are known beforehand and are independent). Furthermore, the number of non-convex variables to be subdivided with the new techniques in this paper can be considerably less than the number identified with schemes from previous work. Identification of the set of non-convex variables can be considered to be a preprocessing step. This preprocessing, done in a much smaller amount of time, reveals beforehand the practicality of using this method to solve a particular problem.
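The a priori subdivision step described above can be sketched in a few lines. This is a toy illustration under assumed uniform splitting, not the paper's actual scheme; the variable names and data are hypothetical. The point is that the subregions are enumerated up front, so their count is known before any subproblem is solved and each subproblem is independent:

```python
# Sketch of a priori subdivision of the identified non-convex variables.
# Each variable's interval is split into k uniform pieces; the Cartesian
# product of the pieces gives k**n independent subregions, all known
# before any subproblem is solved (hence easy to parallelize).

from itertools import product

def subdivide(interval, k):
    lo, hi = interval
    step = (hi - lo) / k
    return [(lo + i * step, lo + (i + 1) * step) for i in range(k)]

# Intervals of the identified non-convex variables (hypothetical data).
nonconvex_boxes = {"v4": (-3.0, 1.0), "v9": (-1.0, 1.0)}
k = 4

names = sorted(nonconvex_boxes)
pieces = [subdivide(nonconvex_boxes[n], k) for n in names]
subregions = list(product(*pieces))   # independent subproblems

assert len(subregions) == k ** len(names)   # total known beforehand
print(f"{len(subregions)} independent subregions to examine")
```

Each element of `subregions` is a box (one subinterval per non-convex variable) that would be handed to a separate bounding subproblem.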

### CONSTRUCTION OF VALIDATED UNIQUENESS REGIONS FOR NONLINEAR PROGRAMS IN WHICH CONVEX SUBSPACES HAVE BEEN


Abstract

In deterministic global optimization algorithms for constrained problems, it can be advantageous to identify and utilize coordinates in which the problem is convex, as Epperly and Pistikopoulos have done. In self-validating versions of these algorithms, a useful technique is to construct regions about approximate optima, within which unique local optima are known to exist; these regions are to be as large as possible, for exclusion from the continuing search process. In this paper, we clarify the theory and develop algorithms for constructing such large regions, when we know the problem is convex in some of the variables. In addition, this paper clarifies how one can validate existence and uniqueness of local minima when using the Fritz John equations in the general case. We present numerical results that provide evidence of the efficacy of our techniques.
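As a rough illustration of the uniqueness-region idea, the following one-dimensional sketch uses a classical interval Newton test, not the paper's Fritz John construction: if the interval Newton image N(X) lies strictly inside X, then X contains exactly one root of f, so X can be excluded from further search once that root is recorded.

```python
# 1-D interval Newton uniqueness test (illustrative; the validated
# constructions in the paper handle the multivariate Fritz John system).
# If N(X) = m - f(m)/F'(X) is strictly contained in X, where m is the
# midpoint of X and F'(X) encloses f' over X, then X holds a unique root.

def f(x):
    return x * x - 2.0            # root: sqrt(2) ~ 1.41421

def fprime_enclosure(lo, hi):
    # f'(x) = 2x is increasing, so its range over [lo, hi] is [2*lo, 2*hi].
    return 2.0 * lo, 2.0 * hi

def interval_newton_unique(lo, hi):
    m = (lo + hi) / 2.0
    dlo, dhi = fprime_enclosure(lo, hi)
    if dlo <= 0.0 <= dhi:         # derivative enclosure contains 0: no claim
        return False
    # Quotient f(m) / [dlo, dhi] for an interval of one sign: endpoints
    # come from the two endpoint combinations.
    q = sorted([f(m) / dlo, f(m) / dhi])
    nlo, nhi = m - q[1], m - q[0]
    return lo < nlo and nhi < hi  # N(X) strictly inside X => unique root

assert interval_newton_unique(1.3, 1.5)       # sqrt(2) is the unique root here
assert not interval_newton_unique(-2.0, 2.0)  # derivative enclosure contains 0
print("uniqueness verified on [1.3, 1.5]")
```

(A fully validated implementation would use outward-rounded interval arithmetic rather than plain floats; the sketch only shows the containment logic.)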