### Table I. Differentiable problems. Lipschitz constants and global solutions.

### Table II. Non-differentiable problems. Lipschitz constants and global solutions.

### Table 4: Comparison among the LP methods on Min-Var and Lipschitz condition objectives. Notation: Min-Var LP: LP with Min-Var objective; LipI LP: LP with Min-Lip-I objective; LipII LP: LP with Min-Lip-II objective; LipIII LP: LP with Min-Lip-III objective; Com LP: LP with combined objective.

"... In PAGE 6: ... Of course, finding smoothness objectives that result in smaller LPs is a direction for future work. The performance of the new LP formulations with "smoothness" objectives is studied in Table 4. We use the coefficients (0.... ..."

### Table VI. Average running times (sec.) of the Lipschitz-optimisation procedure for the minimal-repair model, for n = 3, 5, 7, 10, 25, 50.

### Table 1. Classification accuracies for the Lipschitz classifier and Support Vector Machine on ten 2-dimensional randomly generated test sets. Each test set contains 400 data points.

### Table 1. Performance of the algorithms from ranlip on the examples illustrated in Figs. 1–3. The Lipschitz constant was automatically computed by the algorithm for each element of the partition Dk.
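A per-element Lipschitz constant like the one ranlip computes can, in principle, be approximated from sampled function values. The sketch below uses the maximum pairwise slope over a sample, which is a generic finite-sample lower bound on the true constant, not ranlip's actual procedure; the objective `f` and sample points are hypothetical.

```python
import itertools
import math

def estimate_lipschitz(f, points):
    """Estimate a (local) Lipschitz constant of f from pairwise slopes.

    Returns max |f(a) - f(b)| / ||a - b|| over all sampled pairs,
    a lower bound on the true Lipschitz constant on the region.
    """
    best = 0.0
    for a, b in itertools.combinations(points, 2):
        dist = math.dist(a, b)
        if dist > 0:
            best = max(best, abs(f(a) - f(b)) / dist)
    return best

# Hypothetical example: f(x, y) = 3x + 4y has true Lipschitz
# constant 5 in the Euclidean norm, attained along direction (3, 4)/5.
f = lambda p: 3 * p[0] + 4 * p[1]
pts = [(0, 0), (1, 0), (0, 1), (0.6, 0.8)]
print(estimate_lipschitz(f, pts))  # ≈ 5.0
```

Because this is only a lower bound, practical partition-based methods inflate such estimates by a safety factor before using them as bounds.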


### Table 1: Four convex loss functions and the corresponding ψ-transforms. On the interval [−B, B], each loss function has the indicated Lipschitz constant LB and modulus of convexity with respect to d. All have a quadratic modulus of convexity.

2004

"... In PAGE 3: ...) It is immediate from the definitions that ψ̃ and ψ are nonnegative and that they are also continuous on [0, 1]. We calculate the ψ-transform for exponential loss, logistic loss, quadratic loss and truncated quadratic loss, tabulating the results in Table 1. All of these loss functions can be verified to be classification-calibrated.... In PAGE 7: ...pseudometric d on R: we say that φ : R → R is Lipschitz with respect to d, with constant L, if for all a, b in R, |φ(a) − φ(b)| ≤ L d(a, b). (Note that if d is a metric and φ is convex, then φ necessarily satisfies a Lipschitz condition on any compact subset of R.) We consider four loss functions that satisfy these conditions: the exponential loss function used in AdaBoost, the deviance function for logistic regression, the quadratic loss function, and the truncated quadratic loss function; see Table 1. We use the pseudometric d(a, b) = inf { |a − α| + |β − b| : φ constant on (min{α, β}, max{α, β}) }. For all except the truncated quadratic loss function, this corresponds to the standard metric on R, d(a, b) = |a − b|.... ..."

Cited by 9

### Table 4. "Lena" encoding results with the "weighted" method. As is seen from the table, there is a (very) slight PSNR improvement. The optimum occurs for s = 2.4, while the Lipschitz factor for this mapping was computed as s1 = 2.36. The visual improvement is also small, but is mostly ...

### Table 4: Comparison among the LP methods on Min-Var and Lipschitz condition objectives. Notation: Min-Var LP: LP with Min-Var objective; LipI LP: LP with Min-Lip-I objective; LipII LP: LP with Min-Lip-II objective; LipIII LP: LP with Min-Lip-III objective; Com LP: LP with combined objective.


### Table 2. Comparison of algorithms considered for road arm optimization (columns: Algorithm, Advantages, Disadvantages).

1999

"... In PAGE 9: ... DIRECT employs a bounding technique that performs Lipschitz optimization without assuming a Lipschitz constant. Table 2 summarizes the main characteristics and applicability of these algorithms.... ..."
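To see what DIRECT avoids, it helps to look at classical Lipschitz optimization, which *does* require a known constant L. Below is a minimal 1-D Piyavskii–Shubert sketch (not DIRECT itself); the objective and the value of L are hypothetical, chosen so that L bounds |f'| on the interval.

```python
def piyavskii_min(f, lo, hi, L, iters=60):
    """Minimize f on [lo, hi] assuming |f(x) - f(y)| <= L |x - y|."""
    xs = [lo, hi]
    for _ in range(iters):
        xs.sort()
        best_bound, best_x = float("inf"), None
        for a, b in zip(xs, xs[1:]):
            # Sawtooth lower bound on [a, b]: its minimum value is
            # (f(a) + f(b))/2 - L (b - a)/2, attained at
            # x* = (a + b)/2 + (f(a) - f(b)) / (2 L).
            bound = (f(a) + f(b)) / 2 - L * (b - a) / 2
            if bound < best_bound:
                best_bound = bound
                best_x = (a + b) / 2 + (f(a) - f(b)) / (2 * L)
        xs.append(best_x)  # evaluate where the global lower bound is lowest
    return min(xs, key=f)

# Hypothetical objective with minimizer at x = 0.7; |f'| <= 1.4 on [0, 1],
# so L = 2 is a valid (conservative) Lipschitz constant.
f = lambda x: (x - 0.7) ** 2
x_star = piyavskii_min(f, 0.0, 1.0, L=2.0)
print(round(x_star, 3))  # ≈ 0.7
```

An overestimated L still converges but wastes evaluations; an underestimated L can cut off the true minimum, which is why DIRECT's constant-free bounding is attractive in practice.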

Cited by 2