### Table 1. T_{-1/2}-concave densities (normalization constants omitted).

2001

"... In PAGE 3: ... As a corollary of the above theorem, algorithm TDR can then generate random variates of (at least) all log-concave distributions. Table 1 gives examples of T_{-1/2}-concave distributions. 2.... ..."

Cited by 5
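The caption's T_{-1/2}-concavity can be illustrated numerically: a density f is T_c-concave for c = -1/2 when -1/sqrt(f) is concave (log-concavity is the limiting case c = 0). A minimal sketch under that definition, using the standard normal density as the example; the helper name `is_Tc_concave` is hypothetical, not from the cited paper:

```python
import math

def is_Tc_concave(f, xs, c=-0.5):
    # T_c(y) = -y**c for c < 0; f is T_c-concave iff T_c(f(x)) is
    # concave, i.e. its second differences on an equally spaced
    # grid are all <= 0 (up to a small numerical tolerance).
    t = [-(f(x) ** c) for x in xs]
    return all(t[i-1] - 2*t[i] + t[i+1] <= 1e-12
               for i in range(1, len(t) - 1))

normal = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
xs = [i / 100 for i in range(-300, 301)]
print(is_Tc_concave(normal, xs))  # the normal density is T_{-1/2}-concave
```

Since every log-concave density is also T_{-1/2}-concave, the check succeeds for the normal density; heavier-tailed densities such as the Cauchy pass this weaker test as well, even though they are not log-concave.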

### Table 2. The last three columns indicate which models preserve convexity of option prices and bond call options, log-convexity of bond prices, and log-concavity of bond prices, respectively.

2007

"... In PAGE 22: ... We also note that if we demand that the price is log-convex and log-concave, we recover the well-known sufficient conditions for a model to admit an affine term structure. Our findings for some commonly used models are summarized in Table 2... ..."

### Table 2: Functions available in BUGS.

1995

"... In PAGE 14: ...2 Restrictions to log-concave distributions Gilks and Wild (1992) provide the theory and examples of adaptive rejection sampling for log-concave distributions - the derivative-free version (Gilks, 1992) is implemented in BUGS. The prior and the likelihood terms in the sampling distribution must be log-concave if they are not conjugate or discrete: Table 2 in Gilks and Wild (1992) gives several commonly used parameters that can be sampled. This restriction means that if such parameters are logically defined as expressions, then these expressions are generally restricted to linear functions.... In PAGE 20: ...7 Functions The standard operators + - * / are available, and bracketing can be used to any depth. Functions available are given in Table 2. Certain functions (cloglog, log, logit, probit) may be used on the left hand side of a statement, as indicated under "usage".... ..."

Cited by 35
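The log-concavity restriction in this snippet is easy to check numerically: a density is log-concave when the second differences of its log-density are non-positive on a grid. A minimal sketch, assuming the Gamma density as the example (shape >= 1 is log-concave, shape < 1 is not); the helper names are hypothetical, not BUGS API:

```python
import math

def log_concave_on_grid(logf, xs):
    # Log-concavity check via second differences of log f on an
    # equally spaced grid -- the condition adaptive rejection
    # sampling (Gilks & Wild, 1992) needs from each full conditional.
    t = [logf(x) for x in xs]
    return all(t[i-1] - 2*t[i] + t[i+1] <= 1e-9
               for i in range(1, len(t) - 1))

def gamma_logpdf(shape):
    # log of the Gamma(shape, rate=1) density, up to the support x > 0
    return lambda x: (shape - 1) * math.log(x) - x - math.lgamma(shape)

xs = [0.01 * i for i in range(1, 1001)]
print(log_concave_on_grid(gamma_logpdf(2.0), xs))  # True: shape >= 1
print(log_concave_on_grid(gamma_logpdf(0.5), xs))  # False: shape < 1
```

The second derivative of the Gamma log-density is -(shape - 1)/x^2, which explains the split at shape = 1 that the grid check detects.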

### Table 2: Functions available in BUGS.

1996

"... In PAGE 13: ...2 Restrictions to log-concave distributions Gilks and Wild (1992) provide the theory and examples of adaptive rejection sampling for log-concave distributions - the derivative-free version (Gilks, 1992) is implemented in BUGS. The prior and the likelihood terms in the sampling distribution must be log-concave if they are not conjugate or discrete: Table 2 in Gilks and Wild (1992) gives several commonly used parameters that can be sampled. This restriction means that if such parameters are logically defined as expressions, then these expressions are generally restricted to linear functions.... In PAGE 19: ...7 Functions The standard operators + - * / are available, and bracketing can be used to any depth. Functions available are given in Table 2. Certain functions (cloglog, log, logit, probit) may be used... ..."

Cited by 16

### Table 1. Some numbers with reciprocals closest to numbers and midpoints


"... In PAGE 4: ... We first look at some of these results and then turn to a detailed performance analysis. 5 Results Table 1 presents a small sample of the results obtained using the methods outlined above. For each of the four major precisions p = 24, 53, 64, 113, we list the 66 floating-point significands whose reciprocals are closest either to floating-point numbers or midpoints.... ..."
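The search described in this snippet can be sketched exactly with rational arithmetic: treat a p-bit significand as an integer m in [2^(p-1), 2^p), compute the significand of 1/m as the exact fraction 2^(2p-1)/m, and rank by distance to the nearest representable number (an integer) or midpoint (a half-integer). A hypothetical reconstruction at a toy precision p = 8 rather than the paper's 24/53/64/113; the function name is made up:

```python
from fractions import Fraction

def closest_reciprocals(p, k=5):
    # Rank odd p-bit significands m by how close the significand of
    # 1/m lies to an integer (a floating-point number) or a
    # half-integer (a rounding midpoint). Exact via Fraction.
    scored = []
    for m in range(2 ** (p - 1) + 1, 2 ** p):
        if m % 2 == 0:
            continue  # even m: 1/m has the same significand as 1/(m/2)
        s = Fraction(2 ** (2 * p - 1), m)       # exact significand of 1/m
        frac = s - (s.numerator // s.denominator)
        d_number = min(frac, 1 - frac)          # distance to nearest integer
        d_mid = abs(frac - Fraction(1, 2))      # distance to nearest midpoint
        scored.append((min(d_number, d_mid), m))
    scored.sort()
    return scored[:k]

for dist, m in closest_reciprocals(8):
    print(m, float(dist))
```

Because an odd m that is not a power of two never divides 2^(2p-1) or 2^(2p), none of the distances is exactly zero; the interesting cases are the smallest positive ones.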

### Table 1: Most likely size of the Durfee square: tested for 0 ≤ n ≤ 5000.

"... √(2m(i)/b). This means that mode{F(n, d)} ≈ √(2n/b). A slight modification of this calculation is required for the families in which we consider the sequence {F(n, d)} only for odd d or even d. The results of our experiments are displayed in Table 1. Each of the families of partitions F(n) in column 1 was checked for n = 0, ..., 5000. Column 2 gives the numerical value of c_F based on computation, and column 3 gives the conjectured analytical expression for c_F. For the family P(n, d), the analytical expression is proven in Section 3."

"... In PAGE 3: ... The results show that for n sufficiently large, |mode{P(n, d)} − mean{P(n, d)}| ≤ 1/2 and that {P(n, d)} is log-concave, but leave open the question whether the Durfee polynomial has all roots real. In Section 4 we explain how the theoretical values for the constants c_F given in Table 1 of Section 2 were found. 2 Statistics of the Durfee Polynomial We consider the Durfee polynomial for several families of partitions F.... ..."
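The statistics in this snippet can be reproduced for a small n by brute force: generate all partitions of n, take each partition's Durfee square size (the largest d such that the d-th largest part is at least d), and tabulate the distribution. A minimal sketch for n = 20, where p(20) = 627; the helper names are made up for illustration:

```python
from collections import Counter

def partitions(n, max_part=None):
    # Generate all partitions of n as weakly decreasing lists.
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(max_part, 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def durfee(parts):
    # Side of the Durfee square: the largest d with parts[d-1] >= d.
    d = 0
    for i, p in enumerate(parts, start=1):
        if p >= i:
            d = i
    return d

n = 20
counts = Counter(durfee(p) for p in partitions(n))
mode_d = max(counts, key=counts.get)
print(dict(counts), "mode:", mode_d)
```

For n = 20 the possible Durfee sizes are 1 through 4 (since d^2 ≤ n), and the counts {P(n, d)} over d form the log-concave sequence the snippet refers to.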

### Table 3: Explicit midpoint and "the" Runge-Kutta method

2002
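The explicit midpoint method named in this caption is the classical second-order Runge-Kutta scheme: evaluate the slope once at the start of the step, use it to reach the interval's midpoint, and advance with the slope there. A minimal sketch (the function names are for illustration only):

```python
import math

def midpoint_step(f, t, y, h):
    # One explicit midpoint (RK2) step:
    # y_{n+1} = y_n + h * f(t_n + h/2, y_n + (h/2) * f(t_n, y_n))
    k1 = f(t, y)
    return y + h * f(t + h / 2, y + (h / 2) * k1)

def integrate(f, t0, y0, t1, steps):
    # March from t0 to t1 with fixed-size midpoint steps.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = midpoint_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e; global error is O(h^2)
approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
print(abs(approx - math.e))
```

Halving h should cut the error by roughly a factor of four, which is the second-order behavior that distinguishes this scheme from forward Euler.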

### Table 13. Data utilisation ratios of LOP-ALL using ad hoc selection methods compared to random sampling.

in Active Learning and Logarithmic Opinion Pools for HPSG Parse Selection (Cambridge University Press)

2006

"... In PAGE 25: ... It is also important to consider sequential selection, which is a default strategy typically adopted by annotators for many domains. Table 13 shows the results of testing LOP-ALL with sequential sampling and sampling by shorter sentences and longer sentences. Sampling by lower and higher ambiguity was for the most part on par with sampling by shorter and longer sentences, respectively.... ..."

### Table 2. Percentile Scores Achieved by the Neighbor Divergence Scoring Method

2002

"... In PAGE 4: ... However, most random groups fail to make the cutoff and the precision is higher. In Table 2 we have listed the percentile of the score assigned by the method for the different functional groups relative to the 1900 random groups. Neighbor divergence assigned 15 of the 19 functional groups scores that exceeded all of the 1900 random groups; another 3 functional groups had scores exceeding 98% of the random groups (Table 2).... ..."

Cited by 23
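The percentile scoring described in this snippet is an empirical null comparison: a functional group's score is ranked against the scores of many random groups. A minimal sketch of that ranking step with synthetic stand-in scores (the neighbor-divergence scoring function itself is not reproduced here, and all names below are made up):

```python
import random

def percentile_vs_random(score, random_scores):
    # Percentage of random-group scores that the observed score exceeds.
    below = sum(1 for r in random_scores if r < score)
    return 100.0 * below / len(random_scores)

random.seed(0)
# synthetic stand-ins for the 1900 random-group scores in the snippet
null_scores = [random.gauss(0.0, 1.0) for _ in range(1900)]
print(percentile_vs_random(5.0, null_scores))  # a strong score beats ~all
print(percentile_vs_random(0.0, null_scores))  # a null-like score: ~50
```

A score at the 100th percentile corresponds to the "exceeded all of the 1900 random groups" case in the snippet; the 98% cutoff corresponds to a percentile of at least 98.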