### Table 1: Pattern-finding by simplicity: A sample of research

2003

"... In PAGE 4: ... Simplicity as a cognitive principle So simplicity appears to go some way towards meeting criterion (i): justifying why patterns should be chosen according to simplicity. What about criterion (ii)? Does simplicity explain empirical data in cognitive science? Table 1 describes a range of models of cognitive phenomena, from low- and high-level visual perception to language processing, memory, similarity judgements, and mental processes in explicit scientific inference. The breadth of domains in which simplicity has proved to be a powerful organizing principle in cognitive modelling is encouraging. ... In PAGE 8: ... Table 1: Many pattern-finding problems have been successfully approached by mathematicians and computer scientists using a simplicity principle. In many of these areas, the simplicity principle has also been used as a starting point for modelling ... ..."

Cited by 15

### (Table 1) describes the actual data. Hence, we will compute residuals only for the first row of cells. For simplicity of notation, these residuals will be written as r_j without the population subscript. Substituting (8) into the numerator of the residual definition, (9), we arrive at the adjusted residual, tailored for our table,

1999

Cited by 5
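The snippet's own definitions (8) and (9) are not reproduced above, so the exact form of its adjusted residual cannot be confirmed; as an illustration, a minimal sketch assuming the standard Haberman-style adjusted residual for a two-way table (observed minus expected, scaled by an estimated standard error):

```python
import math

def adjusted_residuals(table):
    """Haberman-style adjusted residuals for the first row of a
    two-way contingency table (a sketch; the snippet's own
    definitions (8) and (9) may differ)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    residuals = []
    for j, observed in enumerate(table[0]):
        expected = row_totals[0] * col_totals[j] / n       # expected count
        denom = math.sqrt(expected
                          * (1 - row_totals[0] / n)
                          * (1 - col_totals[j] / n))       # estimated std. error
        residuals.append((observed - expected) / denom)
    return residuals

print(adjusted_residuals([[30, 10], [20, 40]]))
```

Large absolute residuals (roughly beyond ±2) flag cells that deviate notably from independence.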

### Table 6: Test results for the hill-climbers As can be seen, the random sampling does worst, despite the high sampling rate for F4, F5, F6 and F7. Overall, the hill-climbers outperform the GAs, with DRHC2 giving the best performance. This algorithm does extremely well over this test set, with an average hit ratio of 0.8. It should be pointed out that these results show only hits on the global optimum, and do not show, for example, whether an algorithm very quickly reached a good sub-optimum but failed to reach the global one. Perhaps most surprising is the effectiveness and computational simplicity of the hill-climbers. Most

1995

Cited by 16
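The paper's DRHC2 variant and its test functions F4–F7 are not shown in the snippet, but the general shape of a bit-flip hill-climber is easy to sketch. The function names and parameters below are illustrative only (here it maximizes the toy OneMax function, the number of 1-bits):

```python
import random

def hill_climb(fitness, n_bits, max_evals, rng):
    """A basic bit-flip hill-climber (a generic sketch; the paper's
    DRHC2 uses its own neighbourhood and restart rules)."""
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_f = fitness(best)
    evals = 1
    while evals < max_evals:
        i = rng.randrange(n_bits)
        cand = best[:]
        cand[i] ^= 1                      # flip one randomly chosen bit
        f = fitness(cand)
        evals += 1
        if f >= best_f:                   # accept ties to keep exploring plateaus
            best, best_f = cand, f
    return best, best_f

rng = random.Random(0)
sol, f = hill_climb(sum, 20, 2000, rng)   # OneMax: maximize the count of 1s
print(f)
```

Its computational simplicity is the point: one evaluation per step, no population, no recombination.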

### Table 2. Comparison of CPU times: each line displays the length of the strings and the CPU times (in seconds) spent computing the different approximate generalized median strings (average results for the 10 classes of the SIMPLIcity base described in [5], each class having 100 strings).

"... In PAGE 8: ... improvement rises to 16.56 (resp. 19.41 and 21.29). Table 2 compares the CPU times spent computing the different approximations on a 2.16 GHz Intel dual core with a 2 MB cache. ... ..."
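The approximation algorithms being timed are not defined in the snippet; as an illustration, one common cheap approximation to the generalized median string is the *set median*: the member of the set minimizing the summed edit distance to all others. A sketch with a hypothetical word list:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

def set_median(strings):
    """Set median: the member minimizing total edit distance to the set --
    a cheap approximation to the (NP-hard) generalized median string."""
    return min(strings, key=lambda s: sum(levenshtein(s, t) for t in strings))

print(set_median(["karolin", "kathrin", "karlin", "carolin"]))
```

The true generalized median may lie outside the set, which is why the literature compares several approximations and their CPU costs.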

### Table 1 shows on the right-hand side the message used in the deployment of a hypothetical diff-serv++ service on all nodes of a path between two customer sites (represented by the A and C top-level nodes in Fig. 5). The computations executed for this example are shown in Table 2. For the sake of simplicity, the straightforward update of other fields in the oMsg (such as srcId) and fields that do not change compared to the iMsg message are not shown.

2001

Cited by 3

### Table 2: Roots of 7169x^2 - 8686x + 2631 = 0, computed with single-precision MCA. For simplicity, we used IEEE single-precision floating-point representation in our implementation of MCA. Similar results can be produced in any precision. The C source code for MCA and all examples here is available from the authors. In this table, notice that the standard deviation estimates the absolute error in the computed roots. That is, the roots in each run are within a few standard deviations of the exact solution. Furthermore, the computed average also lies within these bounds. So the standard deviation gives a rough estimate of the error in the computed average. Running a program n times with MCA ultimately gives, for each value x being computed, n samples x_i that disagree on the random digits of their errors. These samples have an underlying
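Real MCA randomizes the rounding of every floating-point operation; the authors' C implementation is not shown here. As a rough stand-in (an assumption, not their code), one can jitter the coefficients at single-precision level and look at the scatter of the computed roots of this deliberately ill-conditioned quadratic (its discriminant is only 40):

```python
import math
import random
import statistics

def mca_roots(a, b, c, n=1000, eps=2 ** -23, rng=None):
    """Crude Monte Carlo Arithmetic stand-in: perturb the inputs at
    single-precision level (eps = 2^-23) and solve repeatedly; the
    spread of the samples then estimates the numerical error."""
    rng = rng or random.Random(0)
    jitter = lambda x: x * (1 + rng.uniform(-eps, eps))
    roots = []
    for _ in range(n):
        aa, bb, cc = jitter(a), jitter(b), jitter(c)
        d = math.sqrt(bb * bb - 4 * aa * cc)   # discriminant is tiny: error-prone
        roots.append((-bb + d) / (2 * aa))
    return statistics.mean(roots), statistics.stdev(roots)

mean, sd = mca_roots(7169, -8686, 2631)
print(mean, sd)
```

The standard deviation of the samples is far larger than single-precision epsilon times the root, exposing the instability that a single deterministic run would hide.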

### Table 4. Summary of computational transformations

"... In PAGE 6: ... The definitions for skin friction and Reynolds number in the computational method are the same as Eqs. (5) and (6); therefore, the transformations shown in Table 4 are required to compare the computational results with most of the correlations. For simplicity in the analysis, several theories use the single power relation of (24) 0.... ..."

### Table 2: Roots of 7169x^2 - 8686x + 2631 = 0, computed with single-precision MCA. Notice that, in Table 2, the standard deviation estimates the absolute error in the computed roots. That is, the roots in each run are within a few standard deviations of the exact solution. [1] We used single precision for simplicity in the implementation. Similar results can be produced in any precision.

### Table 3: The correlation between the mapped scores and the human evaluation scores. The tabulated values are the correlation measures for each of the four calibration systems, as computed from the samples provided by each of the four systems, the average of those results, and all the data combined. The systems are: 1) GIFT; 2) SIMPLIcity; 3) ROMM-CALIB; and 4) Keywords.

2005

"... In PAGE 5: ... 4.3 Mapping CBIR system scores to human evaluation results Table 3 provides the correlations between the mapped score and the adjusted human score for all three fitting methods. In order to investigate sources of bias, we computed results for each of the four calibration systems, evaluated using only the images selected by each of the four. ... ..."

Cited by 1
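The specific correlation measure tabulated in this Table 3 is not named in the snippet; a plain Pearson correlation between mapped system scores and human scores is the natural baseline. A minimal sketch with illustrative (hypothetical) data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient -- one plausible choice for the
    snippet's correlation measure (the paper's exact measure is not shown)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# mapped CBIR system scores vs. (hypothetical) human evaluation scores
print(pearson([0.1, 0.4, 0.5, 0.9], [0.2, 0.3, 0.6, 0.8]))
```

Values near 1 would indicate that the score-mapping step preserves the human ranking well.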

### Table 1: Algorithm comparison In this example the hybrid algorithm performs best, while the AABB algorithm shows the worst behaviour for both qualification criteria (Table 1). For exhaustive testing we set up a test-case database with over 50 surface types and more than 1000 test cases. Generally, it can be said that tight bounding volumes lead to fewer iterations but are more expensive to compute than simple ones, so there is a trade-off between simplicity and tightness. The combination of both bounding volumes leads in many cases to better time behaviour than the use of just one of them.

1998

Cited by 1
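The snippet's hybrid algorithm and surface database are not shown, but the "simple" end of the trade-off is easy to illustrate: an axis-aligned bounding box (AABB) is cheap to build and to test for overlap via per-axis interval checks, at the cost of enclosing the surface loosely. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: the cheap, loose side of the
    simplicity/tightness trade-off discussed above."""
    lo: tuple  # minimum corner, one value per axis
    hi: tuple  # maximum corner, one value per axis

    def overlaps(self, other):
        # Boxes overlap iff their intervals overlap on every axis.
        return all(a_lo <= b_hi and b_lo <= a_hi
                   for a_lo, a_hi, b_lo, b_hi
                   in zip(self.lo, self.hi, other.lo, other.hi))

a = AABB((0, 0, 0), (2, 2, 2))
b = AABB((1, 1, 1), (3, 3, 3))
c = AABB((5, 5, 5), (6, 6, 6))
print(a.overlaps(b), a.overlaps(c))
```

A tighter volume (e.g. an oriented box or convex hull) would reject more non-colliding pairs early, but each test costs more, which is exactly the trade-off the result entry describes.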