### Table 1. Classification of prototypes.

2001

"... In PAGE 5: ... Thus, a prototype can vary from a simple paper mock-up to a fully functional executable system. A classification of different types of prototypes is presented in Table 1. Sutcliffe (1997a) and McGraw and Harbison (1997) propose that the use of prototypes, mock-ups, examples, scenes, and narrative descriptions of contexts could all be called scenario-based approaches. ..."

Cited by 1

### Table 8. Classification of probabilistic structure classes (second phase data)

2006

"... In PAGE 16: ... them hard), the Mneimneh-Sakallah suite (s27, 298, ..., a total of 52 instances of which 40 are hard), and the Katz suite (jmc quant, 20 instances, of which 13 are hard). In Table 8 we show the classification of the probabilistic structure classes included in the evaluation test set according to the solvers admitted to the second phase. Table 8 is arranged similarly to Table 7, where the data about Nested Counterfactuals has been summarised in a single entry, QHorn instances are divided into Horn (Horn) and renamable Horn (renHorn) families, and Robot instances are presented split into four families corresponding to the number of obstacles known in advance. ... In PAGE 16: ... Model A instances are split into 24 classes corresponding to six different values of the alternation depth (from 0 to 5 alternations), each family comprised of instances with a different number of variables (20, 40, 80, 160). According to the data summarised in Table 8, this part of the evaluation second phase consisted of 2640 instances, of which 2029 have been solved, 1126 declared satisfiable and 903 declared unsatisfiable, resulting in 126 easy, 1687 medium, 216 medium-hard, and 611 hard instances. These results indicate that, overall, the selected probabilistic classes are within the capabilities of current state-of-the-art QBF solvers, but in some cases they are as challenging as structured ones. ..."

Cited by 3

### Table 5 Proximity matrix for generic character

"... In PAGE 7: ... 3 Clustering with a Linkage Algorithm Using the font proximity matrix, the font pairs having minimum average distance are merged and a new cluster is formed. After each merge step the proximity matrix in Table 5 is updated by defining the new distances between clusters, and the number of rows and columns of the proximity matrix is decreased by one. As clusters become sizeable, the single linkage method suffers from false merges. ..."
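
The merge-and-update loop this snippet describes can be sketched as follows. This is a generic agglomerative pass over a proximity matrix (shown here with average linkage, since the excerpt notes that single linkage suffers from false merges); the distance matrix and target cluster count are hypothetical stand-ins for the paper's font data:

```python
def average_linkage(dist, n_clusters):
    """Agglomerative clustering on a symmetric list-of-lists distance
    matrix: repeatedly merge the closest pair, then shrink the matrix
    by one row/column, as described in the excerpt."""
    clusters = [[i] for i in range(len(dist))]
    dist = [row[:] for row in dist]  # work on a copy
    while len(clusters) > n_clusters:
        # find the pair of clusters with minimum distance
        i, j = min(((a, b) for a in range(len(dist))
                    for b in range(a + 1, len(dist))),
                   key=lambda p: dist[p[0]][p[1]])
        ni, nj = len(clusters[i]), len(clusters[j])
        # redefine distances to the merged cluster (size-weighted average)
        for k in range(len(dist)):
            if k != i and k != j:
                d = (ni * dist[i][k] + nj * dist[j][k]) / (ni + nj)
                dist[i][k] = dist[k][i] = d
        clusters[i] += clusters[j]
        # drop row/column j: the matrix loses one row and column per merge
        del clusters[j]
        del dist[j]
        for row in dist:
            del row[j]
    return clusters
```

With average linkage, a merge is driven by all pairwise distances between two clusters rather than the single closest pair, which is what dampens the chaining/false-merge behavior the excerpt attributes to single linkage.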

### Table 1: Classification of Probabilistic Queries. Query Class Entity-based Value-based

2003

Cited by 101

### Table 4. Results of the Rank-Based Proximity Swap Test

"... In PAGE 18: ... Expect P(a) to be a worse approximation of interval length for MAINTAIN than for TAXES. Table 4 confirms this. Suppose one desires a target correlation of the swapped to the unswapped value of 0.... In PAGE 20: ... Recall that K₀ is the average percentage change induced by the rank swap. Table 4 shows that the theory provides good predictions for the relationship between K₀ and P(a). Again, the distribution of each swapping interval is assumed to be approximately uniform. ... In PAGE 20: ... Compare values (within each field) of the "PCT OF RECORDS IN SWAPPING INTERVAL" column with the "AVERAGE ABSOLUTE PCT CHANGE/OBS." column from Table 4. The columns are almost directly linearly correlated. ... In PAGE 20: ... When P(a) doubles, the observed value of K₀ approximately doubles. Also use Table 4 to confirm the inverse relationship between the observed correlation, R(a, a′), and the corresponding observed value of K₀. This is a logical consequence of the validity of Theorems 3 and 4. ... In PAGE 22: ... efficient code could be written in another environment (Unix, C, etc.). The programming code exists and is easy to use and modify. Does this code execute in a relatively short amount of time? Table 4 shows the CPU time required to execute the swap (Module 5). ... In PAGE 42: ... When implemented on 1993 Annual Housing Survey data, the above estimate proved amazingly accurate in spite of all the ideal assumptions (uniform distributions, independent variables, and constants instead of expectations). Table 4 shows the results. ..."
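
The mechanics the excerpt analyzes — each value is exchanged with a partner whose rank lies within a narrow window, so the interval width P(a) drives the average percentage change K₀ — can be sketched as follows. This is a generic rank-based proximity swap, not the paper's actual Module 5; the 5% window in the usage below is a hypothetical choice:

```python
import random

def rank_swap(values, pct, rng=None):
    """Swap each record's value with an unswapped partner whose rank lies
    within pct% of the records; pct plays the role of P(a) in the excerpt."""
    rng = rng or random.Random(0)
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # record indices by rank
    window = max(1, int(n * pct / 100))                # swapping interval width
    swapped, used = list(values), set()
    for r, i in enumerate(order):
        if i in used:
            continue
        # candidate partners: still-unswapped records in the rank window above r
        cands = [order[s] for s in range(r + 1, min(n, r + 1 + window))
                 if order[s] not in used]
        if cands:
            j = rng.choice(cands)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            used.update((i, j))
    return swapped
```

Because every partner comes from a narrow rank window, the field keeps its marginal distribution exactly while each record moves only slightly; widening pct increases the average absolute change per record, which mirrors the near-linear K₀-versus-P(a) relationship the excerpt reports.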

### Table 1: Classification accuracy (%) on image data comparing our method (LDM) vs. Euclidean (EDM), probabilistic global metric (PGDM) and support vector machine (SVM).

2006

"... In PAGE 5: ... We refer to this algorithm as Probabilistic Global Distance Metric Learning, or PGDM for short. Experimental Results for Image Classification. Classification Accuracy. The classification accuracy using Euclidean distance, the probabilistic global distance metric (PGDM), and the local distance metric (LDM) is shown in Table 1. Clearly, LDM outperforms the other two algorithms in terms of classification accuracy. ... In PAGE 5: ... We estimate the top eigenvectors based on the mixture of labeled and unlabeled images, and these eigenvectors are used to learn the local distance metric. The classification accuracy and the retrieval accuracy of local distance metric learning with unlabeled data are presented in Table 1 and Figure 3. We observe that both the classification and retrieval accuracy improve noticeably when unlabeled data is available. ..."

Cited by 3
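
What the compared methods have in common can be illustrated with a short sketch (this is not the paper's LDM or PGDM training procedure, which is what actually learns the matrix from labeled and unlabeled data): classification under any of these metrics reduces to nearest-neighbor search with a positive semi-definite matrix M, where M = I recovers the Euclidean (EDM) baseline of Table 1:

```python
import numpy as np

def metric_knn_predict(X_train, y_train, X_test, M, k=1):
    """k-NN classification under d(x, z)^2 = (x - z)^T M (x - z).
    M = np.eye(d) gives plain Euclidean distance; a learned M
    (global or local) is what PGDM/LDM-style methods supply."""
    preds = []
    for x in X_test:
        diff = X_train - x                             # (n, d) differences
        d2 = np.einsum('ij,jk,ik->i', diff, M, diff)   # quadratic form per row
        nearest = np.argsort(d2)[:k]
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])        # majority vote
    return np.array(preds)
```

The accuracy differences the table reports come entirely from how M is chosen, since the classifier itself is fixed; that separation is what makes metric learning a drop-in improvement over the Euclidean baseline.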

### Table 2. The spectral (vectorial) version of SAR descriptors of Table 1.

in Introducing Spectral Structure Activity Relationship (S-SAR) Analysis. Application to Ecotoxicology

2007

"... In PAGE 7: ... Basically, Table 1 is reconsidered under the form of Table 2 where, for completeness, the unity column has been added. ... In PAGE 10: ... This special feature of S-SAR will be illustrated later, in the application section. It is now clear that, once expanded, observing its first column, the determinant (17) generates the sought full solution of the basic SAR problem of Table 2, with minimization of errors included and independent of the orthogonalization order. Remarkably, apart from being conceptually new through considering the spectral (orthogonal) expansion of the input data space (of both activity and descriptors) through the system (16), the present method also has the computational advantage of being simpler than the classical standard way of treating the SAR problem presented earlier. ..."

### Table 1: Comparison of average precisions for various combinations of methods. Symbols denote the names of various techniques: qwgt = query weighting, qexp = query expansion, prox = proximity information, proto = prototype-based ranking.

1998

"... In PAGE 4: ... 4 Experimental Results The methods described in the previous section have been used in various combinations for the ad hoc query on TREC-7 collections. Table 1 summarizes the experimental results. ..."

Cited by 3

### Table 2: Parameters to the Probabilistic Counting Algorithm

1996

"... In PAGE 8: ... The algorithm based on probabilistic counting estimates the size of the cube to within a theoretically predicted bound. The values of the parameters we used are shown in Table 2. The estimate is accurate under widely varying data distributions, ranging from uniform to highly skewed. ..."

Cited by 66
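
The underlying technique — Flajolet-Martin probabilistic counting with stochastic averaging (PCSA), which estimates a distinct count from the pattern of trailing zeros in hashed values — can be sketched as follows. The hash function, the 64-bitmap setting, and the string encoding of items are illustrative choices, not the parameters the paper reports in Table 2:

```python
import hashlib

def fm_estimate(items, num_maps=64):
    """Flajolet-Martin / PCSA distinct-count estimate: each item sets one
    bit (the trailing-zero count of its hash) in one of num_maps bitmaps;
    the position of the lowest unset bit, averaged over bitmaps, gives
    log2 of the cardinality up to a known correction constant."""
    PHI = 0.77351  # Flajolet-Martin bias-correction constant
    bitmaps = [0] * num_maps
    for it in items:
        h = int.from_bytes(hashlib.sha1(str(it).encode()).digest()[:8], 'big')
        m = h % num_maps            # stochastic averaging: pick a bitmap
        h //= num_maps
        r = (h & -h).bit_length() - 1 if h else 63  # trailing zeros of h
        bitmaps[m] |= 1 << r
    # R = index of the lowest 0 bit in each bitmap, averaged over bitmaps
    total = 0
    for b in bitmaps:
        r = 0
        while b & (1 << r):
            r += 1
        total += r
    return int(num_maps / PHI * 2 ** (total / num_maps))
```

Memory is just num_maps machine words regardless of how many distinct values stream past, and duplicates only re-set bits that are already set, which is why this family of estimators is a natural fit for sizing a data cube before materializing it.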