### Table 1: Statistics of the merged lexical prefix tree structure when using across-word phoneme models.

2000

"... In PAGE 3: ... For the VERBMOBIL task this leads to 208 right context indices that denote the different sets of right across-word contexts including the 44 original sets containing only one right context. Table 1 shows some characteristic figures of the structure of our lexical prefix tree for the VERBMOBIL task. As can be seen, we obtained a reduction of the number of fan-out arcs by 45% which reduced the computational effort by roughly 25%.... ..."

Cited by 1
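The excerpt above reports arc savings from merging words into a lexical prefix tree. As a minimal sketch (the phoneme sequences below are illustrative, not from the paper), words sharing a phoneme prefix share the same initial arcs, so the merged tree has fewer arcs than a flat lexicon:

```python
# Minimal sketch of a lexical prefix tree (trie) over phoneme sequences.
# Words that share a prefix share the same initial arcs, which is what
# reduces the number of arcs the decoder must evaluate.

class TrieNode:
    def __init__(self):
        self.children = {}   # phoneme -> TrieNode
        self.word = None     # set at the node where a word ends

def insert(root, phonemes, word):
    node = root
    for p in phonemes:
        node = node.children.setdefault(p, TrieNode())
    node.word = word

def count_arcs(node):
    # Each child edge is one arc; shared prefixes are counted only once.
    return len(node.children) + sum(count_arcs(c) for c in node.children.values())

root = TrieNode()
insert(root, ["s", "p", "ii", "ch"], "speech")
insert(root, ["s", "p", "ii", "k"], "speak")
# A flat lexicon would need 4 + 4 = 8 arcs; the merged tree needs only 5,
# because the shared prefix "s p ii" is stored once.
print(count_arcs(root))  # → 5
```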

### Table 5: Prefix distribution (columns: LEVEL, TRIE, PATRICIA, PREFIX TREE, LEVEL TREE)

2004

"... In PAGE 57: ... The main drawback is the size of the third lookup table with 24 address bits. However, the number of prefixes with length above 24 is very limited according to Table 5 [55], and it is therefore more efficient to store the prefixes in a CAM. Search speed can be further improved by increasing the number of leaf nodes.... In PAGE 58: ... It contains 29587 prefixes. Table 5 shows the number of prefixes at each level of the trie/tree. The prefix distribution is depicted graphically in Figure 32.... ..."
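The excerpt argues that since few prefixes exceed 24 bits, those can be moved to a small CAM while a table covers the rest. A hedged sketch of the underlying longest-prefix-match operation (the routing entries below are illustrative, not from the paper's database):

```python
# Sketch of IPv4 longest-prefix matching, done as a linear scan for clarity.
# In hardware, prefixes up to /24 can live in a direct lookup table and the
# few longer ones in a CAM; the matching rule is the same either way.

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def longest_prefix_match(table, ip):
    """table maps (prefix_int, length) -> next hop."""
    addr = ip_to_int(ip)
    best, best_len = None, -1
    for (prefix, length), hop in table.items():
        mask = ((1 << length) - 1) << (32 - length) if length else 0
        if (addr & mask) == prefix and length > best_len:
            best, best_len = hop, length
    return best

table = {
    (ip_to_int("10.0.0.0"), 8): "A",
    (ip_to_int("10.1.0.0"), 16): "B",
    (ip_to_int("10.1.2.0"), 24): "C",
}
print(longest_prefix_match(table, "10.1.2.77"))  # most specific match wins → C
```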

### Table 2: Accuracies using Adaptive Prefix Tree (APT), Naive Bayes, KNN (k = 3), C4.5 and ID3

"... In PAGE 3: ... The adaptive prefix tree has been implemented and experimented using Weka1 (Waikato Environment for Knowledge Analysis) [22]. Table 2 shows the average accuracy (correct classification rate) of the adaptive prefix tree compared to Naive Bayes [5], 3-NN (k-Nearest Neighbor with k = 3), C4.5 [20] and ID3 [9], using the Weka default values [22] for parameters specific to each classifier.... ..."

### Table 6 shows the total number of nodes and the number of dummy nodes. A dummy node is an intermediate node that does not contain any routing information. The average and maximum height do not differ much between the patricia trie and the prefix tree, but the number of nodes is almost twice as high for the patricia trie. Note that the level tree gives a small reduction in the average and maximum height compared to the prefix tree. However, the advantage of the level tree is expected to be larger for a more hierarchically organised prefix database, e.g. in combination with IPv6. Table 6: Comparison

2004
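The entry above distinguishes total nodes from dummy nodes, i.e. internal nodes that carry no routing entry. A minimal sketch of that count over a binary trie (the two prefixes below are illustrative):

```python
# Sketch of counting "dummy" nodes in a binary trie: internal nodes that
# carry no routing entry and exist only on the path to longer prefixes.

class Node:
    def __init__(self):
        self.children = {}   # '0'/'1' -> Node
        self.route = None

def insert(root, prefix, hop):
    node = root
    for bit in prefix:
        node = node.children.setdefault(bit, Node())
    node.route = hop

def count(node):
    total, dummy = 1, int(node.route is None)
    for child in node.children.values():
        t, d = count(child)
        total, dummy = total + t, dummy + d
    return total, dummy

root = Node()
insert(root, "0001", "A")
insert(root, "0010", "B")
# Nodes: root, 0, 00, 000, 0001, 001, 0010 = 7 total; only the two leaves
# carry routes, so 5 are dummies. Path compression (patricia) would remove
# single-child dummies at the cost of storing bit indices per node.
print(count(root))  # → (7, 5)
```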

### Table 1 The number of nodes for prefix trees with 5, 4, and 3 levels. The levels correspond to prefixes with lengths {16, 20, 24, 28, 32}, {16, 20, 24, 32} and {16, 24, 32}, respectively.

"... In PAGE 4: ... Table 1 shows the sizes of prefix trees with three, four and five levels for different routing tables. The routing tables come from the IPMA project [22], and are snapshots of real tables from four large backbone routers on the Internet.... In PAGE 4: ... The last row gives the amount of memory required to store the prefix tree. Table 1 shows that larger routing tables yield fewer nodes per route than smaller tables. In other words, larger tables have a higher degree of overlap between routes, which gives attractive scaling properties in terms of memory consumption.... ..."
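The caption's level sets such as {16, 24, 32} suggest controlled prefix expansion: a prefix whose length is not an allowed level is expanded to the next allowed length. A hedged sketch of that step (the level set and prefixes are taken from the caption; the code itself is illustrative):

```python
# Sketch of controlled prefix expansion for the 3-level set {16, 24, 32}:
# a prefix is replaced by all prefixes at the next allowed length that it
# covers. Fewer levels mean fewer tree levels to search, at the cost of
# storing more (expanded) prefixes.

LEVELS = [16, 24, 32]

def expand(prefix_bits):
    """Expand a '0'/'1' string to the next allowed level in LEVELS."""
    target = min(l for l in LEVELS if l >= len(prefix_bits))
    extra = target - len(prefix_bits)
    if extra == 0:
        return [prefix_bits]
    return [prefix_bits + format(i, f"0{extra}b") for i in range(1 << extra)]

# A 22-bit prefix expands into 2^(24-22) = 4 prefixes at level 24.
print(len(expand("10" * 11)))  # → 4
```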

### Table 3. Comparison experiments between the two Janus de- coders. (RTF = real time factor, 3-p = three pass decoder, 1-p = one pass, single prefix tree decoder)

2002

Cited by 1

### Table 1: Algorithm Characteristics (K denotes the size of the longest frequent itemset, I denotes the number of frequent items; the C2 array optimization uses a 2D array for counting the candidate 2-itemsets, instead of using Hash Trees, Prefix Trees or Tries.)

1999

"... In PAGE 9: ...Table 1: Algorithm Characteristics (K denotes the size of the longest frequent itemset, I denotes the number of frequent items; the C2 array optimization uses a 2D array for counting the candidate 2-itemsets, instead of using Hash Trees, Prefix Trees or Tries.) Table 1 presents a summary of the major differences among all the algorithms reviewed above. These algorithmic characteristics should aid understanding of the parallel algorithms presented below.... ..."

Cited by 73

### Table 1. Algorithm characteristics. K denotes the size of the longest frequent itemset. C2 array optimization uses a 2D array to count candidate 2-itemsets rather than using hash trees or prefix trees.

"... In PAGE 4: ... The best approach was MaxClique, which outperformed Apriori and Partition by more than an order of magnitude and Eclat by a factor of 2 or more. Table 1 presents a summary of the major differences among all the algorithms reviewed thus far. Parallel ARM algorithms Researchers expect parallelism to relieve current ARM methods from the sequential bottleneck, providing scalability to massive data sets and improving response time.... ..."
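The caption above describes the C2 array optimization: counting candidate 2-itemsets in a plain 2D array indexed by item id rather than in a hash tree or prefix tree. A minimal sketch (the item ids and transactions below are illustrative):

```python
# Sketch of the C2 array optimization for candidate 2-itemsets: every pair
# of items in a transaction increments one cell of a 2D array, avoiding
# hash-tree or prefix-tree traversal for the (dense) 2-itemset pass.

from itertools import combinations

def count_pairs(transactions, num_items):
    counts = [[0] * num_items for _ in range(num_items)]
    for t in transactions:
        for i, j in combinations(sorted(set(t)), 2):
            counts[i][j] += 1  # upper triangle only, since i < j
    return counts

transactions = [[0, 1, 2], [0, 2], [1, 2], [0, 1, 2, 3]]
counts = count_pairs(transactions, 4)
print(counts[0][2])  # support of itemset {0, 2} → 3
```

In practice only the upper triangle is stored (a 1D triangular array), halving memory; the full 2D array is kept here for readability.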

### Table 2.2: Some discovery algorithms, part II. Legend: Dyad: supports the search for two elements close to each other, PD: Pattern-driven, SD: Sample-driven, IC: Information Content, Suffix: Suffix tree, Prefix: Prefix tree

2005