### Table 1. Performance evaluation of incremental data bubbles and the resulting clustering structure. (Columns: Dataset, Scheme, F score, Compactness)

2004

"... In PAGE 10: ... The performance of OPTICS is determined using the F score measure [13] (where F = 2*p*r/(p+r), p is precision and r is recall). We notice from Table 1 that the F score of the clustering algorithm (OPTICS) using our incremental scheme is always very close to (and sometimes higher than) the F score when using.... In PAGE 10: ... If the repositioning of the representatives of incremental data bubbles is effective, then the overall compactness of the incremental data bubbles should not (significantly) exceed the overall compactness of the completely rebuilt data bubbles. As shown in Table 1, our dynamic scheme is very effective in (re)positioning data bubbles. Incremental data bubbles even have a lower compactness than the completely rebuilt ones in many experiments.... ..."
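The excerpt defines the F score as F = 2*p*r/(p+r), where p is precision and r is recall. A minimal sketch of that formula (function name is illustrative):

```python
def f_score(precision, recall):
    """F score as defined in the excerpt: F = 2*p*r / (p + r)."""
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both are 0
    return 2 * precision * recall / (precision + recall)

# e.g. a clustering evaluated at precision 0.9 and recall 0.8
print(f_score(0.9, 0.8))  # → approx 0.847
```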

Cited by 4

### Table 1. Memory requirements of the different data structures for the Smallbucky data set. The space requirements for encoding just the reference mesh are 3,688,912 or 6,802,832 bytes with an indexed data structure without or with adjacencies, respectively. We report results for an edge-based and a vertex-based MT. The first column describes the type of MT. The second to fourth columns give the storage cost of the general data structure and its overhead factors, respectively. The last four columns give the storage cost of the compact data structure and its compression factors with respect to storing the reference mesh without or with adjacencies, or the MT with the explicit structure, respectively. All storage costs are in bytes.

"... In PAGE 14: ... For this reason, vertex-based updates generate MTs with fewer tetrahedra and fewer dependency links. Table 1 reports storage costs for the different data structures, together with overhead/compression factors with respect to a data structure... ..."

### Table routing is a standard solution to the shortest-path routing problem for arbitrary networks. Each router in the network stores a table listing, for each possible destination, the output port that should be used to send a message along a shortest path. This solution guarantees shortest paths but requires O(n log d) bits of memory per node, for an n-router network of maximum degree d. The compact routing problem is to implement routing schemes that use minimal memory on each router. The interval routing scheme was introduced by Santoro and Khatib in [19] to improve the space requirements of routing tables. In [16], we showed that it is not possible to find a compact routing scheme for all networks, i.e., universal routing schemes that produce in each router compact data structures for the routing information. We showed that a constant fraction of the routers of the network need to store Ω(n log d) bits of information, independently of the routing scheme used. Supported by the research programs PRS and ANM of the CNRS, by the CNRS-INRIA project ReMap, and by the DRET.
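The table-routing scheme described above (one output port stored per destination, along a shortest path) can be sketched as follows. This is a minimal BFS-based illustration, not the interval routing scheme of Santoro and Khatib; all names are illustrative:

```python
# Minimal sketch of standard table routing: each router stores, per
# destination, the output port leading along a shortest path. Storing
# one port (log d bits) for each of n destinations gives O(n log d)
# bits per node, as in the excerpt.
from collections import deque

def build_routing_table(adj, source):
    """BFS from `source`; returns {destination: output_port}, where the
    output port is the index in adj[source] of the first hop."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    table = {}
    for dest in parent:
        if dest == source:
            continue
        # walk back to find the neighbour of `source` on the path
        hop = dest
        while parent[hop] != source:
            hop = parent[hop]
        table[dest] = adj[source].index(hop)
    return table

# 4-cycle 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(build_routing_table(adj, 0))  # {1: 0, 3: 1, 2: 0}
```

Interval routing compresses this table by labelling nodes so that each port covers an interval of destinations, which is what makes the Ω(n log d) lower bound for universal compact schemes notable.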

1996

Cited by 3

### Table 1: A transaction database as running example.

"... In PAGE 2: ...1 Frequent-Pattern Tree To design a compact data structure for efficient frequent pattern mining, let's first examine an example. Example 1 Let the transaction database, DB, be (the first two columns of) Table 1 and ξ = 3. A compact data structure can be designed based on the following observations.... ..."
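The observation behind the excerpt's compact structure is that transactions sorted by item frequency and inserted into a prefix tree share nodes wherever they share prefixes. A simplified sketch of that idea (not the FP-tree implementation from the cited paper; names and the dictionary-based tree are assumptions):

```python
# Hedged sketch: compress transactions into a prefix tree after
# dropping infrequent items and sorting by descending frequency.
from collections import Counter

def build_prefix_tree(transactions, min_support):
    counts = Counter(item for t in transactions for item in t)
    frequent = {i for i, c in counts.items() if c >= min_support}
    root = {}  # each node maps item -> [count, children-dict]
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-counts[i], i))
        node = root
        for item in items:
            entry = node.setdefault(item, [0, {}])
            entry[0] += 1          # shared prefixes share this node
            node = entry[1]
    return root

txns = [["f", "a", "c", "m"], ["f", "a", "c", "b"], ["f", "b"]]
tree = build_prefix_tree(txns, 2)
# "f" is the most frequent item, so all three transactions share
# the single branch rooted at "f"; the infrequent "m" is dropped.
```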

### Table 2: Is the iteratively revised knowledge base compactable?

1995

"... In PAGE 6: ... Section 4 contains the analysis for the bounded-size case, while Sections 5 and 6 deal with iterated belief revision, in the unbounded-size and bounded-size case, respectively. The results are summarized in Table 1 and Table 2. Section 7 presents results for generic data structures and Section 8 contains some conclusions.... In PAGE 40: ... Restricting our attention to query equivalence, we found situations where compact representations exist. Results are summarized in Table 2, where YES stands for compactable, while NO stands for not compactable. The following comments on the table are in order.... In PAGE 41: ... Moreover, it is helpful to save the formulae P1, ..., Pm even after incorporation, for possible further revisions. In fact, polynomiality in Table 2 is guaranteed only if all formulae are available. 7 Generalization and strengthening of results Our results can be easily generalized in several directions.... ..."

Cited by 32

### Table 3: (a) Seven data sets: V=# vertices, WE=# wire edges, F=# triangles; Storage cost of data structures (b) for 2D data sets in (a).

"... In PAGE 9: ... The TS and the IQM structures are the most compact ones because they encode only top simplexes. We compare the various data structures on the data sets in Table 3(a). The duck and the head are manifold data sets.... ..."

### Table 1: The transaction database DB as our running example.

2000

"... In PAGE 3: ...1 Frequent-Pattern Tree To design a compact data structure for efficient frequent pattern mining, let's first examine a tiny example. Example 1 Let the transaction database, DB, be (the first two columns of) Table 1 and the minimum support... ..."

Cited by 599

### Table 3: Skim Compaction Data

1997

Cited by 113

### Table 2: Tested machines

2005

"... In PAGE 10: ... Non-compact data structures are used. Table 2 shows some machines used in our experiments. Their capabilities vary greatly.... In PAGE 10: ... The relative figures allow us to compare the different machines. For a small table, all data are in cache and the relative performance of the machine is closely correlated with the raw CPU frequency presented in Table 2. For large tables, two results are presented.... ..."

Cited by 1