### Table 1 shows the performances for state space construction, parameterised on the numbers of either cyclers or philosophers. Note that the BDD sizes grow near-linearly in n, indicating that the inherent complexity of the problems does not reside in their number of states (but rather in the regularity of their features, which are here many).

1992

"... In PAGE 11: ... Table 1: Performances of the tool on state space construction. Tables 2 and 3 show the performance results for weak bisimulation partition, according to various orderings. As already mentioned, the < order performs better on the scheduler, contrary to the philosophers.... ..."

Cited by 27

### Table 2: Matches and transformations resulting from the algorithm. The transformations map the motif onto the scanned structures. 5 Conclusion Modeling amino acids by the 3 atoms of their backbone allows us to define a complete and unique associated reference frame. Every pair of amino acids hence has 6 invariants under rigid transformations, which we use in a geometric hashing scheme to discover initial matches. These are clustered, verified and extended. The error inherent to the problem is integrated into the process, thanks to an error analysis and an Extended Kalman Filter. Experiments confirm the validity, efficiency and robustness of our approach. Future work will be articulated along three axes. We plan to automate the adjustment of the algorithm parameters based on a statistical study of the invariants. A second direction would be the use of a probabilistic scheme for
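The geometric-hashing step the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's code: `quantize`, `build_table` and `match` are hypothetical names, and the six rigid-motion invariants of each amino-acid pair are assumed to arrive as a tuple of floats.

```python
from collections import defaultdict

def quantize(invariants, step=1.0):
    """Bucket a tuple of real-valued invariants into a discrete hash key,
    absorbing measurement error up to roughly `step`."""
    return tuple(round(v / step) for v in invariants)

def build_table(motif_pairs, step=1.0):
    """Index every (pair_id, invariants) entry of the motif by its key."""
    table = defaultdict(list)
    for pair_id, inv in motif_pairs:
        table[quantize(inv, step)].append(pair_id)
    return table

def match(table, scene_pairs, step=1.0):
    """Vote: each scanned-structure pair that hashes onto a motif key
    supports a (motif pair, scene pair) correspondence."""
    votes = defaultdict(int)
    for scene_id, inv in scene_pairs:
        for motif_id in table.get(quantize(inv, step), []):
            votes[(motif_id, scene_id)] += 1
    return votes

# Toy run: one motif pair, one nearby scene pair within quantization error.
table = build_table([("a", (1.0, 2.0))])
votes = match(table, [("x", (1.1, 2.0))])
```

In the real scheme the raw votes would then be clustered, verified and extended, with the Extended Kalman Filter handling the error propagation.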

1994

Cited by 15

### (Table 10.3). The first ADF could thus only be used for searching and the second, only for moving. The final technique explored was a limited form of recursion. Both ADFs from the previous technique were simply allowed to call themselves (ADF1 now contained ADF1 in its function set, and ADF2 contained ADF2 as shown in Table 10.4). As mentioned previously, this technique exploits the base case inherent in the problem: any attempt to LOOK or GO past a leaf node does nothing. The ADFs were only allowed to proceed to a recursive depth equal to the depth of the tree, since any productive function will reach a leaf node by this point. Calling a recursive function past this depth returns the value of the node the look pointer is pointing to. This limit on the depth of the recursion was
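The depth-limited recursion described above can be sketched minimally as follows (hypothetical names, not the chapter's implementation): the tree is modelled as nested tuples, the "ADF" simply descends left and calls itself, a leaf is the natural base case, and hitting the depth limit returns whatever the look pointer currently sees.

```python
def is_leaf(node):
    """A node is a leaf when it is a bare value rather than a subtree."""
    return not isinstance(node, tuple)

def adf_search(node, depth_limit):
    """Hypothetical recursive ADF containing itself in its function set."""
    if is_leaf(node):
        return node              # base case: LOOK/GO past a leaf does nothing
    if depth_limit == 0:
        return node              # at the limit, return what the pointer sees
    return adf_search(node[0], depth_limit - 1)   # recursive self-call
```

Setting `depth_limit` to the depth of the tree guarantees termination, mirroring the observation that any productive function reaches a leaf by that point.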

### Table 2: Initial classification vectors for mouse l5 limitation in the attribute extraction stage of the system. The second problem refers to situations where most discriminating parts of the object are not visible from a given view and is an inherent problem with the recognition of 3D objects from 2D views. It is also a general limitation of empirical learning systems, where generalisation is defined by the training set. In turn, as already mentioned above, a spanning set of views can be taken to maximise rule generalisation to all possible poses, relative to all other objects (see [14]). This problem is evident with the fifth view of the cross object, the fourth and fifth views of the fish object, to a lesser extent the first view of the train object, and also the first view of the plane object. For example, Figure 20 displays all the regions of the fish object. Since this object is largely two dimensional, the fourth and fifth views of the object contain very little useful information, which is why these views were badly classified in the experiment. Also, in many of the cases where the object is misclassified, the output obtained from

"... In PAGE 42: ...using the CRG tree is usually not that unfavourable for recognising the appropriate object. Table 2 shows the initial classification for the recognition of the fifth left view of the mouse. The numbers displayed in the table represent the possibility of the parts belonging to each class and were obtained by averaging the evidence vectors of each of the snakes involved in that part and normalising to 1.... ..."

### Table 1: The ratio n^{k+1}/(2^k n) for various values of n and k. For parameterized complexity, the analog of NP-hardness is hardness for W[1]. The analogy is very strong, since the k-Step Halting Problem for Nondeterministic Turing Machines is complete for W[1] [CCDF96], a result which is essentially a miniaturization of Cook's Theorem. Dominating Set is hard for W[1] and is therefore unlikely to be fixed-parameter tractable. Indeed, one senses a "wall of brute force" (try all k-subsets) inherent complexity in this problem, much as one does for the k-Step Halting Problem, where a significant improvement on the obvious O(n^k) algorithm seems fundamentally unlikely, and much as one senses a wall of brute force ("try all 2^n truth assignments") in the complexity of NP-complete problems such as satisfiability.
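Reading the table's ratio as n^{k+1} / (2^k · n), that is, a brute-force check of all ~n^k k-subsets at O(n) each against a hypothetical FPT algorithm running in 2^k · n steps (an assumption about the garbled caption, not a claim about the paper's exact figures), the growth of the gap can be tabulated directly:

```python
def ratio(n, k):
    # Brute force: ~n^k candidate k-subsets, each checked in O(n) time,
    # giving n^(k+1) steps; FPT-style alternative: 2^k * n steps.
    return n ** (k + 1) / (2 ** k * n)

# The ratio explodes in n for fixed k, which is the point of the table.
for n in (100, 1_000, 10_000):
    for k in (2, 5, 10):
        print(f"n={n:6d}  k={k:2d}  ratio={ratio(n, k):.3e}")
```

For fixed k the ratio grows polynomially in n, so even a modest exponential dependence on the parameter alone beats the brute-force wall once n is large.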

1998

Cited by 13

### Table 4. QMR iteration counts for Galerkin discretization with block triangular (diagonal) preconditioner. Note that the cost per step of the block triangular preconditioner is only slightly higher than that of the block diagonal preconditioner (only an extra multiplication by B^t is needed), hence (2.12) is more efficient. However, for the Stokes problem (as the parameter → ∞), the preferred choice of preconditioner is less obvious, since the inherent symmetry is destroyed if (2.12) is used in place of (2.5).

### Table 1: Inherent Accuracy of Graffiti

1997

"... In PAGE 3: ... Scanning the Graffiti chart in Figure 1b, we find 18 matches with uppercase letters. These are identified by "1" in the third column in Table 1. For the eight letters that do not match, a "0" appears.... In PAGE 3: ...ommon than others (e.g., Z), we weight the results using standard probabilities for letters in common English. The probabilities from Mayzner and Tresselt [10] appear in the second column in Table 1. By summing the 18 weighted matches, we compute an inherent uppercase accuracy of 68.... In PAGE 3: ...4%, slightly lower than the unweighted accuracy. The same test yields 11 matches with lowercase letters, as shown in column 4, Table 1. These yield an unweighted accuracy of 42.... In PAGE 4: ...hile rarely visiting others (e.g., Z). To emphasize this point, if we consider the standard letter probabilities in Table 1, then it would require about 8,000 character entries before achieving five instances of the letter Z. In part 2, subjects were given the 325Point for five minutes.... ..."
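The frequency weighting described in the excerpt amounts to summing letter probabilities over the matching strokes. A minimal sketch, with made-up probabilities for three letters only (not Mayzner and Tresselt's values):

```python
def weighted_accuracy(matches, probs):
    """Sum the probabilities of the letters whose Graffiti stroke matches
    its handwritten form (matches[letter] == 1); probs sums to 1 over
    whatever alphabet is being considered."""
    return sum(p for letter, p in probs.items() if matches.get(letter) == 1)

# Illustrative numbers only, renormalised to 1 over {E, T, Z}.
probs = {"E": 0.60, "T": 0.35, "Z": 0.05}
matches = {"E": 1, "T": 0, "Z": 1}
score = weighted_accuracy(matches, probs)
```

Weighting this way makes a mismatch on a frequent letter (E) cost far more than one on a rare letter (Z), which is why the weighted accuracy can fall below the simple match count.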

Cited by 41

### Table VIII: n-Queens Performance Summary. The third approach randomizes the integers from 0 to n^k − 1, and assigns 1/p-th of these to each processor. The overhead for randomization and communication is minimal compared with the faster completion time due to improved load balance. See Tables IV and V for a comparison of these three algorithms when n = 15, on p = 4 and p = 8 nodes, each an r = 4-way SMP, varying k from 1 to 4. Similar results for n = 16 are given in Tables VI and VII. Because of the special topology inherent in this search problem, the block and cyclic partitioning schemes are inferior to a randomized approach. Table VIII gives the performance of our SIMPLE algorithm compared to the standard netlib "queens" benchmark results for n = 14, 15, and 16. Because our algorithm is generalized for COSMOS, it takes slightly longer to compute on a single processor, but scales linearly with the total number of processors used.
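The randomized third approach can be sketched as follows. This is a minimal illustration under stated assumptions, not the SIMPLE implementation: it shuffles the n^k subproblem indices and deals an equal share to each of p processors, breaking up the uneven runs that defeat block and cyclic partitioning.

```python
import random

def randomized_partition(total, p, seed=0):
    """Shuffle work indices 0..total-1, then deal every p-th shuffled index
    to a processor, so each gets ~1/p of the work in random order."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    idx = list(range(total))
    rng.shuffle(idx)
    return [idx[i::p] for i in range(p)]

# e.g. 16 subproblems dealt across p = 4 processors
parts = randomized_partition(16, 4)
```

Because hard and easy subtrees of the n-Queens search are scattered uniformly across processors, expected load per processor evens out at the cost of one shuffle and one scatter.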

1999

Cited by 51