### Table 3: Compression for three motion graphs. The first graph is computed from walking and jumping motions. The second graph is computed from walking and picking motions and the third one is computed from just walking motions.

2007

"... In PAGE 20: ...3 The benefit of motion graph compression In this experiment, we evaluate the effect of motion graph compression. Table 3 shows general statistics for three different databases: (1) walking, jumping, ducking, sitting and walking along the beam motions; (2) walking and picking motions; (3) just walking motions. For each database, we computed the number of states and transitions in the motion graph before compression, after the first compression step (merging transitions) and after the second compression step (merging states).... ..."

### Table 1: Compression for three motion graphs. The first graph is computed from motions of walking, jumping, ducking, sitting and walking along the beam. The second graph is computed from motions of walking and picking up an object and the third one is computed from just walking motions.

2007

"... In PAGE 8: ...3 The benefit of motion graph compression In this experiment, we evaluate the effect of motion graph compression. Table 1 shows these statistics for three different databases: (1) walking, jumping, ducking, sitting and walking along a beam; (2) walking and picking up an object; (3) just walking motions. For each database, we computed the number of states and transitions in the motion graph before compression, after the first compression step (removing sub-optimal data), and after the second compression step (removing redundant data).... ..."

Cited by 3
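The two compression steps quoted above (merging transitions, then merging redundant states) can be illustrated with a toy sketch. The version below merges states whose outgoing transition sets coincide and repeats until a fixpoint; this is a deliberately simplified stand-in for the papers' actual redundancy criteria, which compare the motion data attached to states, and all names here are hypothetical:

```python
def compress(transitions):
    """Merge states with identical outgoing-transition signatures.

    transitions: dict mapping state -> set of successor states.
    Returns a smaller graph in the same format.
    """
    trans = {s: set(t) for s, t in transitions.items()}
    changed = True
    while changed:
        changed = False
        # Group states by their outgoing-transition signature.
        groups = {}
        for s, succ in trans.items():
            groups.setdefault(frozenset(succ), []).append(s)
        # Pick one representative per group.
        rep = {}
        for members in groups.values():
            keep = min(members)
            for s in members:
                rep[s] = keep
        if len(set(rep.values())) < len(trans):
            changed = True
            # Rebuild the graph over representatives only.
            merged = {}
            for s, succ in trans.items():
                merged.setdefault(rep[s], set()).update(rep[x] for x in succ)
            trans = merged
    return trans
```

For example, in a graph where states A and B both transition only to C, the two are collapsed into one state, shrinking the state count before any search over the graph.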

### Table 3. Classification accuracies (%) of our graph kernel. The parameter is the termination probability of random walks, which controls the effect of the length of label paths.

2003

"... In PAGE 6: ...hanged from 0.1 to 0.9. Table 2 and Table 3 show the classification accuracies in the five two-class problems measured by leave-one-out cross validation. No general tendencies were found to conclude which method is better (the PD was better in MR, FR and Mutag, but our method was better in MM and FM).... ..."

Cited by 54
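A random walk graph kernel with a termination probability, as described in this caption, can be sketched compactly: walks are counted on the direct product of two labeled graphs, and each step survives with probability (1 - p_stop), so larger termination probabilities down-weight long label paths. This is a minimal illustration, not the paper's implementation; the row normalization used to keep the series convergent is one of several possible choices:

```python
import numpy as np

def random_walk_kernel(A1, labels1, A2, labels2, p_stop=0.5):
    """Count label-matching walks in the direct product graph,
    geometrically damped by a termination probability p_stop."""
    n1, n2 = len(labels1), len(labels2)
    # Direct product adjacency: an edge exists iff both graphs have
    # the edge and the endpoint labels match.
    W = np.zeros((n1 * n2, n1 * n2))
    for i in range(n1):
        for j in range(n2):
            if labels1[i] != labels2[j]:
                continue
            for k in range(n1):
                for l in range(n2):
                    if A1[i][k] and A2[j][l] and labels1[k] == labels2[l]:
                        W[i * n2 + j, k * n2 + l] = 1.0
    # A walk continues with probability (1 - p_stop); summing over all
    # walk lengths gives (I - P)^{-1} applied to the all-ones vector.
    row = W.sum(axis=1, keepdims=True)
    row[row == 0] = 1.0          # avoid division by zero on sinks
    P = (1 - p_stop) * W / row   # sub-stochastic, so the series converges
    K = np.linalg.solve(np.eye(len(P)) - P, np.ones(len(P)))
    start = np.ones(len(P)) / len(P)  # uniform starting distribution
    return float(start @ K)
```

Raising p_stop toward 0.9 concentrates the kernel on short label paths, which is the effect the caption's parameter sweep measures.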

### Table. Closed non-reversing walks in the Petersen graph.

### Table 1: Connected Regular Graphs

1994

"... In PAGE 4: ... This also resulted in using about 800 hours of time to complete the catalog. Table 1 lists the number of connected regular graphs generated for each catalog. (The total number of regular graphs, including disconnected ones, in each catalog can be easily inferred from this table.... ..."

Cited by 1

### Table 1. CPDs for the step action node A, the foot node F, the Fall node, and the walking status node S

1996

"... In PAGE 6: ... The belief of a fall in the current time slice i is given by the posterior obtained after adding evidence and running the inference algorithm, that is, bel(Fall_i = T), and a warning about an imminent fall can be based on the predictions for the next time slice, that is, whether bel(Fall_{i+1}) is greater than some warning threshold. Structure and Conditional Probability Distributions The CPDs for the nodes A, F, Fall and S are given in Table 1. The model for walking is represented by the arcs from F_i to A_i, and from F_i, A_i and S_i to F_{i+1}.... ..."

Cited by 3
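The predict-then-update cycle this snippet describes, computing bel(Fall_i = T) and warning on the one-step-ahead prediction, can be sketched for a single binary fall node. The paper's network also conditions on the A, F, and S nodes; this sketch collapses them into one observation, and every probability below is hypothetical:

```python
def filter_step(bel, transition, emission, obs):
    """One slice of exact filtering: predict, then condition on evidence."""
    # Predict: bel'(x') = sum_x P(x' | x) * bel(x)
    pred = {x2: sum(transition[x][x2] * bel[x] for x in bel)
            for x2 in (True, False)}
    # Update: multiply by P(obs | x') and renormalize.
    unnorm = {x2: emission[x2][obs] * pred[x2] for x2 in pred}
    z = sum(unnorm.values())
    return {x2: v / z for x2, v in unnorm.items()}

# Hypothetical CPDs: a fallen person tends to stay fallen, and a fall
# suppresses the chance of observing a step.
transition = {True:  {True: 0.90, False: 0.10},
              False: {True: 0.05, False: 0.95}}
emission = {True:  {'no_step': 0.8, 'step': 0.2},
            False: {'no_step': 0.3, 'step': 0.7}}

bel = {True: 0.01, False: 0.99}
for obs in ['step', 'no_step', 'no_step']:
    bel = filter_step(bel, transition, emission, obs)

# One-step-ahead prediction, used to raise an early warning.
predict = {x2: sum(transition[x][x2] * bel[x] for x in bel)
           for x2 in (True, False)}
warn = predict[True] > 0.5
```

Here bel[True] plays the role of bel(Fall_i = T), and the warning fires when the predicted bel(Fall_{i+1}) exceeds the chosen threshold.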

### Table 5. Closest approach walks

"... In PAGE 11: ... The values of D_{1,2} and d_f complement the information about the extent of the neutral networks, verifying that they are indeed spanning most of sequence space, see Table 4. The residual distances (measured as Hamming distances) of the sequences resulting from closest approach walks, shown in Table 5, are surprisingly small even for pairs of protein structures that have virtually no structural features in common, see Figure 4. 7.... ..."

### Table 1. Classification results (in percents) for the MU- TAG dataset using different random walk models

2004

"... In PAGE 6: ... 1st- and 2nd-order model comparison: We first compared the classification accuracy of the graph kernels corresponding to the 1st- and 2nd-order Markov random walks, for different values of pq. Results are shown in Table 1, where we observe that the change from a 1st- to a 2nd-order Markov model has no significant effect on the success rates (89.9% to 90.... ..."

Cited by 14
