### Table 2: Justification of Table 1

1994

"... In PAGE 17: ... For example, C00 holds if and only if RE0 is bounded, RE0_i being bounded for one value of i does not imply that RE0 is bounded, and REk bounded for all k implies that C00 holds. Table 2 gives the justification for all of the entries in Table 1. If an entry in Table 1 is either "⇒" or "⇐", then the corresponding entry in Table 2 either refers to a theorem or the result holds trivially. If an entry in Table 1 is "⇏", then the corresponding entry in Table 2 refers to the appropriate counterexample. Some of the most interesting results contained in Table 1 are the following. ..."

Cited by 15

### Table 7: Justification for valid-interpretation claim

2003

"... In PAGE 60: ... We then consider the mapping f obtained as output from our algorithm that generates target-to-source mappings, along with the corresponding generated target population. To check whether this target population satisfies the integrity constraints declared in the target input, we proceed in three steps: First, we provide a list of all integrity constraints used in the restricted subset of OSM as introduced in Section 2 (see Column 1 of Table 7). The items of this list are grouped into three classes. Second, we inspect each integrity constraint of the target input in turn. Each constraint refers to a well-determined set of items (object sets or relationship sets) in the target model (see Column 2 of Table 7). If item a is in the domain of f, then a has an image b in the source (Req. 2) and of the same sort (Req. 3). In this case a is populated exactly from b. If all target items referred to by a target constraint are mapped to some source item (see Column 3 of Table 7), then we can show that the satisfaction of the target constraint is implied by the assumed satisfaction of the source constraints. Here we have to distinguish two subcases: either an image b is directly declared in the source model (see Column 4 of Table 7), as discussed in Section 3, or b has been derived (see Column 5 of Table 7), as discussed in Section 4. Because our target-to-source mappings are usually partial, we have to concern our-... In PAGE 62: ... The entries refer to the pertinent discussions in the preceding sections, which are not repeated here. Based on the preceding discussion and Table 7, we can state the following theorem. Theorem 1 Let t be a target OSM model instance and s be a source OSM model instance. ... ..."

Cited by 22

### Table 1. Accuracy comparison between the justification and the committee approaches in the unbiased scenarios.

2003

"... In PAGE 6: ... Therefore, in the scenarios with many agents, as the individual case bases are smaller, the individual accuracies will also be lower, leading to a greater incentive to collaborate. Table 1 shows the classification accuracies obtained by several groups of agents using the JEC policy and the committee collaboration policy. The first thing we notice is that the agents using justifications obtain higher accuracies. ... In PAGE 6: ...ained an accuracy of 84.14%. These results show that even when the data is very fragmented, the confidence measures (that are not affected by the fragmentation of the data) computed by the agents are able to select which of the individual solutions are better based on the justification given. Table 1 also shows the accuracy obtained by the agents when they solve the problems individually (without collaborating with the other agents). The table shows that as the number of agents in the experiments increases (and the size of the individual training sets diminishes) the individual accuracy drops fast. ... In PAGE 7: ... This weakening in the individual results is also reflected in the Committee accuracy. This can be seen in Table 2, where the accuracy achieved by the Committee is clearly lower than the accuracy obtained in the unbiased scenarios shown in Table 1. However, comparing the results of Tables 1 and 2, we can see that the results obtained by the agents using the JEC policy are not affected by the bias of the case bases. ... ..."

Cited by 8
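The contrast the excerpt draws can be illustrated with a minimal sketch (the scoring scheme, labels, and numbers below are invented for illustration and are not from the paper): a plain committee takes a majority vote over the agents' predictions, while a JEC-style policy selects the single answer whose justification earned the highest confidence.

```python
# Minimal sketch of the two collaboration policies, under assumed inputs:
# each agent contributes a class prediction, and (for JEC) a confidence
# score derived from how well its justification was endorsed.
from collections import Counter

def committee_vote(predictions):
    """Plain committee: majority vote over the agents' predictions."""
    return Counter(predictions).most_common(1)[0][0]

def jec_select(predictions, confidences):
    """JEC-style selection (simplified): return the prediction whose
    justification received the highest endorsement-based confidence."""
    best = max(range(len(predictions)), key=lambda i: confidences[i])
    return predictions[best]

preds = ["A", "B", "A"]
confs = [0.55, 0.92, 0.40]        # hypothetical endorsement scores
print(committee_vote(preds))      # "A": the majority class
print(jec_select(preds, confs))   # "B": best-justified answer wins
```

This also makes the excerpt's bias result plausible: the majority vote degrades when the individual case bases are biased, whereas selection by justification confidence depends only on how well each single answer is supported.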

### Table 3.6: Finding a greedily justified subplan of a linear plan

| kind of subplan | running time to find it | |
|---|---|---|
| perfectly justified | NP-complete | ↑ stronger justification |
| greedily justified | O(\|P\|^5) | |
| well-justified | O(\|P\|^4) | |
| backward justified | O(\|E\|^2) | ↓ weaker justification |

1992

Cited by 8
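Of the four kinds of justification in the table, backward justification is the cheapest to compute. A minimal sketch, assuming each plan step is encoded as a (preconditions, effects) pair of literal sets (an assumed encoding, not the paper's): scan the linear plan backwards from the goal and keep only the steps that supply a still-needed literal.

```python
# Sketch of computing a backward-justified subplan of a linear plan.
# Assumed encoding: each step is a (preconditions, effects) pair of sets
# of ground literals; the goal is a set of literals.
def backward_justified(plan, goal):
    needed = set(goal)
    kept = []
    for pre, eff in reversed(plan):
        if needed & set(eff):       # this step supplies a needed literal
            needed -= set(eff)      # its effects are now accounted for
            needed |= set(pre)      # ... but its preconditions are needed
            kept.append((pre, eff))
    kept.reverse()
    return kept

plan = [
    ({"a"}, {"b"}),  # useful: b is needed by the last step
    ({"a"}, {"x"}),  # useless: x is never needed
    ({"b"}, {"g"}),  # achieves the goal
]
print(backward_justified(plan, {"g"}))  # drops the middle step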

### Table 4.1 Numerical justification of the invariance of the signal-to-noise ratio. Notice that the last two columns of the table almost agree.

2006

Cited by 4

### Table 2: CAI-LAI Breakdown

Our diagnosis of CAI and LAI is summed up in Table 2. We now turn to justification for the entries in that table under "Communication."

### Table 4: Motivation for Probabilistic LP Simulation

To give an intuition on the justification of optimism, look again at the parallel simulation of the small simulation model in Figure 6 together with the future list, and the parallel execution of lazy cancellation ...

### Table 3 summarizes the justifications for the entries in Table 1. Because of Theorem 6.2, which guarantees the skew symmetry of Table 1, it is only necessary to justify the entries on or above the diagonal. The entry [max; min], below the diagonal, is included because it is needed to justify [min; max], above the diagonal.

2003

"... In PAGE 21: ... Table 3: Justification for the entries in Table 1. The results in this paper suggest a number of future research directions and open questions which we raise here. ..."

### Table 2: Derived pairs.

Note how the recoding has produced data in which we observe a number of extreme probabilities relating to the output variable y1, namely P(y1 = 0 | x4 = 0) = 1, P(y1 = 1 | x4 = 1) = 1 and P(y1 = 0 | x4 = 2) = 1. The recoding thus provides us with indirect justification for predicting y1 = 1 with a probability of 1, if the difference between the input variables is 1. It also provides us with indirect justification for predicting y1 = 0 with a probability of 1, if the difference between the input variables is either 2 or 0. In short, we have indirect justification for the output rule 'y1 = 1 if x4 = 1; otherwise y1 = 0'. Kirsh's 'marginal regularities', we conclude, are precisely those whose justification is in our sense indirect. They thus involve (1) deriving a recoding of the training examples and (2) deriving probability statistics within the recoded data. The number of indirect justifications is the number of direct justifications (derivable from the relevant recoded data) plus the number of possible recodings of the data. The number of possible recodings is simply the number of distinct Turing machines we can apply to those data. There are infinitely many of ...

1997

"... In PAGE 8: ...o their chance values. Indirect justifications are to be found via some recoding function g. In the case at hand imagine that the function effectively substitutes the input variables in each training pair with a single variable whose value is just the difference between the original variables. This gives us a set of derived pairs as shown in Table 2 (the value of x4 here is the difference between the values of x1 and... ..."

Cited by 51
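The two-step recipe in the excerpts above (derive a recoding of the training pairs, then read conditional probabilities off the recoded data) can be sketched as follows. The training data below is invented so as to exhibit extreme probabilities of the kind described; it is not the paper's Table 2.

```python
# Sketch: recode each ((x1, x2), y1) training pair into (x4, y1) with
# x4 = x1 - x2, then tabulate P(y1 = v | x4 = d) within the recoded data.
from collections import Counter, defaultdict

def recode(pair):
    (x1, x2), y1 = pair
    return x1 - x2, y1          # derived variable: difference of inputs

def conditionals(pairs):
    """Conditional distributions P(y1 | x4) over the recoded data."""
    by_x4 = defaultdict(Counter)
    for pair in pairs:
        x4, y1 = recode(pair)
        by_x4[x4][y1] += 1
    return {x4: {y: n / sum(c.values()) for y, n in c.items()}
            for x4, c in by_x4.items()}

# Invented training pairs: y1 = 1 exactly when the difference is 1.
training = [((1, 1), 0), ((2, 1), 1), ((3, 1), 0), ((2, 2), 0), ((4, 3), 1)]
print(conditionals(training))
# extreme probabilities: P(y1=0|x4=0), P(y1=1|x4=1), P(y1=0|x4=2) all 1.0
```

Each such extreme conditional probability in the recoded data is one "indirect justification" in the excerpt's sense; the recoding function here plays the role of g.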