### Table 1. Unsatisfiable core extraction

2003

"... In PAGE 5: ... Unsatisfiable core extraction In Tables 1, 2, and 3 we give experimental results on hard instances from the verification domain. Table 1 gives the data on unsatisfiable core extraction. The names of the instances are given in the first column.... ..."

Cited by 22

### Table 5. ShrinkDistribution

"... In PAGE 5: ... denote the labeled data set by D and the unlabeled data set (i.e. the test set) by DU. In our evaluation scheme, a training data set is generated from D using the algorithm shown in Table 5, which shrinks the 3G classes (In the task we shall cope with a gradually expanded target class. So, we can use a smaller target class to simulate the current target class, while using a bigger target class to simulate the expanded target class.... ..."

### Table 5. ShrinkDistribution

"... In PAGE 6: ... denote the labeled data set by D and the unlabeled data set (i.e. the test set) by DU. In our evaluation scheme, a training data set is generated from D using the algorithm shown in Table 5, which shrinks the 3G classes 3. Taking k ∈ {1, 3, 5} and p ∈ {10%, 30%, 50%}, 9 labeled data sets {DT1, DT2, ..., DT9} are generated.... ..."

### Table 1: Results for the unsatisfiable AIM Problems (computations performed via NEOS on a 400MHz Sparc2)

2002

Cited by 10

### Table Detection via Probability Optimization

2002

Cited by 4

### Table 2. Results For Unsatisfiable Embedded Problems Average Total Search Nodes

"... In PAGE 9: ... In the first set, ehi-85-297, each problem contains 297 variables (85 in the original sat problems); in the second set, ehi-90-315, each problem contains 315 variables (90 in the original sat problems). The last problem set reported in Table 2 is a set of 100 composed random problems, consisting of a main under-constrained component in the form <n,d,m,t>, where n is the number of variables, d the uniform domain size, m the graph density of the component, and t the constraint tightness, and k satellite components also in this form attached to the main component by links <m,t>. For the problem set reported, each problem had main component <100,10,0.... ..."

### Table 3: Experiments on small training sets. SVMTorchN = SVMTorch without shrinking, SVMTorchU = SVMTorch with shrinking and unshrinking, Time NSP = time (in seconds) for non-sparse data format, Time SP = time (in seconds) for sparse data format, # SV = number of support vectors, Objective Function = value of (3) at the end of the optimization, Model Train = mean absolute error (MAE) over the training set, Model Test = MAE over the test set, Median Train = MAE over the training set with the median as predictor, Median Test = MAE over the test set with the median as predictor.

2001

"... In PAGE 13: ...2 Small Datasets Let us now compare SVMTorch and Nodelib on small datasets. In the results given in Table 3, only the first 5000 training examples were used. The size of the cache was set to 300Mb, so that the whole kernel matrix could be kept in memory.... In PAGE 15: ...27 Nodelib > 10^6 – – – – – Table 4: Experiments on large training sets. See Table 3 for the description of the fields. Let us now turn to experiments using large datasets.... ..."

Cited by 157

### Table 2: Run time performance for synthetic data set comparing Anytime TA, TA and time required to take an Anytime TA measure. Synthetic data set (1,000,000 tuples, 4 attributes, histogram size 20, random distribution). Time is reported in seconds.

"... In PAGE 10: ... In this set of experiments we set K = 1,000, but similar results were obtained for different values. Table 2 shows the runtime performance of the Anytime TA algorithm, as well as the overhead that the technique imposes over the TA algorithm. In the first column we report the running time of the TA algorithm.... In PAGE 10: ... The total running time of our algorithm is the sum of the time it takes to run Anytime TA (column 2) and the time it takes to compute the anytime measures (column 3) times the number of times the anytime computation is invoked. The experimental results in Table 2 suggest that the overhead of our approach is relatively small for Anytime TA. There is little variation in runtime between the TA and Anytime TA algorithm (this is attributed to the fact that histograms are not utilized for computation until a reading is taken).... ..."
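The cost accounting described in the snippet above (total time = Anytime TA run time plus per-measure cost times the number of invocations) can be sketched as follows. All names and numbers here are illustrative assumptions, not values from the cited paper:

```python
# Hedged sketch of the runtime accounting described in the snippet:
# total = Anytime TA run time + (per-measure cost x number of invocations).
# Function name and example values are illustrative, not from the paper.

def total_runtime(anytime_ta_seconds: float,
                  measure_seconds: float,
                  invocations: int) -> float:
    """Total cost: base algorithm time plus the overhead of each anytime reading."""
    return anytime_ta_seconds + measure_seconds * invocations

# Example: 12 s of Anytime TA, 0.05 s per anytime measure, 40 readings taken.
print(total_runtime(12.0, 0.05, 40))  # 14.0
```

This also makes the snippet's observation concrete: when readings are cheap or rarely taken, the second term stays small and the overhead over plain TA is minor.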