### TABLE 2. Number of questions asked when using QuickCheck to test programs

2006

Cited by 3

### Table 2. First few quick check paths and associated frequencies.

"... In PAGE 11: ... The best performance with the counting algorithm is also at around thirty paths, but resulting parse times are around eight per cent slower, and space usage is three per cent higher. The paths derived are somewhat surprising (Table 2), and in many cases do not fit in with grammar writer intuitions. In particular, some of the paths are very long.... ..."
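The counting algorithm the snippet mentions is not spelled out here, but the idea it refers to — rank feature paths by how often unification fails at them, then keep the top thirty or so as quick-check paths — can be sketched as follows. The log format, the path tuples, and the function name are illustrative assumptions, not the paper's code:

```python
from collections import Counter

def derive_quick_check_paths(failure_log, n_paths=30):
    """Pick the n most frequent failure paths from a log of unification
    failures, where each log entry is the feature path at which a
    unification attempt failed."""
    counts = Counter(failure_log)
    return [path for path, _ in counts.most_common(n_paths)]

# Illustrative failure log: each entry is the path where a unification failed.
log = ([("HEAD", "CAT")] * 50
       + [("HEAD", "AGR", "NUM")] * 20
       + [("VAL", "SUBJ")] * 5)
paths = derive_quick_check_paths(log, n_paths=2)
assert paths == [("HEAD", "CAT"), ("HEAD", "AGR", "NUM")]
```

With a real parser the log would come from an instrumented unifier run over a training corpus; the surprising, very long paths the excerpt mentions are simply whatever the frequency counts surface.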

### Table 7.4: The number of successful and failed unifications for the non-indexed, path-indexed, and quick-check parsers over the unconstrained MERGE grammar. The sentence numbers are the same as those used in Figure 7.3 and in Table 7.1.

### Table 4.3.: Performance and copying behavior of selected unification algorithms on LinGO when parsing the aged test set. unify1 and unify2 are the functions of the same name from Wroblewski (1987), unify3 is the function from Section 2.4.2, tomabechi is the algorithm from Tomabechi (1991), and tom-smart adds non-redundant copying from Malouf et al. (2000). The row labelled quick check indicates if quick check filtering was enabled or disabled for a column. Where the quick check makes no difference, the column is labelled on/off. The static rule filter was enabled in all cases. A plain active chart parser was used.
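Several of these captions concern quick-check filtering, which can be sketched as a cheap pre-test run before full unification: compare the values found at a small, fixed set of frequently-failing feature paths, and skip the expensive unification when any pair clashes. The feature-structure representation (nested dicts), the chosen paths, and the equality-based clash test below are simplifying assumptions — real systems test type compatibility in a hierarchy:

```python
# Illustrative quick-check paths; in practice these are derived empirically.
QUICK_CHECK_PATHS = [("HEAD", "CAT"), ("HEAD", "AGR", "NUM")]

def value_at(fs, path):
    """Follow a feature path through a nested dict, None if absent."""
    for feat in path:
        if not isinstance(fs, dict) or feat not in fs:
            return None
        fs = fs[feat]
    return fs

def quick_check(fs1, fs2, paths=QUICK_CHECK_PATHS):
    """Return False if full unification is certain to fail on some path."""
    for path in paths:
        v1, v2 = value_at(fs1, path), value_at(fs2, path)
        if v1 is not None and v2 is not None and v1 != v2:
            return False  # definite clash: skip full unification
    return True  # may still fail later, but worth attempting

# Usage: a clash on HEAD.CAT filters the pair out before unification.
np_fs = {"HEAD": {"CAT": "noun", "AGR": {"NUM": "sg"}}}
vp_fs = {"HEAD": {"CAT": "verb"}}
assert quick_check(np_fs, np_fs)      # compatible on all checked paths
assert not quick_check(np_fs, vp_fs)  # CAT clash: unification cannot succeed
```

The "on/off" columns in the caption above reflect exactly this trade-off: the filter adds a small per-pair cost but avoids many doomed full unifications.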

### Table 5.2: Mean square error table. A smaller mean square error indicates that the estimate is closer to the original image. The numbers have to be compared on each row. The square of the number on the left-hand column gives the real variance of the noise. By comparing this square to the values on the same row, it is quickly checked that all studied algorithms indeed perform some denoising. This is a sanity check! In general, the comparison performance corroborates the previously mentioned quality criteria.
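The sanity check this caption describes — verifying that each algorithm's mean square error beats the raw noise variance — is easy to reproduce. A sketch, using a plain 3x3 mean filter as a stand-in for the studied denoising algorithms (the flat test image and the sigma value are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a - b) ** 2))

def mean_filter(img):
    """3x3 box filter: a deliberately simple stand-in denoiser."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

sigma = 10.0
clean = np.full((64, 64), 128.0)                 # flat test image
noisy = clean + rng.normal(0.0, sigma, clean.shape)

# The sanity check: MSE of the noisy image is about sigma^2, and any
# denoiser worth the name must land below that.
assert mse(mean_filter(noisy), clean) < mse(noisy, clean)
```

This is exactly the row-wise comparison the caption prescribes: sigma squared on the left, the per-algorithm MSE values across the row.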

### Table 4: Parameters for artificial branch pattern generation.

2007

"... In PAGE 6: ...structure, which is illustrated in Fig. 7. We used the following parameters: the average probability of the taken branch is pp; the average length of paths in a loop iteration is lp; the number of paths is np; and the number of branches between consecutive loop iterations is ng. Branch result patterns are generated by any possible combination of the parameters given in Table 4 and Branch History Entropy; the branch prediction performance of the two-level predictor is measured. The length of the generated branch patterns is one million: at least 10 patterns are generated for every combination of the parameter values.... ..."

Cited by 1
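The two-level predictor measured in these excerpts can be sketched as a global history register indexing a table of two-bit saturating counters. The history length and the toy branch pattern below are illustrative choices, not the paper's configuration:

```python
class TwoLevelPredictor:
    """Global-history two-level branch predictor with 2-bit counters."""

    def __init__(self, history_bits=8):
        self.history_bits = history_bits
        self.history = 0                         # global branch history register
        self.table = [1] * (1 << history_bits)   # counters start weakly not-taken

    def predict(self):
        return self.table[self.history] >= 2     # True means "predict taken"

    def update(self, taken):
        ctr = self.table[self.history]
        self.table[self.history] = min(3, ctr + 1) if taken else max(0, ctr - 1)
        mask = (1 << self.history_bits) - 1
        self.history = ((self.history << 1) | int(taken)) & mask

# Usage: a low-entropy (perfectly periodic) pattern is learned quickly,
# so prediction accuracy is high -- the effect the excerpts quantify.
pred = TwoLevelPredictor()
pattern = [True, True, False, True] * 1000
hits = sum(pred.predict() == taken or pred.update(taken)  # noqa: demo below
           for taken in [])  # placeholder, replaced by explicit loop
hits = 0
for taken in pattern:
    hits += pred.predict() == taken
    pred.update(taken)
assert hits / len(pattern) > 0.9
```

Artificial patterns like the ones parameterised in Table 4 vary path count and loop shape precisely to sweep the entropy of such sequences and observe how accuracy degrades.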

### Table 4: Parameters for artificial branch pattern generation.

2007

"... In PAGE 6: ... We used the following parameters: the average probability of a taken branch is pp, the average length of paths in a loop iteration is lp, the number of paths is np, and the number of branches between consecutive loop iterations is ng. Branch result patterns are generated by any possible combination of the parameters given in Table 4, and Source Entropy and the branch prediction performance of the two-level predictor are measured.... ..."

Cited by 1

### Table 1: Experimental Results of the branch and bound algorithms.

"... In PAGE 8: ... We refer the interested reader to [17] for further results. TRANSIT-BFS and TRANSIT-DFS The necessary computation to determine the set of minimal update sequences that transform r1 into r2 is shown in Table 1a. The first two columns list the number of databases processed and modification operations executed for building the transition graph.... In PAGE 8: ... We also list the overall number of databases added to the graph together with the number of databases generated as duplicates. Table 1a shows a huge difference between the number of modification operations executed and the number of databases added to the graph or identified as duplicates for both approaches. This observation indicates that the majority of the generated databases are pruned due to their upper and lower bounds.... In PAGE 8: ... It therefore tests several databases at distance levels above the actual update distance. This is reflected by comparing the number of databases added and tested in columns 1 and 3 of Table 1a. For TRANSIT-DFS more databases are tested than added to the graph, due to those databases that are added once but tested several times at decreasing ... In PAGE 9: ... We observe that in general the memory requirement for the breadth-first approach is higher than that for the depth-first approach. TRANSIT-BFS (GS) and TRANSIT-DFS (GS) Table 1b shows the necessary effort to determine the set of minimal transformers when using the group solution cost as the lower bound for both branch and bound algorithms. In our experiments, this heuristic always computes the correct update distance, but does not find all minimal update sequences.... In PAGE 9: ... Compared to the numbers in Table 1a, the effort regarding databases tested and added is significantly lower for TRANSIT-BFS (GS) and TRANSIT-DFS (GS). As a downside, the computation cost may increase due to the computation of the group solution cost.... ..."
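The pruning behaviour these snippets describe — far more states generated and tested than actually kept, because upper and lower bounds cut most of them — can be illustrated with a generic branch-and-bound search supporting both breadth-first and depth-first expansion. The toy problem, state encoding, and bound function below are stand-ins, not the TRANSIT algorithms themselves:

```python
from collections import deque

def branch_and_bound(start, goal, neighbors, lower_bound, bfs=True):
    """Minimum number of steps from start to goal, pruning any successor
    whose depth plus lower bound cannot beat the best solution so far."""
    best = float("inf")
    generated = pruned = 0
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft() if bfs else frontier.pop()
        if state == goal:
            best = min(best, depth)
            continue
        for nxt in neighbors(state):
            generated += 1
            if depth + 1 + lower_bound(nxt, goal) >= best:
                pruned += 1          # bound says it cannot improve: drop it
            else:
                frontier.append((nxt, depth + 1))
    return best, generated, pruned

# Toy problem: reach an integer goal by +1 / -1 steps; the distance
# |state - goal| is an admissible lower bound on the remaining steps.
best, generated, pruned = branch_and_bound(
    0, 5,
    neighbors=lambda s: [s - 1, s + 1],
    lower_bound=lambda s, g: abs(s - g),
    bfs=True,
)
assert best == 5
assert pruned > 0  # most generated states never enter the frontier
```

A tighter but more expensive bound (the analogue of the group solution cost) prunes more states at the price of costlier per-node evaluation, which is the trade-off the last snippet notes.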