### Table 1: Performance of the Random, FIFO, and LIFO variable selection heuristics on difficult random 3-SAT instances. The number of instances (out of 500) that were solved by each strategy given a computational resource bound of MAXTRIES and MAXFLIPS is shown above.

1997

"... In PAGE 3: ... For each randomly generated problem, GSAT was run using each of the tie-breaking strategies, where each run consisted of up to MAXTRIES iterations of MAXFLIPS flips (the solution was completely randomized after every MAXFLIPS flips). Table 1 shows the results of this experiment. The number of instances for which satisfying assignments were found using each tie-breaking strategy is shown.... ..."

Cited by 4
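The snippet describes the standard GSAT outer loop: up to MAXTRIES restarts from a fresh random assignment, each allowed MAXFLIPS greedy flips, with the Random/FIFO/LIFO rules breaking ties among equally good flip candidates. A minimal sketch of that loop is below; the clause encoding, helper names, and the exact FIFO/LIFO queue semantics (least vs. most recently flipped) are our assumptions for illustration, not taken from the paper.

```python
import random

def gsat(clauses, n_vars, max_tries, max_flips, tie_break="random", rng=None):
    """Sketch of GSAT with pluggable tie-breaking (illustrative, not the
    paper's code). Clauses are tuples of DIMACS-style literals: positive
    int v means variable v is true, negative means false."""
    rng = rng or random.Random(0)

    def num_satisfied(assign):
        return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    for _ in range(max_tries):
        # "the solution was completely randomized after every MAXFLIPS flips"
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        last_flip = {v: -1 for v in assign}  # timestamps for FIFO/LIFO ties
        for step in range(max_flips):
            if num_satisfied(assign) == len(clauses):
                return assign
            # score each variable by satisfied clauses after flipping it
            scores = {}
            for v in assign:
                assign[v] = not assign[v]
                scores[v] = num_satisfied(assign)
                assign[v] = not assign[v]
            best = max(scores.values())
            ties = [v for v, s in scores.items() if s == best]
            if tie_break == "random":
                v = rng.choice(ties)
            elif tie_break == "fifo":  # assumed: least recently flipped wins
                v = min(ties, key=lambda u: last_flip[u])
            else:                      # "lifo": most recently flipped wins
                v = max(ties, key=lambda u: last_flip[u])
            assign[v] = not assign[v]
            last_flip[v] = step
    return None  # resource bound exhausted with no satisfying assignment
```

For the tiny formula (x1 v x2), (!x1 v x2), (x1 v !x2) the only model is x1 = x2 = true, which any of the three tie-breaking rules finds within a few flips.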


### Table 1. Failure measures and branching factors for fail-first heuristics

2005

"... In PAGE 9: ... In particular, we measured the mean domain size of the variables selected by each heuristic (denoted |d|_chosen) and the total number of failures. Table 1... ..."

Cited by 9

### Table 2. Factor Analysis with Lexical and Random Tie-Breaking in Variable Selection

"... In PAGE 3: ...2 Stability of the factors Perhaps the first question that comes to mind in looking over such data is, Is this pattern of loadings related to basic causal factors affecting heuristic performance, or is it a statistical artifact due to extraneous, uncontrolled features of the experiment? Evidence that these factors reflect basic differences among heuristics comes from experiments that randomise either value selection or tie-breaking among variables. An example of the latter is shown in Table 2. In these experiments, the first variable was always selected lexically, and then the heuristic in question was used to select variables.... ..."

### Table 2. Factor Analysis with Lexical Value Ordering and Tie-Breaking, Random Value Selection, and Random Tie-Breaking for Variables

"... In PAGE 3: ... In either case, each problem is tested a large number of times, and the mean performance is used as a single data point in the factor analysis. Table 2 shows results with and without randomisation. In these experiments, the first variable was always selected lexically, and then the heuristic in question was used to select variables.... ..."

### Table 4. Factor Analysis for Selected Heuristics Under Three Propagation Regimes

"... In PAGE 4: ... Domain size was varied in this manner (which produces a rectangular distribution for |d|) to see if min-domain could be associated with the contention factor without relying on the technique of constant variable selection at the top of the search tree. The results (Table 4) show that this was successful for MAC and forward checking but only partially successful for simple backtracking. The main finding was that the same heuristics were associated with the same factors despite enormous differences in search efficiency associated with different degrees of propagation, including no propagation at all.... ..."

### Table 5. Search Measures for Heuristics

"... In PAGE 5: ... To this end, various measures were tested including measures of overall search effort (here, search nodes), a measure of the branching factor (mean |d| for variables selected during search), a measure of connectivity with future, uninstantiated variables (mean forward degree), and two failure measures defined in Section 2. The results of these tests are shown in Table 5, where data for contention heuristics are shown at the top of the table (rows 1-7), while data for simplification heuristics are shown at the bottom. In the first place, in these tests there are no consistent differences in overall performance (nodes) between heuristics of different classes.... In PAGE 6: ...) When this effect was examined for sets of 100 problems, it was found that promise reaches its maximum value sooner for simplification than for contention heuristics. As with the data in Table 5, there was no overlap in the averages for heuristics in the two classes.... ..."

### Table 2: Branches explored and CPU time (seconds) used to find a minimal length ruler (F) or prove that none shorter exists (P). A "-" means that the run was cut off after 10^5 branches.

...remaining domain (SD); this heuristic has often been found to give good results, although mainly in binary constraint satisfaction problems. In one version of the smallest-domain heuristic, both the original and the auxiliary variables are used as search variables. In the other version (restricted SD) only the original variables are used as search variables. The lexicographic ordering selects the original variables in order, starting from x2 (since x1 is already assigned to 0). When all the original variables have been assigned, constraint propagation will have already assigned the auxiliary variables as well, and therefore a restricted form of this heuristic does exactly the same thing.

1999

"... In PAGE 7: ... We concentrate on the auxiliary variable representation with the single all-different constraint as this is the most efficient in terms of CPU time, from those compared in Table 1. Table 2 compares lexicographic ordering with two versions of the heuristic which chooses next the variable with smallest remaining... In PAGE 8: ... Hence, lexicographic ordering of the auxiliary variables gives identical results to lexicographic ordering of the original variables, in both ternary representations. It is clear from Table 2 that lexicographic ordering gives much better results than smallest domain ordering. With the smallest domain ordering, it is a good idea to search only on the original variables, and not on the auxiliary variables as well.... ..."

Cited by 25
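The two orderings this entry compares are easy to state as a single selection function: static lexicographic order versus dynamic smallest-remaining-domain (SD), with the "restricted" variant limiting SD to the original (non-auxiliary) variables. The sketch below is ours; the function name, tie-breaking by name, and domain representation are illustrative assumptions, not the paper's code.

```python
def select_variable(unassigned, domains, heuristic="lex", search_vars=None):
    """Illustrative variable-ordering rules from the snippet.

    unassigned  -- iterable of variable names still to be assigned
    domains     -- dict mapping variable name -> set of remaining values
    heuristic   -- "lex" (static lexicographic) or "sd" (smallest domain)
    search_vars -- optional subset to search on (the "restricted" variant);
                   None means all variables are search variables
    """
    candidates = [v for v in unassigned
                  if search_vars is None or v in search_vars]
    if heuristic == "lex":
        return min(candidates)  # fixed lexicographic order
    # "sd": dynamic smallest remaining domain, ties broken lexically
    return min(candidates, key=lambda v: (len(domains[v]), v))
```

With forward checking or MAC pruning `domains` between calls, "sd" becomes the usual dynamic fail-first ordering, while "lex" ignores domain sizes entirely; passing `search_vars` as the set of original variables gives restricted SD.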

### Table 1. Simple forward selection results in the Broomehill study area. number of ...

"... In PAGE 5: ...and ability to extrapolate by reducing the number of input variables. Of the heuristic feature selection approaches tested, we present the results of an approach called simple forward selection in Table 1. The simple forward selection approach is as follows: 1.... ..."
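Simple forward selection, as named in the snippet, is the standard greedy procedure: start with no input variables and repeatedly add the candidate that most improves a model score, stopping when no addition helps. The sketch below is a generic version under our own assumptions; the score function, stopping rule, and parameter names are illustrative, not the paper's exact procedure.

```python
def simple_forward_selection(features, score_fn, max_features=None):
    """Greedy forward feature selection (generic sketch, not the paper's).

    features     -- list of candidate input variables
    score_fn     -- maps a list of selected features to a score to maximize
    max_features -- optional cap on how many features to select
    Stops as soon as no remaining candidate improves the current score.
    """
    selected = []
    best_score = score_fn(selected)
    while max_features is None or len(selected) < max_features:
        remaining = [f for f in features if f not in selected]
        if not remaining:
            break
        # evaluate each one-feature extension of the current subset
        scored = [(score_fn(selected + [f]), f) for f in remaining]
        top_score, top_feat = max(scored)
        if top_score <= best_score:
            break  # no candidate improves the model; stop here
        selected.append(top_feat)
        best_score = top_score
    return selected, best_score
```

In practice `score_fn` would be something like cross-validated accuracy of a model retrained on the candidate subset; the small per-feature penalty in the usage below stands in for the snippet's goal of keeping the number of input variables down.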

### Table 1. Comparison of Soft-SAT and WMax-SAT. Time in seconds.

"... In PAGE 7: ... WMax-SAT incorporates two static variable selection heuristics: the lexicographic heuristic (lex) and the heuristic that instantiates the variables taking into account the number of occurrences in decreasing order (dno). Table 1 shows the results obtained for different sets of 100 random binary CSPs of the hard region of the phase transition: the first column shows the parameters given to the generator of random binary CSPs, and the remaining columns show the experimental results for Soft-SAT with heuristics csp and lex, and for WMax-SAT with heuristics lex and dno. For each heuristic we give the mean and median time needed to solve an instance.... ..."