### Table 1 Parameter Invariance and Free Parameters for the Models of the Go/No-Go Lexical Decision Task Relative to the Diffusion Model Fits to the Two-Choice Lexical Decision Task

### Table 3. In the case of there being two choices as in

in Grammar Defined Introns: An Investigation Into Grammars, Introns, and Bias in Grammatical Evolution.

2001

"... In PAGE 3: ... .332) C 85/256 (.332) Table 3: Probabilities of selecting a production rule using 8-bit codons. ... successful at removing the bias.... ..."

Cited by 1
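The 85/256 (.332) figures in the excerpt above are consistent with the standard grammatical-evolution mapping, in which an 8-bit codon selects one of a nonterminal's production rules via the modulo operator. Because 256 is not divisible by 3, one rule is picked slightly more often than the others, which is the bias the paper addresses. A minimal Python sketch (the three-rule grammar is an assumed example, not taken from the paper) reproduces the counts:

```python
from collections import Counter

NUM_RULES = 3   # assumed: a nonterminal with three production rules (A, B, C)
CODON_BITS = 8  # 8-bit codons give 256 possible values

# Standard GE mapping: rule index = codon value mod number of rules.
counts = Counter(codon % NUM_RULES for codon in range(2 ** CODON_BITS))

for rule, n in sorted(counts.items()):
    print(f"rule {rule}: {n}/256 = {n / 256:.3f}")
```

Two rules land at 85/256 ≈ .332, matching the excerpt, while the remaining rule gets 86/256 ≈ .336; that small asymmetry is the selection bias the grammar-defined introns are meant to remove.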

### Table 1 Percentage of trials in which Fill23 was selected during two-choice primary generalization tests for individual students in Experiment 1.

"... In PAGE 7: ... Two-choice primary generalization test performances. As seen in Table 1, no systematic differences were observed in student performances across conditions or comparison selections during two-choice generalization testing. Thus, averaged functions were obtained for students in each group and are depicted in Figure 2.... ..."

### Table 2: Optimal ^ for examples with two choices for loss, and parameterization as in (6).

1997

"... In PAGE 6: ... Consider the example with squared error loss, L(x; y) = (x − y)^2. Table 2 shows the optimal ^ in this case and also when L(x; y) = (log(x/y))^2. Fig.... In PAGE 6: ... in this case and also when L(x; y) = (log(x/y))^2. Fig. 1 shows strike limit trajectories for a single replicate of the B7 trial when removals are taken according to Q3rds. Shown are quotas for PBR, `Thirds', H, and C(^) for the two loss functions of Table 2. In this figure, the ideal strike limits are quite low, because the B7 trial is pessimistic and `Thirds' generally permits excessive catch.... In PAGE 12: ... Sensitivity to Class of C: Appropriate choice of a class for C raises the standard statistical issues of model selection: is the model sufficiently flexible to reflect the data without being over- or under-parameterized? In this application, due to the vast amount of `data' and the interest in accurate predictions, it seems that the balance should be shifted more than usual towards large, flexible classes with many parameters. Also, Table 2 shows that ^ is similar for the two cases. However, this need not be the case because similar fits can be obtained for many different values, just as in ordinary linear regression.... ..."

Cited by 5
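The two losses compared in the excerpt appear to be squared error, L(x; y) = (x − y)^2, and a squared log-ratio loss; the extraction is garbled, so the second form, (log(x/y))^2, is an assumption. A small Python sketch (with made-up data, not the paper's) shows why the two losses yield different optimal point estimates: squared error is minimized by the arithmetic mean, the squared log-ratio by the geometric mean:

```python
import math

data = [2.0, 4.0, 8.0]  # illustrative values only

def squared_error(x, y):
    return (x - y) ** 2

def squared_log_ratio(x, y):
    # Assumed reconstruction of the second loss: (log(x/y))^2
    return math.log(x / y) ** 2

def best_estimate(loss, lo=1.0, hi=10.0, steps=9000):
    # Brute-force grid search for the constant estimate minimizing total loss.
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return min(grid, key=lambda t: sum(loss(x, t) for x in data))

arith_mean = sum(data) / len(data)                         # 14/3 ≈ 4.667
geo_mean = math.exp(sum(map(math.log, data)) / len(data))  # exactly 4.0

print(best_estimate(squared_error), arith_mean)
print(best_estimate(squared_log_ratio), geo_mean)
```

This only illustrates how the choice of loss shifts the optimum; in the paper the optimization is over the parameterization in (6), not a constant estimate.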

### Table 8: Accuracies for the first two choices as ordered by an interactive intelligent thesaurus.

2007

"... In PAGE 7: ... Table 9, for the data sets used in the previous sections. The accuracy values are lower than in the case when both the left and the right context are considered (Table 8). This is due in part to the fact that some sentences in the test sets have very little left context, or no left context at all.... ..."

Cited by 1

### Table 3. Root mean square deviations (RMSDs) for subjects using the two-choice and nine-choice answer methods with the fuzzy logical model of perception (FLMP) and the categorical model (CMP)

"... In PAGE 8: ... The models were fit to the results of each subject individually and to the average results of each of the four groups of subjects. Table 3 gives the average RMSD values for the fit of the individual subjects and the RMSD values for the average subject. The FLMP gives a consistently better description of the results for all subjects, regardless of category width or sex.... ..."
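RMSD here is the usual root mean square deviation between a model's predicted response proportions and the observed ones, with smaller values meaning a better fit. A minimal Python sketch of the computation (the proportions below are hypothetical, not the paper's data):

```python
import math

def rmsd(predicted, observed):
    """Root mean square deviation between predicted and observed values."""
    assert len(predicted) == len(observed)
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Hypothetical predicted vs. observed choice proportions
pred = [0.10, 0.45, 0.90]
obs = [0.12, 0.40, 0.88]
print(rmsd(pred, obs))
```

Comparing models such as the FLMP and CMP then amounts to fitting each model's free parameters and reporting the RMSD of its best-fitting predictions.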

### Table 5: The following gives the mean and error (95% confidence interval) over 30 independent runs for each measurement for generalization performance defined by GW(i) at the start and at the end of CCL for the two-choice IPD.

### Table 7: The following gives the mean and error (95% confidence interval) over 30 independent runs for each measurement for generalization performance defined by GA(i) at the start and at the end of CCL for the two-choice IPD.

### Table 4 Percentage of intervals in which Fill23 was selected as a function of sample fill value during the two-choice generalization test in Experiment 2.

### Table 1: Summary of the three task types. Type 1 included no backtracking, so three clicks were needed for both sequential and simultaneous menus. Type 2 questions involved two choices from the third menu, thus requiring two additional clicks for sequential menu users (one to return to the previous menu, and one to make a second choice), and one additional click for simultaneous menus. Finally, Type 3 questions varied the second category. For simultaneous menus, this added only the one click required to make the additional choice, so these questions are no harder than those in Type 2. However, sequential users had to make four additional clicks: two to return to the second menu, one to make a new choice from that menu, and one to repeat the selection made from the third menu.

2000

"... In PAGE 9: ... However, sequential menus require seven clicks: five as required for Type 2, plus one Back click and a new menu selection at the second level. These results are summarized in Table 1. The Items Varied for any given task is the ... In PAGE 10: ... However, sequential users had to make four additional clicks: two to return to the second menu, one to make a new choice from that menu, and one to repeat the selection made from the third menu. Further analysis can generalize the contents of Table 1 into a predictive model based on the number of clicks required to complete each task. For simultaneous menus, users must make one selection at each of the initial menus, followed by an additional click for each comparison that must be made.... ..."

Cited by 17
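The Type 1/2/3 click counts described in the Table 1 caption can be turned into the predictive model the excerpt mentions. The sketch below follows my reading of the caption (3 clicks for Type 1; Type 2 adds 2 sequential / 1 simultaneous; Type 3 adds a further 4 sequential / 0 simultaneous); one excerpt quotes a different sequential total, so treat these counts as illustrative rather than the paper's definitive model:

```python
def clicks(task_type: int, menu: str) -> int:
    """Predicted click count for a task type, per the Table 1 caption (assumed reading)."""
    total = 3  # Type 1 baseline: one selection from each of three menus
    if menu == "sequential":
        if task_type >= 2:
            total += 2  # one click back to the third menu, one for the second choice
        if task_type >= 3:
            total += 4  # two clicks back to the second menu, one new choice there,
                        # and one repeat of the third-menu selection
    elif menu == "simultaneous":
        if task_type >= 2:
            total += 1  # all menus visible: one extra click per extra comparison
        # Type 3 is "no harder than Type 2" for simultaneous menus
    return total

for t in (1, 2, 3):
    print(t, clicks(t, "sequential"), clicks(t, "simultaneous"))
```

The sequential/simultaneous gap grows with task type because every backtrack in a sequential menu costs extra clicks, while a simultaneous layout pays only one click per additional comparison.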