### Table 1: Summary of the RANSAC and R-RANSAC algorithms. Step II is added to RANSAC to randomize its evaluation.

2002

"... In PAGE 10: ...to test only a small number of data points to conclude with high confidence that they do not correspond to the sought solution. The idea was implemented in a two-step evaluation procedure (Table 1). We also introduced in Section 3 a mathematically tractable class of pre-tests based on small test samples.... ..."

Cited by 2
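The two-step evaluation the snippet describes — a cheap pre-test on a few random points before scoring a hypothesis on the full data — can be sketched roughly as follows. Function and parameter names (`r_ransac_evaluate`, `model_error`, `d`) are illustrative assumptions, not the paper's API:

```python
import random

def r_ransac_evaluate(model_error, data, threshold, d=1):
    """Two-step R-RANSAC style evaluation (a sketch).

    Step I (pre-test): check d randomly chosen points; if any is an
    outlier to the hypothesis, reject the hypothesis immediately.
    Step II: only hypotheses surviving the pre-test are scored
    against the full data set.
    """
    # Step I: cheap pre-test on a small random sample of d points
    for p in random.sample(data, d):
        if model_error(p) > threshold:
            return None  # hypothesis rejected without a full scan
    # Step II: full consensus (inlier) count for surviving hypotheses
    return sum(1 for p in data if model_error(p) <= threshold)
```

Most hypotheses generated by RANSAC are wrong, so rejecting them after only `d` point evaluations is where the speedup comes from.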

### Table 1: Work-Based Learning Student Cases: Academic Reinforcement Findings

"... In PAGE 22: ... We also looked for examples of work-based learning positively affecting motivation towards schoolwork. (See Table 1 for the results of the analysis.) Below, we give examples from our fieldwork, as well as examples, as appropriate, from the work of Moore (1981a; 1981b; 1986), and Stasz and her associates (Stasz & Brewer, 1998; Stasz & Kaganoff, 1997).... In PAGE 34: ...sometimes the experience of work in the real world has a different kind of motivational effect: two other students, Renee and Maria, had such tedious internships that they became highly motivated to attend college directly from high school, rather than delaying post-secondary enrollment or combining it with work. Summary In Table 1, we summarize the results of the analysis of our cases, noting for each student whether the three claims for academic reinforcement (school-based knowledge is applied, school-based knowledge is explored and tested, and motivation towards school is positively affected) were met. For nine of the students (over one-third of our sample), over the course of multiple visits to the internship sites, and before-and-after in-depth interviews with the students, we found no evidence for any of the claims.... ..."

### Table 3. Performance of Reinforcement Learning in selecting the right service selection strategies.

"... In PAGE 11: ... We also measure the average ratio of satisfaction (RGM) and the average time required for service selection (TGM) if the consumers had used only the SPSGM in their service selections. Table 3 shows the results of our simulations. The first three columns refer to the parameters that are used to configure the simulation environment.... ..."

### Table 1 Fuzzy rule bases

2006

"... In PAGE 3: ... The rules reflect an initial strategy for combining the different forecast values that has been suggested by a user. For example, if the same level of trust is given to the customer forecast and expert forecast, the rules in Rule Base 1 can have the form as given in Table 1(a). Rule Base 2 and Rule Base 3 are defined in a similar way (see Table 1 (a) and (b), respectively).... In PAGE 3: ... For example, if the same level of trust is given to the customer forecast and expert forecast, the rules in Rule Base 1 can have the form as given in Table 1(a). Rule Base 2 and Rule Base 3 are defined in a similar way (see Table 1 (a) and (b), respectively). However, the proposed DSS_DF includes a learning mechanism that modifies and improves the initial rule bases... ..."
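A rule base that gives the customer and expert forecasts equal trust, as the snippet describes, might combine them along these lines. The membership functions, rule consequents, and the `combine_forecasts` name are all invented for illustration; the paper's actual rule bases differ:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def combine_forecasts(customer, expert):
    """Toy fuzzy rule base combining two forecasts in [0, 1] with
    equal trust (a sketch, not the paper's DSS_DF rules).
    Rules: both LOW -> LOW, both HIGH -> HIGH, disagreement -> MEDIUM.
    Defuzzified by a weighted average of rule outputs."""
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    rules = [
        (min(low(customer), low(expert)), 0.0),    # both low  -> low
        (min(high(customer), high(expert)), 1.0),  # both high -> high
        (min(low(customer), high(expert)), 0.5),   # disagree  -> medium
        (min(high(customer), low(expert)), 0.5),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5
```

The learning mechanism the snippet mentions would then adjust these rules (e.g. their consequents or membership functions) as forecast errors accumulate.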

### Table 2: Parameters used in reinforcement learning

"... In PAGE 4: ... Note that Hall and Mars found that the SLA outperformed common static policies, such as FCFS, EDF and SP. We then applied our RL system under the same simulation conditions, using the parameters shown in Table 2. The measured mean delay for the batch algorithm is shown in Figure 4, and for the ε-greedy algorithm in Figure 5.... ..."
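The ε-greedy algorithm mentioned in the snippet is the standard exploration rule: with probability ε take a random action, otherwise take the currently best one. A minimal sketch (the ε value here is an assumption, not the paper's Table 2 setting):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """epsilon-greedy action selection over a list of Q-values:
    explore uniformly at random with probability epsilon,
    otherwise exploit the greedy (highest-Q) action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

With ε = 0 this degenerates to a purely greedy policy; in practice ε is often annealed toward 0 as learning progresses.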

### Table 5.2: Running times of reinforcement learning algorithms

### Table 2. Parameters used in the reinforcement learning algorithms

2004

"... In PAGE 4: ... Reinforcement learning parameters were determined empirically, and in each case, were optimal (chosen from a discrete set of values). These are shown in Table 2. A trace threshold of 0.01 was also used in the case of Sarsa(λ).... ..."

Cited by 1
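The trace threshold mentioned for Sarsa(λ) prunes eligibility traces once they decay below a cutoff, bounding the per-step update cost. A tabular sketch of one such update (the α, γ, and λ values are illustrative, not the paper's Table 2 settings):

```python
def sarsa_lambda_update(Q, E, s, a, r, s2, a2,
                        alpha=0.1, gamma=0.9, lam=0.8,
                        trace_threshold=0.01):
    """One tabular Sarsa(lambda) update with accumulating traces
    and a trace threshold (a sketch). Q and E are dicts keyed by
    (state, action); traces below the threshold are dropped."""
    delta = r + gamma * Q.get((s2, a2), 0.0) - Q.get((s, a), 0.0)
    E[(s, a)] = E.get((s, a), 0.0) + 1.0  # accumulating trace
    for key in list(E):
        Q[key] = Q.get(key, 0.0) + alpha * delta * E[key]
        E[key] *= gamma * lam  # decay all traces
        if E[key] < trace_threshold:
            del E[key]  # prune negligible traces
```

Without the threshold, every state-action pair visited in an episode stays in `E` forever; with it, only the recent history is updated each step.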

### Table 3. The Q-RRL algorithm for relational reinforcement learning.

"... In PAGE 13: ... Instead of having an explicit lookup table for the Q-function, an implicit representation of this function is learned in the form of a logical regression tree, called a Q-tree. The Q-RRL algorithm is given in Table 3. The main point where RRL differs from the algorithm in Section 3.... ..."
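The overall loop of such an algorithm — act greedily with respect to the current Q approximation, then turn the episode into (state, action, Q-target) examples from which the regression tree (Q-tree) is re-learned — can be sketched as below. The function names (`env_step`, `q_predict`, `rrl_episode`) are assumptions, and the tree learner itself is left abstract:

```python
def rrl_episode(env_step, actions, q_predict, start_state,
                gamma=0.9, max_steps=50):
    """One episode in a Q-RRL style scheme (a sketch, not the
    paper's Table 3). q_predict(s, a) is the current Q-tree's
    prediction; the returned examples would be fed to a
    regression-tree learner to produce the next Q-tree."""
    trace, examples, s = [], [], start_state
    for _ in range(max_steps):
        a = max(actions, key=lambda act: q_predict(s, act))  # greedy
        s2, r, done = env_step(s, a)
        trace.append((s, a, r, s2))
        s = s2
        if done:
            break
    # Backward pass: Q-learning targets for the tree learner
    for (s, a, r, s2) in reversed(trace):
        target = r + gamma * max(q_predict(s2, b) for b in actions)
        examples.append((s, a, target))
    return examples
```

The relational aspect lies in the tree learner: its internal nodes test logical (first-order) conditions on the state, which a lookup table cannot generalize over.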

### Table 3. Average number of inliers, samples and number of LOs taken in 100 runs of RANSAC algorithms.

### Table 1. Four reinforcement learning algorithms, the counterpart of the Bellman equation for each, and each of the corresponding residual algorithms.

1995

"... In PAGE 7: ... Given the Bellman equation counterpart for a reinforcement learning algorithm, it is straightforward to derive the associated direct, residual gradient, and residual algorithms. As can be seen from Table 1, all of the residual algorithms can be implemented incrementally except for residual value iteration. Value iteration requires that an expected value be calculated for each possible action, and then the maximum found.... In PAGE 7: ... This is clearly impractical, and appears to have been one of the motivations behind the development of Q-learning. Table 1 also shows that for a deterministic MDP, all of the algorithms can be implemented without a model, except for residual value iteration. This may simplify the design of a learning system, since there is no need to learn a model of the MDP.... ..."

Cited by 145
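The direct, residual-gradient, and residual variants the snippet distinguishes can be illustrated on linear TD(0). The update direction is `phi` for the direct method, `phi - gamma*phi2` for the residual-gradient method, and the residual algorithm interpolates between them with a mixing parameter β (a sketch after Baird's formulation; the parameter values are illustrative):

```python
def residual_td_update(w, phi, r, phi2, alpha=0.1, gamma=0.9, beta=1.0):
    """One linear TD(0) weight update in the residual-algorithm
    family (a sketch). w, phi, phi2 are lists of floats:
    beta = 0 recovers the direct method, beta = 1 the
    residual-gradient method; intermediate beta interpolates."""
    v = sum(wi * xi for wi, xi in zip(w, phi))    # V(s)
    v2 = sum(wi * xi for wi, xi in zip(w, phi2))  # V(s')
    delta = r + gamma * v2 - v                    # Bellman residual
    # direct direction: phi; residual-gradient: phi - gamma*phi2
    return [wi + alpha * delta * (x1 - beta * gamma * x2)
            for wi, x1, x2 in zip(w, phi, phi2)]
```

The residual-gradient end (β = 1) descends the squared Bellman residual and so converges under function approximation, at the cost of slower learning; the direct end (β = 0) is faster but can diverge, which motivates the interpolation.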