### Table 6. What is the learning efficiency of resources?

"... In PAGE 2: ... The overall course organisation required the use of the theory modules, and we see that 100 percent of the students use them. But the learning-efficiency results introduce nuances about this "obligation" (see Table 6). Table 1 also shows that students use the quizzes less, and that they use the communication tools even less.... In PAGE 5: ... Only the Chat does not show this very big positive progression, and we will investigate further to determine why. The two right columns of Table 6 show that the perception of learning efficiency carries great weight when students accept or refuse an e-learning course. Table 6.... ..."

### Table 6.4: Results of learning algorithms with inter-signer learning, with partial training from that person.

1995

### Table 1: The progression algorithm.

1998

"... In PAGE 9: ... The technique of formula progression works by labeling the initial state with the sentence representing the goal; call it g. For each successor of the initial state, generated by forward chaining, a new formula label is generated by progressing the initial state's label using the algorithm given in Table 1. This new formula is used to label the successor states.... ..."

Cited by 109
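The excerpts above describe the core idea of formula progression: a state is labelled with a temporal-logic formula, and each successor state is labelled with the progression of that formula through the current state. A minimal propositional sketch of that idea follows; the tuple-based formula encoding and helper names are my own illustrative assumptions, not the notation of the cited tables.

```python
# Propositional sketch of LTL formula progression: progress(f, state)
# returns the formula that must hold on the *rest* of the trace, given
# that the trace's current state is `state` (a set of true atoms).

TRUE, FALSE = ('true',), ('false',)

def progress(f, state):
    """Progress formula f through one state of a trace."""
    op = f[0]
    if op == 'true':  return TRUE
    if op == 'false': return FALSE
    if op == 'atom':  return TRUE if f[1] in state else FALSE
    if op == 'and':   return _and(progress(f[1], state), progress(f[2], state))
    if op == 'or':    return _or(progress(f[1], state), progress(f[2], state))
    if op == 'next':  return f[1]                      # X f1 -> f1
    if op == 'always':                                 # G f1 -> prog(f1) & G f1
        return _and(progress(f[1], state), f)
    if op == 'eventually':                             # F f1 -> prog(f1) | F f1
        return _or(progress(f[1], state), f)
    if op == 'until':                                  # f1 U f2
        return _or(progress(f[2], state),
                   _and(progress(f[1], state), f))
    raise ValueError(f'unknown operator {op!r}')

def _and(a, b):
    # Boolean simplification so progression can collapse to true/false.
    if FALSE in (a, b): return FALSE
    if a == TRUE: return b
    if b == TRUE: return a
    return ('and', a, b)

def _or(a, b):
    if TRUE in (a, b): return TRUE
    if a == FALSE: return b
    if b == FALSE: return a
    return ('or', a, b)

# Label the "initial state" with the goal G(p), then progress it.
goal = ('always', ('atom', 'p'))
label = progress(goal, {'p'})       # p holds: the label G(p) persists
print(label)
print(progress(label, {'q'}))       # p fails: progression collapses to false
```

A planner using this would prune any successor whose label progresses to `false`, which is exactly the incremental checking the excerpts describe.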

### Table 1: The progression algorithm.

1995

"... In PAGE 5: ... We have developed a mechanism for doing incremental checking of an LTL formula. The key to this method is the progression algorithm given in Table 1. In the algorithm, quantified formulas are progressed by progressing all of their instances.... In PAGE 8: ... Our planner can take this first-order definition of a predicate (rewritten in Lisp syntax) as input. And we can then use this predicate in an LTL control formula, where during the operation of the progression algorithm (Table 1) its first-order definition will be evaluated in the current world for various instantiations of its parameter x. Hence, we can use a strategy of preserving good towers by setting our LTL control formula to □ ∀[x : clear(x)] goodtower(x) ⇒ goodtowerabove(x), (1) where the predicate goodtowerabove is defined in a manner that is symmetric to goodtowerbelow.... ..."

Cited by 77

### Table 1: The progression algorithm.

1995

"... In PAGE 6: ... We have developed a mechanism for doing incremental checking of an LTL formula. The key to this method is the progression algorithm given in Table 1. In the algorithm, quantified formulas are progressed by progressing all of their instances.... In PAGE 9: ... Our planner can take this first-order definition of a predicate (rewritten in Lisp syntax) as input. And we can then use this predicate in an LTL control formula, where during the operation of the progression algorithm (Table 1) its first-order definition will be evaluated in the current world for various instantiations of its parameter x. Hence, we can use a strategy of preserving good towers by setting our LTL control formula to □ ∀[x : clear(x)] goodtower(x) ⇒ goodtowerabove(x), (1) where the predicate goodtowerabove is defined in a manner that is symmetric to goodtowerbelow.... ..."

Cited by 77

### Table 1. The experimentation phase implemented in Progressive RL

2004

"... In PAGE 5: ... The reinforcement learner. In the current implementation of Progressive RL, the experimentation phase uses tabular Q-learning (Watkins, 1989). Table 1 provides details of the algorithm used in the experimentation phase of Progressive RL. The notation used in this description is the same as that used in Section 2.... ..."

Cited by 6
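The two Progressive RL entries above say only that the experimentation phase uses tabular Q-learning (Watkins, 1989). For reference, a minimal self-contained sketch of that method follows; the tiny chain environment and all parameter values are illustrative assumptions, not details from the cited papers.

```python
import random
from collections import defaultdict

# Tabular Q-learning on a 5-state chain: states 0..4, actions -1/+1,
# reward 1 only on reaching the goal state 4. All constants are
# illustrative choices, not values from the cited work.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]

Q = defaultdict(float)             # Q[(state, action)] -> estimated return

def step(state, action):
    """Deterministic chain dynamics; episode ends on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for _ in range(500):               # episodes
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Watkins' update: move Q(s,a) toward r + gamma * max_b Q(s',b).
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(st, a)]) for st in range(N_STATES - 1)]
print(greedy)                      # learned greedy policy for states 0..3
```

After training, the greedy policy moves right in every state, i.e. toward the goal, which is what the Q-table should encode for this chain.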

### Table 1: The Experimentation Phase Implemented in Progressive RL

"... In PAGE 2: ... The Reinforcement Learner. The current implementation of Progressive RL is based on tabular Q-learning [25]. Table 1 provides details of the algorithm used in the experimentation phase of Progressive RL. The notation used in this description is the same as that used in Section 1.... ..."

### Table 1. Progression of the algorithm

1999

"... In PAGE 6: ... The weights are highlighted within the tables. The benefits of the various edges are depicted in Table 1. For example, the benefit of e1 is 2 because allocating 2 buckets (= weight(e1)) to SC will bring down the errors of two sub-cubes (SC and S) below .... ..."

Cited by 36

### Table 2 Evaluation of the progressive algorithm

2003

"... In PAGE 8: ... Normally, the more accurately we could reconstruct the 3D object, the more accurate the sketch drawings became. Table 2 shows the evaluation of the progressive algorithm for various projections and objects. It shows that the proposed algorithm reconstructs the most plausible objects from the sketch drawings in a short time.... ..."