Results 1 - 10 of 24,255
Table 1: Differentiating Privatization, Prefunding, and Diversification
1998
Cited by 4
Table 2. Deep Learning Differences by Discipline
2005
"... In PAGE 12: ... In fact, many seniors in every area use deep learning approaches at least some of the time. For the deep learning scale ( Table2 ), seniors in the social sciences have the highest average score even after controlling for student characteristics (effect size with controls = 0.26, p lt; 0.... ..."
Table 2: Assessment Object Strategy
"... In PAGE 24: ...Learning Modeling and Assessment R amp;D for Technology-Enabled Learning Systems 19 The integration with learning environments and data tracking/reporting systems will involve demonstration projects to show that the software for item/task generation, scoring, and inference can be integrated into a variety of online learning environments. Table2 presents an agenda to create an agreed-upon modular architecture for representing reusable components of assessment tasks, and to develop processes and software to support such an assessment ... ..."
Table 1: Contemporary learning strategies supporting deep approaches to learning
2002
"... In PAGE 2: ...mphasis on student-centred instruction (p. 45). Many writers have attempted to conceptualise the attributes and nature of learning settings for higher education that promote deep learning through an emphasis on learning processes. Table1 provides a summary and synthesis of the descriptions of a number of researchers and writers who have explored these conditions. A Framework Describing Learning Approaches A number of consistent elements appear to emerge from the literature which describes the conditions under which students can be encouraged to seek understanding and comprehension as distinct from surface level learning in instances where generic skills development is being sought.... ..."
Cited by 1
Table 6: Differential Effects for Small and Service Sector and Private Firms
Table 3.1: inference rules for membership constraints
1997
"... In PAGE 5: ... A sequent is here a triple ; ! , where is a signature, , , B and C are nite sets of formulas. The proof system presented in [18] ( Table3 ) can be used as a basis of a logic programming language since a goal-directed style of theorem proving is correct and complete for it. This inference system uses only uniform proofs : a cut-free sequent proof is uniform if whenever the succedent of a sequent is not atomic, that sequent is the conclusion of a right-introduction rule.... In PAGE 5: ... When the succedent is atomic, the left-rules are used. ; B; C; ! ; B ^ C; ! ^L ; ! ; B ; ! ; C ; ! ; B ^ C ^R ; B; ! ; C; ! ; B _ C; ! _L ; ! ; B ; ! ; B _ C _R ; ! ; C ; ! ; B _ C _R ; ! ; B ; C; ! ? ; B C; ! [ ? L ; B; ! ; C ; ! ; B C R ; ; B[x=t] ! ; ; 8xB ! 8L [ fcg; ! ; B[x=c] ; ! ; 8xB 8R [ fcg; ; B[x=c] ! ; ; 9x B ! 9L ; ! ; B[x=t] ; ! ; 9x B 9R ; ! ; ? ; ! ; B ?R Table3 : inference rules given by Miller... In PAGE 7: ...2 : inference rules for inclusion constraints 3.3 Solving mechanism If we have to reduce a sub-goal constitued by a conjunction of inclusion set constraints, we want to build a mechanism allowing to obtain a solved form, from which simple questions can be easily checked, and to generate a formula where all the free set variables have been removed, which the -prolog inference rules will be able to reduce ( Table3 ). For instance, the constraint... In PAGE 8: ... The operator will be suppressed, but not the 2 operator. For instance, if the formula is of the form 8z (p(z) ) z 2 se), then it is transformed by the 8R and R inference rules (see Table3 ), and we have to prove the new formula c 2 se with p(c) as a fact of the program and c a new symbol. Our algorithm (table 3.... ..."
Cited by 2
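The uniform-proof discipline quoted in the result above is essentially goal-directed search: the top connective of the goal alone selects a right-introduction rule, and the program is consulted only once the goal is atomic. The following Python sketch illustrates just that control flow; the formula classes, the set-of-atoms program, and the omission of implication and quantifiers are simplifying assumptions, not the paper's system.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

def prove(program: set, goal) -> bool:
    # Right rules, chosen purely by the goal's top connective (uniform proof).
    if isinstance(goal, And):      # conjunctive goal: prove both conjuncts
        return prove(program, goal.left) and prove(program, goal.right)
    if isinstance(goal, Or):       # disjunctive goal: prove either disjunct
        return prove(program, goal.left) or prove(program, goal.right)
    # Atomic goal: fall back to the program (a stand-in for the left rules).
    return goal in program

program = {Atom("p"), Atom("q")}
print(prove(program, And(Atom("p"), Or(Atom("r"), Atom("q")))))  # True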
Table 6. Partial Correlations between Deep Learning Scales and Educational Outcomes
2005
"... In PAGE 14: ...o 3.35) compared to the others (means on the other subscales range from 2.60 to 3.13). Table6 contains partial correlations between the deep learning scales and three student outcome variables (gains in personal and intellectual development, grades, and satisfaction) calculated within each of the disciplinary areas. We were primarily interested in determining if the relationships between deep approaches to learning and student outcomes were consistent with scores on the deep learning scales.... ..."
Table 1 can be directly transformed into the fuzzy rulebase. Two fuzzy variables, degree of difficulty (D) and error (E), will serve as antecedents. Their membership values are depicted in figure 1. The consequent variable f contains three fuzzy terms don't learn, learn weakly and learn heavily. The resulting control surface is achieved by min-max inference and center-of-gravity defuzzification and graphed in figure 2, showing how the learning rate is increased for patterns that need more learning to become adequate and decreased for patterns that have an error too low to be adequate and thus disturb the learning progress of the other patterns.
"... In PAGE 3: ... Table 1 shows how the learning rate should be set according to the degree of di culty and the error of a pattern. Table1 : Learning rate depending on adequateness DIFFICULTY ERROR ADEQUATENESS LERNING RATE simple small adequate learn weakly simple large inadequate learn heavily hard small inadequate don apos;t learn hard large adequate learn weakly A simple pattern that is already classi ed with a small error is represented ade- quately and needs to be learnt only weakly. A simple pattern with a large error is inadequate and requires a more intense learning.... ..."
Table 9. Mymensingh: Bivariate Probit Regression of NGO membership and adopter or likely adopter of private fishpond technology
2003
Table 3: Inclusion inference.
"... In PAGE 4: ...meronym, partonym and membership, and an approximation of inclusion based on overlap computation and context analy- sis (see Table3 ). Contextual inclusion is computed by check- ing the overlap between the context feature (see Section 3.... ..."