Results 1–10 of 56
The Impact of Individualizing Student Models on Necessary Practice Opportunities
In International Conference on Educational Data Mining (EDM), 2012.
"... When modeling student learning, tutors that use the Knowledge Tracing framework often assume that all students have the same set of model parameters. We find that when fitting parameters to individual students, there is significant variation among the individual’s parameters. We examine if this va ..."
Cited by 22 (1 self).
Abstract:
When modeling student learning, tutors that use the Knowledge Tracing framework often assume that all students share the same set of model parameters. We find that when fitting parameters to individual students, there is significant variation among the individuals' parameters. We examine whether this variation is important in terms of instructional decisions by computing the difference in the expected number of practice opportunities required if mastery is assessed using an individual student's own estimated model parameters, compared to the population model. In the dataset considered, we find that a significant portion of students are expected to perform twice as many practice opportunities under a population-based model as they would need under their own model parameters. We also find that an additional significant portion of students are likely to receive fewer practice opportunities than needed, implying that such students will be advanced too early. Though further work on additional datasets is needed to explore this issue in more depth, our results suggest that considering individual variation in student parameters may have important implications for the instructional decisions made in intelligent tutoring systems that use a Knowledge Tracing model.
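The comparison the abstract describes — how many practice opportunities a mastery-threshold tutor assigns under a population model versus a student's own parameters — can be illustrated with a small Monte Carlo sketch. This is not the paper's procedure or data; the Knowledge Tracing parameter values below (`fast`, `population`) are hypothetical, chosen only to show the effect.

```python
import random

def bkt_posterior(p_know, correct, guess, slip):
    """Bayes update of the tutor's mastery belief after one observed response."""
    if correct:
        num = p_know * (1.0 - slip)
        den = num + (1.0 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1.0 - p_know) * (1.0 - slip)
    return num / den

def practice_count(true_params, model_params, threshold=0.95, cap=50, rng=None):
    """Simulate one student; count the opportunities the tutor assigns before
    its belief (computed with model_params) reaches the mastery threshold."""
    rng = rng or random.Random(0)
    L0, T, G, S = true_params        # the student's actual parameters
    mL0, mT, mG, mS = model_params   # the parameters the tutor believes
    known = rng.random() < L0
    belief = mL0
    steps = 0
    while belief < threshold and steps < cap:
        steps += 1
        correct = rng.random() < ((1.0 - S) if known else G)
        belief = bkt_posterior(belief, correct, mG, mS)
        belief = belief + (1.0 - belief) * mT   # learning transition
        if not known:
            known = rng.random() < T
    return steps

# Hypothetical (L0, T, G, S): a fast learner vs. a slower population model.
fast = (0.4, 0.35, 0.2, 0.1)
population = (0.2, 0.1, 0.2, 0.1)
rng = random.Random(42)
n = 2000
own = sum(practice_count(fast, fast, rng=rng) for _ in range(n)) / n
pop = sum(practice_count(fast, population, rng=rng) for _ in range(n)) / n
```

With these made-up numbers, the fast learner is assigned more practice on average under the population model than under their own parameters, which is the over-practice effect the paper quantifies.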
Introducing item difficulty to the knowledge tracing model.
In Proceedings of UMAP, 2011.
"... ..."
(Show Context)
Detecting Learning Moment-by-Moment
"... Intelligent tutors have become increasingly accurate at detecting whether a student knows a skill, or knowledge component (KC), at a given time. However, current student models do not tell us exactly at which point a KC is learned. In this paper, we present a machinelearned model that assesses the ..."
Cited by 18 (9 self).
Abstract:
Intelligent tutors have become increasingly accurate at detecting whether a student knows a skill, or knowledge component (KC), at a given time. However, current student models do not tell us exactly at which point a KC is learned. In this paper, we present a machine-learned model that assesses the probability that a student learned a KC at a specific problem step (instead of at the next or previous problem step). We use this model to analyze which KCs are learned gradually, and which are learned in “eureka” moments. We also discuss potential ways that this model could be used to improve the effectiveness of cognitive mastery learning.
Sparse factor analysis for learning and content analytics
Journal of Machine Learning Research, 2014.
"... We develop a new model and algorithms for machine learningbased learning analytics, which estimate a learner’s knowledge of the concepts underlying a domain, and content analytics, which estimate the relationships among a collection of questions and those concepts. Our model represents the probabil ..."
Cited by 16 (10 self).
Abstract:
We develop a new model and algorithms for machine learning-based learning analytics, which estimate a learner's knowledge of the concepts underlying a domain, and content analytics, which estimate the relationships among a collection of questions and those concepts. Our model represents the probability that a learner provides the correct response to a question in terms of three factors: their understanding of a set of underlying concepts, the concepts involved in each question, and each question's intrinsic difficulty. We estimate these factors given the graded responses to a collection of questions. The underlying estimation problem is ill-posed in general, especially when only a subset of the questions are answered. The key observation that enables a well-posed solution is the fact that typical educational domains of interest involve only a small number of key concepts. Leveraging this observation, we develop both a biconvex maximum-likelihood-based solution and a Bayesian solution to the resulting SPARse Factor Analysis (SPARFA) problem. We also incorporate user-defined tags on questions to facilitate the interpretability of the estimated factors. Experiments with synthetic and real-world data demonstrate the efficacy of our approach. Finally, we make a connection between SPARFA and noisy, binary-valued (1-bit) dictionary learning that is of independent interest.
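The three-factor response model the abstract describes can be sketched as follows. This is a simplified illustration, not the paper's estimation algorithm: the log-odds of a correct answer are an inner product of a question's sparse, nonnegative concept loadings with the learner's concept knowledge, plus an intrinsic difficulty offset (under this sign convention, larger offsets make the question easier). A logistic link is used here for simplicity; the paper also considers a probit link. All matrices below are hypothetical toy values.

```python
import numpy as np

def sparfa_prob(W, C, mu):
    """SPARFA-style response probabilities for every question-learner pair.
    W: questions x concepts loadings (sparse, nonnegative)
    C: concepts x learners knowledge
    mu: per-question intrinsic difficulty offset"""
    Z = W @ C + mu[:, None]            # questions x learners log-odds
    return 1.0 / (1.0 + np.exp(-Z))    # logistic link

# Tiny hypothetical instance: 3 questions, 2 concepts, 2 learners.
W = np.array([[1.0, 0.0],    # question 1 tests concept A only
              [0.0, 2.0],    # question 2 tests concept B, strongly
              [0.5, 0.5]])   # question 3 mixes both
C = np.array([[ 1.0, -1.0],  # learner knowledge per concept
              [ 0.5, -0.5]])
mu = np.array([0.0, -1.0, 0.5])
P = sparfa_prob(W, C, mu)
```

Because the loadings are nonnegative, the learner with positive concept knowledge (column 0 of `C`) gets a higher success probability on every question than the learner with negative knowledge, which is the interpretability property SPARFA relies on.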
Individualized Bayesian Knowledge Tracing Models
"... Abstract. Bayesian Knowledge Tracing (BKT)[1] is a user modeling method extensively used in the area of Intelligent Tutoring Systems. In the standard BKT implementation, there are only skillspecific parameters. However, a large body of research strongly suggests that studentspecific variability in ..."
Cited by 15 (4 self).
Abstract:
Bayesian Knowledge Tracing (BKT) [1] is a user modeling method extensively used in the area of Intelligent Tutoring Systems. In the standard BKT implementation, there are only skill-specific parameters. However, a large body of research strongly suggests that student-specific variability in the data, when accounted for, could enhance model accuracy [5, 6, 8]. In this work, we revisit the problem of introducing student-specific parameters into BKT on a larger scale. We show that student-specific parameters lead to a tangible improvement when predicting the data of unseen students, and that parameterizing students' speed of learning is more beneficial than parameterizing a priori knowledge.
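The standard BKT update that this line of work individualizes can be sketched in a few lines. This is the textbook two-step update (condition on the observed response, then apply the learning transition), not the authors' fitting procedure; the parameter values and the two example students are hypothetical.

```python
def bkt_step(p_know, correct, p_learn, p_guess=0.2, p_slip=0.1):
    """One BKT update: Bayes-condition the mastery belief on the observed
    response, then apply the probabilistic learning transition."""
    if correct:
        post = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        post = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_slip))
    return post + (1 - post) * p_learn

def trace(responses, p_init, p_learn):
    """Belief trajectory for one student; p_init and p_learn are the
    parameters this paper makes student-specific."""
    beliefs = [p_init]
    for r in responses:
        beliefs.append(bkt_step(beliefs[-1], r, p_learn))
    return beliefs

# Same response sequence, two hypothetical students differing only in speed of learning.
obs = [True, False, True, True]
slow = trace(obs, p_init=0.3, p_learn=0.05)
fast = trace(obs, p_init=0.3, p_learn=0.40)
```

Given identical observations, the student with the higher learn rate accumulates mastery belief faster at every step, which is why parameterizing speed of learning changes the model's predictions for unseen students.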
The Sum is Greater than the Parts: Ensembling Models of Student Knowledge in Educational Software
Journal of Machine Learning Research Workshop and Conference Proceedings, 2012.
"... ..."
(Show Context)
New Potentials for Data-Driven Intelligent Tutoring System Development and Optimization (short title: Data-Driven Improvement of Intelligent Tutors)
"... learning for student modeling Increasing widespread use of educational technologies is producing vast amounts of data. Such data can be used to help advance our understanding of student learning and enable more intelligent, interactive, engaging, and effective education. In this paper, we discuss th ..."
Cited by 10 (0 self).
Abstract:
The increasingly widespread use of educational technologies is producing vast amounts of data. Such data can be used to help advance our understanding of student learning and enable more intelligent, interactive, engaging, and effective education. In this paper, we discuss the status and prospects of this new and powerful opportunity for data-driven development and optimization of educational technologies, focusing on Intelligent Tutoring Systems. We provide examples of the use of a variety of techniques to develop or optimize the select, evaluate, suggest, and update functions of intelligent tutors, including probabilistic grammar learning, rule induction, Markov decision processes, classification, and integrations of symbolic search and statistical inference.
Limits to accuracy: How well can we do at student modeling?
In Proceedings of the 6th International Conference on Educational Data Mining, 2013.
"... There has been a large body of work in the field of EDM involving predicting whether the student’s next attempt will be correct. Many promising ideas have resulted in negligible gains in accuracy, with differences in the thousandths place on RMSE or R2. This paper explores how well we can expect stu ..."
Cited by 9 (0 self).
Abstract:
There has been a large body of work in the field of EDM involving predicting whether the student's next attempt will be correct. Many promising ideas have resulted in negligible gains in accuracy, with differences in the thousandths place on RMSE or R2. This paper explores how well we can expect student modeling approaches to perform at this task. We attempt to place an upper limit on model accuracy by performing a series of cheating experiments. We investigate how well a student model can perform that has: perfect information about a student's incoming knowledge, the ability to detect the exact moment when a student learns a skill (binary knowledge), and the ability to precisely estimate a student's level of knowledge (continuous knowledge). We find that the binary knowledge model has an AUC of 0.804 on our sample data, relative to a baseline PFA model with an AUC of 0.745. If we weaken our cheating model slightly, such that it no longer knows a student's incoming knowledge but simply assumes students are incorrect on their first attempt, AUC drops to 0.747. Consequently, we argue that many student modeling techniques are relatively close to ceiling performance, and there are probably not large gains in accuracy to be had. In addition, knowledge tracing and performance factors analysis, two popular techniques, correlate with each other at 0.96, indicating few differences between them. We conclude by arguing that there are more useful student modeling tasks, such as detecting robust learning or wheel-spinning, and estimating parameters such as optimal spacing, that are deserving of attention.
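The AUC figures in this abstract (0.804 vs. 0.745) are the standard ranking metric for next-attempt correctness predictors. For readers unfamiliar with it, a minimal implementation of AUC as the Mann-Whitney rank statistic — the probability that a randomly chosen correct attempt is scored above a randomly chosen incorrect one — looks like this (a sketch of the general metric, not the paper's evaluation code):

```python
from itertools import product

def auc(labels, scores):
    """AUC as the Mann-Whitney rank statistic: the fraction of
    positive/negative pairs ranked correctly, counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ordered correctly:
value = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

A perfect ranker scores 1.0 and a coin flip 0.5, which is why differences in the thousandths place between competing student models support the paper's ceiling argument.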
Ensembling predictions of student knowledge within intelligent tutoring systems.
In User Modeling, Adaptation, and Personalization, 2011.
"... ..."
(Show Context)
Adaptive Practice of Facts in Domains with Varied Prior Knowledge
"... We propose a modular approach to development of a computerized adaptive practice system for learning of facts in areas with widely varying prior knowledge: decomposing the system into estimation of prior knowledge, estimation of current knowledge, and selection of questions. We describe specific re ..."
Cited by 8 (6 self).
Abstract:
We propose a modular approach to the development of a computerized adaptive practice system for learning of facts in areas with widely varying prior knowledge: decomposing the system into estimation of prior knowledge, estimation of current knowledge, and selection of questions. We describe a specific realization of the system for geography learning and use data from the developed system to evaluate different student models for knowledge estimation. We argue that variants of the Elo rating system and Performance Factor Analysis are suitable for this kind of educational system, as they provide good accuracy and at the same time are easy to apply in an online system.
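The Elo-style knowledge estimation the abstract mentions can be sketched as a student-item rating update. This is a generic sketch under simple assumptions (logistic prediction from the skill-difficulty gap, fixed update weight `k`), not the specific variant the paper evaluates:

```python
import math

def elo_predict(skill, difficulty):
    """Predicted probability of a correct answer from the skill-difficulty gap."""
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

def elo_update(skill, difficulty, correct, k=0.4):
    """Move both estimates toward the observed outcome: a correct answer
    raises the student's skill and lowers the item's difficulty estimate."""
    p = elo_predict(skill, difficulty)
    err = (1.0 if correct else 0.0) - p
    return skill + k * err, difficulty - k * err

skill, diff = 0.0, 0.0                            # uninformed starting estimates
skill, diff = elo_update(skill, diff, correct=True)
```

Because each update touches only one student and one item, the scheme fits the online setting the paper targets: no refitting over the whole history is needed when a new response arrives.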