Results 1–8 of 8
IRT modeling of tutor performance to predict end-of-year exam scores. Working paper, 2006
Abstract

Cited by 3 (2 self)
Interest in end-of-year accountability exams has increased dramatically since the passing of the NCLB law in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well they will perform on end-of-year exams. In this paper we use student performance on the Assistment System, an online mathematics tutor, to show that replacing percent correct with an Item Response Theory (IRT) estimate of student proficiency leads to better fitting prediction models. In addition, other tutor performance metrics are used to further increase prediction accuracy. Finally we calculate prediction error bounds to attain an absolute measure to which our models can be compared.
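To illustrate the core idea of this abstract, here is a minimal sketch of replacing percent correct with an IRT proficiency estimate. The item parameters and response pattern below are made up for illustration; the two-parameter logistic (2PL) model itself is standard, though the paper's exact specification may differ.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given proficiency theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood proficiency estimate over a grid of theta values."""
    # log-likelihood of the full right/wrong pattern at each grid point
    P = p_correct(grid[:, None], a[None, :], b[None, :])
    ll = (responses * np.log(P) + (1 - responses) * np.log(1 - P)).sum(axis=1)
    return grid[np.argmax(ll)]

a = np.array([1.0, 1.5, 0.8, 1.2])   # discriminations (assumed values)
b = np.array([-1.0, 0.0, 0.5, 1.5])  # difficulties (assumed values)
resp = np.array([1, 1, 0, 0])        # a student answers the two easier items
theta_hat = estimate_theta(resp, a, b)
# Unlike percent correct (0.5 here), theta_hat reflects *which* items were
# answered correctly, via their difficulty and discrimination.
```

Two students with the same percent correct can thus receive different proficiency estimates, which is what makes the IRT estimate a richer predictor than the raw proportion.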
Clustering with Confidence: A Binning Approach
Abstract

Cited by 1 (0 self)
We present a plug-in method for estimating the cluster tree of a density. The method takes advantage of the ability to exactly compute the level sets of a piecewise constant density estimate. We then introduce clustering with confidence, an automatic pruning procedure that assesses significance of splits (and thereby clusters) in the cluster tree; the only user input required is the desired confidence level.
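The key observation in this abstract is that for a piecewise constant (e.g. histogram) density estimate, a level set is exactly a union of bins and so can be computed without approximation. A toy one-dimensional sketch of that observation, with names and thresholds of our own choosing rather than the paper's:

```python
import numpy as np

def histogram_level_set(data, lam, bins=20):
    """Return the bin intervals whose histogram-estimated density is >= lam.
    For a piecewise constant estimate this level set is exact."""
    counts, edges = np.histogram(data, bins=bins, density=True)
    keep = counts >= lam
    return [(edges[i], edges[i + 1]) for i in np.where(keep)[0]]

rng = np.random.default_rng(0)
# bimodal sample: two well-separated Gaussian components
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
high = histogram_level_set(x, lam=0.2)
# Connected components of `high` correspond to candidate clusters;
# raising lam walks up the cluster tree and splits it into branches.
```

Sweeping the level `lam` from 0 upward and tracking when connected components split is what builds the cluster tree that the pruning procedure then operates on.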
Skill Set Profile Clustering Based on Student Capability Vectors Computed From Online Tutoring Data
Abstract

Cited by 1 (0 self)
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. To address this, we introduce a capability matrix showing for each skill the proportion correct on all items tried by each student involving that skill. We apply variations of common clustering methods to this matrix and discuss conditioning on sparse subspaces. We demonstrate the feasibility and scalability of our method on several simulated datasets and illustrate the difficulties inherent in real data using a subset of online mathematics tutor data. We also comment on the interpretability and application of the results for teachers.
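The capability matrix described here is straightforward to compute from an item-to-skill mapping (a Q-matrix) and a response table. A small sketch with made-up data; function and variable names are ours, not the paper's:

```python
import numpy as np

# Q[i, k] = 1 if item i involves skill k (hypothetical Q-matrix)
Q = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]])

# R[s, i] = 1 correct, 0 incorrect, nan = item not attempted
R = np.array([[1.0, 0.0, 1.0, np.nan],
              [0.0, 1.0, 1.0, 1.0]])

def capability_matrix(R, Q):
    """B[s, k] = proportion correct among attempted items involving skill k."""
    n_students = R.shape[0]
    n_skills = Q.shape[1]
    B = np.zeros((n_students, n_skills))
    for s in range(n_students):
        for k in range(n_skills):
            mask = (Q[:, k] == 1) & ~np.isnan(R[s])  # attempted items using skill k
            B[s, k] = R[s, mask].mean() if mask.any() else 0.0
    return B

B = capability_matrix(R, Q)
# Rows of B are per-student skill capability vectors; clustering them
# (e.g. with k-means) yields the skill set profiles the abstract describes.
```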
Predicting Performance and Creating Better Student Proficiency Models by Improving Skill Codings, 2007
Abstract
Interest in end-of-year accountability exams has increased dramatically since the passing of the NCLB law in 2001. This push has impacted educational research in a wide variety of ways, including a strong desire to be able to model student work in order to make conclusive statements about what students know and how this relates to how they will perform on end-of-year standardized exams. This thesis will look at using item response theory (IRT) to estimate student proficiency. This estimated proficiency will then be used to build prediction models for end-of-year exam scores. Next, methods to improve a skills model will be explored. Models that account for learning over time will then be considered. Finally, I will compare various different approaches to modeling response data.
Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students. Educational Data Mining, 2009
Abstract
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach: cluster a capability matrix estimating each student’s individual skill knowledge to generate skill set profile clusters of students. We complement this approach with the introduction of an automatic subspace clustering method that first identifies skills on which students are well-separated prior to clustering smaller subspaces. This method also allows teachers to dictate the size and separation of the clusters, if need be, for practical reasons. We demonstrate the feasibility and scalability of our method on several simulated datasets and illustrate the difficulties inherent in real data using a subset of online mathematics tutor data.
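The subspace step described here can be caricatured in a few lines: keep only the skill columns on which students actually differ, then cluster in that smaller subspace. The variance criterion and threshold below are crude stand-ins of our own, not the paper's separation measure:

```python
import numpy as np

def separating_skills(B, min_var=0.05):
    """Indices of skill columns whose capability values vary enough across
    students to be useful for separating them (illustrative criterion)."""
    return np.where(B.var(axis=0) >= min_var)[0]

# Toy capability matrix: rows = students, columns = skills.
B = np.array([[0.9, 0.5, 0.1],
              [0.8, 0.5, 0.2],
              [0.2, 0.5, 0.9],
              [0.1, 0.5, 0.8]])

cols = separating_skills(B)  # skill 1 is uninformative: everyone scores 0.5
sub = B[:, cols]
# A standard clustering (e.g. k-means with k=2) on `sub` then recovers the
# two skill set profiles visible in skills 0 and 2, ignoring the noise of
# columns that separate no one.
```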
Incorporating Learning Over Time into the Cognitive Assessment Framework, 2010
Abstract
We propose a variety of models for incorporating learning over time into the cognitive assessment modeling framework. In two of the models, we use Item Response Theory (IRT; van der Linden and Hambleton 1997), where we assume that a continuous latent parameter measures a student’s general proficiency in the area of interest. In the other two models, we use Cognitive Diagnosis Models (CDMs; Rupp and Templin 2008), where we estimate whether students possess a set of skills as the latent student parameter. In all four models, we assume that students take multiple exams in the same content area over a period of time and that at each time point, we are interested in tracking their learning. Therefore, the models consider what the students knew at the previous time point when estimating their current knowledge. With this information, we believe we can make better predictions about end-of-year, high-stakes exam scores and inform teachers of areas where students are struggling. We may also be able to compare different methods of teaching to find ones that most promote learning and make some statements about the true rate and variability with which students learn, which could help teachers, researchers, and policy makers set more realistic goals for students. Each model is discussed both empirically and mathematically. In a simulation study of one model, the parameters describing student learning were recovered with 94.6% accuracy.
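As a generic illustration of the IRT variant of this idea (not the paper's exact specification), a longitudinal 2PL model can link proficiency across time points as follows:

```latex
% X_{ijt}: response of student i to item j at time t;
% a_j, b_j: item discrimination and difficulty;
% \theta_{it}: proficiency of student i at time t.
P(X_{ijt} = 1 \mid \theta_{it}) = \frac{1}{1 + e^{-a_j(\theta_{it} - b_j)}},
\qquad
\theta_{it} = \theta_{i,t-1} + \delta_{it},
```

where the increment $\delta_{it}$ captures learning between exams, so the estimate at time $t$ is conditioned on what the student knew at time $t-1$, as the abstract describes.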
hosted at
Abstract
Additional services and information for Educational and Psychological Measurement can be found at: