ERIC Number: EJ1195519
Record Type: Journal
Publication Date: 2018-Oct
Pages: 38
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 2157-2100
Most of the Time, It Works Every Time: Limitations in Refining Domain Models with Learning Curves
Goldin, Ilya; Galyardt, April
Journal of Educational Data Mining, v10 n2 p55-92 Oct 2018
Data from student learning yield learning curves that, ideally, demonstrate improvement in student performance over time. Existing data mining methods can leverage these data to characterize and improve the domain models that support a learning environment, and these methods have been validated both with already-collected data and in close-the-loop studies that actually modify instruction. However, these methods may be less general than previously thought, because they have not been evaluated under a wide range of data conditions. We describe a problem space of 90 distinct scenarios within which data mining methods may be applied to recognize posited domain model improvements. The scenarios are defined by two kinds of domain model modifications, five kinds of learning curves, and 25 types of skill combinations under three ways of interleaving skill practice. These extensive tests are made possible by the use of simulated data. In each of the 90 scenarios, we test three predictive models that aim to recognize domain model improvements, and we evaluate their performance. Results show that the conditions under which an automated method tests a proposed domain model improvement can drastically affect the method's accuracy in accepting or rejecting the proposed improvement; these conditions include learning curve shapes, the method of interleaving, the choice of predictive model, and the threshold for predictive model comparison. Further, results show consistent problems with accuracy in accepting a proposed improvement under the Additive Factors Model, made popular by the DataShop software. Other models, namely Performance Factors Analysis and Recent-Performance Factors Analysis, are much more accurate, but still struggle under some conditions, such as when distinguishing curves from two skills where students have a high rate of errors after substantial practice. These findings bear on how to evaluate proposed refinements to a domain model. In light of these results, historical attempts to test domain model refinements may need to be reexamined.
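For context, the Additive Factors Model named in the abstract is a logistic model in which the log-odds of a correct response sum a student proficiency term with, for each skill the item exercises, a skill easiness term plus a learning-rate term scaled by prior practice opportunities. A minimal sketch of that prediction follows; the function and parameter names are illustrative, not from the article.

```python
import math

def afm_p_correct(theta, skills, opportunities):
    """Additive Factors Model: probability a student answers an item correctly.

    theta         -- student proficiency (logit scale)
    skills        -- list of (beta_k, gamma_k): easiness and learning rate per skill
    opportunities -- prior practice counts T_k, one per skill, same order
    """
    logit = theta
    for (beta, gamma), t in zip(skills, opportunities):
        # Each skill contributes its easiness plus learning gain from practice.
        logit += beta + gamma * t
    return 1.0 / (1.0 + math.exp(-logit))
```

With a positive learning rate, the predicted probability of success rises with each practice opportunity, which is what produces the model's smooth learning curves.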
Descriptors: Predictor Variables, Models, Learning Processes, Matrices, Mastery Learning, Evaluation Methods, Factor Analysis, Computation, Bayesian Statistics
International Educational Data Mining Society. e-mail: jedm.editor@gmail.com; Web site: http://jedm.educationaldatamining.org/index.php/JEDM
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A