Peer reviewed
ERIC Number: EJ1187320
Record Type: Journal
Publication Date: 2018
Pages: 21
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 1929-7750
Evaluating Predictive Models of Student Success: Closing the Methodological Gap
Gardner, Josh; Brooks, Christopher
Journal of Learning Analytics, v5 n2 p105-125 2018
Model evaluation -- the process of making inferences about the performance of predictive models -- is a critical component of predictive modelling research in learning analytics. We survey the state of the practice with respect to model evaluation in learning analytics and find that it overwhelmingly relies on naïve methods, or on statistical tests that are not appropriate for predictive model evaluation. We then critically compare null hypothesis significance testing (NHST) with a preferred Bayesian method for model evaluation. Finally, we apply three methods -- the naïve average commonly used in learning analytics, NHST, and the Bayesian approach -- to a predictive modelling experiment on a large set of MOOC data. We compare 96 different predictive models, spanning different feature sets, statistical modelling algorithms, and hyperparameter settings for each, and use this case study to demonstrate the different experimental conclusions these evaluation techniques yield.
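To make the contrast concrete, the Python sketch below applies all three evaluation styles named in the abstract to synthetic per-fold cross-validation accuracies. This is an illustration only, not the authors' code: the fold accuracies are made up, the Bayesian step uses the correlated t-test of Corani and Benavoli (2015) as one common Bayesian choice for this setting, and the 1% region of practical equivalence (ROPE) is an arbitrary assumption.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    k = 10                                 # 10-fold cross-validation

    # Hypothetical per-fold accuracies for two competing models.
    acc_a = rng.normal(0.82, 0.02, size=k)
    acc_b = rng.normal(0.80, 0.02, size=k)
    d = acc_a - acc_b                      # per-fold accuracy differences

    # 1. Naive average: pick whichever mean is higher; no uncertainty estimate.
    print("naive means:", acc_a.mean(), acc_b.mean())

    # 2. NHST: paired t-test on the fold differences (ignores the correlation
    # induced by overlapping training sets across folds).
    t, p = stats.ttest_rel(acc_a, acc_b)
    print(f"NHST: t={t:.2f}, p={p:.3f}")

    # 3. Bayesian correlated t-test (Corani & Benavoli, 2015): the posterior
    # over the mean difference is a Student-t with k-1 degrees of freedom,
    # with variance inflated to account for fold overlap.
    rho = 1.0 / (k - 1)                    # test/train size ratio in k-fold CV
    var = d.var(ddof=1) * (1.0 / k + rho / (1.0 - rho))
    post = stats.t(df=k - 1, loc=d.mean(), scale=np.sqrt(var))

    rope = 0.01                            # practical equivalence: |diff| < 1%
    print("P(A better):", 1 - post.cdf(rope))
    print("P(practically equivalent):", post.cdf(rope) - post.cdf(-rope))
    print("P(B better):", post.cdf(-rope))

Unlike the naïve average (a point comparison) or the p-value (evidence against a null of zero difference), the Bayesian posterior directly yields probabilities for each of the three practical conclusions, which is the kind of difference in experimental conclusions the case study examines.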
Society for Learning Analytics Research. 121 Pointe Marsan, Beaumont, AB T4X 0A2, Canada. Tel: +61-429-920-838; e-mail: info@solaresearch.org; Web site: http://learning-analytics.info/journals/index.php/JLA/
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A