Showing all 4 results
Peer reviewed | PDF full text available on ERIC
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies
Peer reviewed | PDF full text available on ERIC
Dascalu, Mihai; Allen, Laura K.; McNamara, Danielle S.; Trausan-Matu, Stefan; Crossley, Scott A. – Grantee Submission, 2017
Dialogism provides the grounds for building a comprehensive model of discourse, focused on the multiplicity of perspectives (i.e., voices). Dialogism can be present in any type of text, with voices becoming themes or recurrent topics emerging from the discourse. In this study, we examine the extent to which differences between…
Descriptors: Dialogs (Language), Protocol Analysis, Discourse Analysis, Automation
Snow, Erica L.; Allen, Laura K.; Jacovina, Matthew E.; Crossley, Scott A.; Perret, Cecile A.; McNamara, Danielle S. – Grantee Submission, 2015
Writing researchers have suggested that students who are perceived as strong writers (i.e., those who generate texts rated as high quality) demonstrate flexibility in their writing style. While anecdotally this has been a commonly held belief among researchers and educators, there is little empirical research to support this claim. This study…
Descriptors: Writing (Composition), Writing Strategies, Hypothesis Testing, Essays
Peer reviewed | PDF full text available on ERIC
Roscoe, Rod D.; Crossley, Scott A.; Snow, Erica L.; Varner, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may not be aligned with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider how and whether the…
Descriptors: Correlation, Essays, Scoring, Writing Evaluation