ERIC Number: ED590404
Record Type: Non-Journal
Publication Date: 2018
Pages: 50
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Automated Scoring of Students' Small-Group Discussions to Assess Reading Ability
Kosh, Audra E.; Greene, Jeffrey A.; Murphy, P. Karen; Burdick, Hal; Firetto, Carla M.; Elmore, Jeff
Grantee Submission
We explored the feasibility of using automated scoring to assess upper-elementary students' reading ability through analysis of transcripts of students' small-group discussions about texts. Participants included 35 fourth-grade students across two classrooms that engaged in a literacy intervention called Quality Talk. During the course of one school year, data were collected at ten time points for a total of 327 student-text encounters, with a different text discussed at each time point. To explore the possibility of automated scoring, we considered which quantitative discourse variables (e.g., variables measuring language sophistication and latent semantic analysis variables) were the strongest predictors of scores on a multiple-choice and constructed-response reading comprehension test. Convergent validity evidence was collected by comparing automatically calculated quantitative discourse features to scores on a reading fluency test. After we examined a variety of discourse features using multilevel modeling, measures of word rareness and word diversity emerged as the most promising variables for automated scoring of students' discussions. [This paper was published in "Educational Measurement: Issues and Practice" v37 p20-34 2018 (EJ1183280).]
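To make the two highlighted feature types concrete, below is a minimal sketch of how word diversity and word rareness could be computed from a discussion transcript. This is an illustration only, not the authors' actual pipeline: it assumes word diversity as a type-token ratio and word rareness as mean negative log corpus frequency, and the `word_freq` lookup table is a hypothetical stand-in for whatever frequency source the study used.

import math
import re

def discourse_features(transcript: str, word_freq: dict) -> dict:
    """Compute two illustrative transcript-level discourse features.

    Assumptions (not specified by the source): `transcript` is one
    student's discussion turns as plain text, and `word_freq` maps
    lowercase words to relative corpus frequencies.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {"word_diversity": 0.0, "word_rareness": 0.0}
    # Word diversity: type-token ratio (unique words / total words).
    diversity = len(set(words)) / len(words)
    # Word rareness: mean negative log frequency; rarer words score
    # higher. Unseen words fall back to a small floor frequency.
    floor = 1e-7
    rareness = sum(-math.log(word_freq.get(w, floor)) for w in words) / len(words)
    return {"word_diversity": diversity, "word_rareness": rareness}

# Example usage with a toy frequency table:
freqs = {"the": 0.05, "students": 0.001, "discussed": 0.0005,
         "photosynthesis": 1e-6}
print(discourse_features("The students discussed photosynthesis.", freqs))

Features like these would then serve as predictors of comprehension scores in a multilevel model (e.g., encounters nested within students), per the analysis the abstract describes.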
Publication Type: Reports - Research
Education Level: Elementary Education; Grade 4; Intermediate Grades
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305A130031