ERIC Number: EJ1028877
Record Type: Journal
Publication Date: 2014
Reference Count: 6
Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: Orchestrating the Dance between Technology and the Teacher
Massey, Chris L.; Gambrell, Linda B.
Literacy Research and Instruction, v53 n2 p101-103 2014
Literacy educators and researchers have long recognized the importance of increasing students' writing proficiency across age and grade levels. With the release of the Common Core State Standards (CCSS), new and greater emphasis is being placed on writing in the K-12 curriculum. Educators, as well as the authors of the CCSS, agree that writing is an essential and integral part of preparing students for college and career readiness. One major hurdle in the effective teaching and assessment of writing is that educators and researchers continue to struggle to find valid and reliable ways to document students' progress in writing ability. As former K-12 classroom teachers and current university professors, the authors are acutely aware that the assessment of student writing is messy, confusing, and extremely time consuming. The authors are intrigued that the Writing Ability Developmental Scale (Burdick et al., 2013) has the potential to allow teachers and researchers to "easily and quickly estimate a writer's ability by using multiple writing occasions" (p. 259). Through the affordances of technology, the Writing Ability Developmental Scale has the potential to provide new insights for both research and practice about the development of students' writing proficiency. In "Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study," Burdick et al. (2013) sought to measure students' writing ability, which they defined as an individual trait indicating one's power to compose written text. They explored the measurement validity of the Writing Ability Developmental Scale by comparing the computer scale's results to human evaluations; that is, they compared the writing proficiency scores assigned by human evaluators to those generated by a computer program based on Lexile scores. The study was well constructed, and Burdick et al. (2013) thoughtfully and thoroughly address both reliability and validity issues, as well as important limitations of the scale. The authors agree with their conclusion that the Writing Ability Developmental Scale can serve as one tool to move the field forward with respect to assessing students' writing development across time. This article presents the authors' responses to Burdick et al. (2013).
Descriptors: Writing Evaluation, State Standards, Instructional Effectiveness, Writing Ability, Student Evaluation, Measurement Techniques, Computer Assisted Testing, Reliability, Validity, Scores, Comparative Analysis, Formative Evaluation, Feedback (Response), Reader Response
Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site: http://www.tandf.co.uk/journals
Publication Type: Journal Articles; Opinion Papers; Reports - Evaluative
Education Level: Elementary Secondary Education
Authoring Institution: N/A