Peer reviewed
ERIC Number: EJ1179709
Record Type: Journal
Publication Date: 2018
Pages: 24
Abstractor: As Provided
ISSN: ISSN-0895-7347
Designing, Evaluating, and Deploying Automated Scoring Systems with Validity in Mind: Methodological Design Decisions
Rupp, André A.
Applied Measurement in Education, v31 n3 p191-214 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases of automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring, but many of the principles have implications for principled reasoning and workflow management in other use contexts. The overall workflow is described as a series of five phases, each with two critical sub-phases and a large number of associated methodological design decisions: assessment design, linguistic component design, model design, model validation, and operational deployment. Brief examples illustrate the considerations that must be carefully weighed for each of these design decisions in the overall decision-making process for the system, unveiling the complexities that underlie this work. The article closes with reflections on resource demands as well as recommendations for best practices for interdisciplinary teams who engage in this work, underscoring how it blends scientific rigor with artful practice.
Routledge. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site:
Publication Type: Journal Articles; Reports - Descriptive
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A