Showing 1 to 15 of 56 results
Peer reviewed
Liu, Ou Lydia; Brew, Chris; Blackmore, John; Gerard, Libby; Madhok, Jacquie; Linn, Marcia C. – Educational Measurement: Issues and Practice, 2014
Content-based automated scoring has been applied in a variety of science domains. However, many prior applications involved simplified scoring rubrics without considering rubrics representing multiple levels of understanding. This study tested a concept-based scoring tool for content-based scoring, c-rater™, for four science items with rubrics…
Descriptors: Science Tests, Test Items, Scoring, Automation
Peer reviewed
Higgins, Derrick; Heilman, Michael – Educational Measurement: Issues and Practice, 2014
As methods for automated scoring of constructed-response items become more widely adopted in state assessments, and are used in more consequential operational configurations, it is critical that their susceptibility to gaming behavior be investigated and managed. This article provides a review of research relevant to how construct-irrelevant…
Descriptors: Automation, Scoring, Responses, Test Wiseness
Peer reviewed
Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay – Educational Measurement: Issues and Practice, 2012
A framework for evaluation and use of automated scoring of constructed-response tasks is provided that entails both evaluation of automated scoring as well as guidelines for implementation and maintenance in the context of constantly evolving technologies. Consideration of validity issues and challenges associated with automated scoring are…
Descriptors: Automation, Scoring, Evaluation, Guidelines
Peer reviewed
Suto, Irenka – Educational Measurement: Issues and Practice, 2012
Internationally, many assessment systems rely predominantly on human raters to score examinations. Arguably, this facilitates the assessment of multiple sophisticated educational constructs, strengthening assessment validity. It can introduce subjectivity into the scoring process, however, engendering threats to accuracy. The present objectives…
Descriptors: Evaluation Methods, Scoring, Qualitative Research, Protocol Analysis
Peer reviewed
Crisp, Victoria – Educational Measurement: Issues and Practice, 2012
In the United Kingdom, the majority of national assessments involve human raters. The processes by which raters determine the scores to award are central to the assessment process and affect the extent to which valid inferences can be made from assessment outcomes. Thus, understanding rater cognition has become a growing area of research in the…
Descriptors: Foreign Countries, Scores, Protocol Analysis, Social Influences
Peer reviewed
Myford, Carol M. – Educational Measurement: Issues and Practice, 2012
Over the last several decades, researchers have studied many and varied aspects of rater cognition. Those interested in pursuing basic research have focused on gaining an understanding of raters' thought processes as they score different types of performances and products, striving to understand how raters' mental representations and the cognitive…
Descriptors: Evidence, Validity, Cognitive Processes, Models
Peer reviewed
Bejar, Isaac I. – Educational Measurement: Issues and Practice, 2012
The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…
Descriptors: Scores, Inferences, Validity, Scoring
Peer reviewed
Goldberg, Gail Lynn – Educational Measurement: Issues and Practice, 2012
The engagement of teachers as raters to score constructed response items on assessments of student learning is widely claimed to be a valuable vehicle for professional development. This paper examines the evidence behind those claims from several sources, including research and reports over the past two decades, information from a dozen state…
Descriptors: Academic Achievement, Performance Based Assessment, Scoring, Professional Development
Peer reviewed
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Peer reviewed
Dorans, Neil J. – Educational Measurement: Issues and Practice, 2012
Views on testing--its purpose and uses and how its data are analyzed--are related to one's perspective on test takers. Test takers can be viewed as learners, examinees, or contestants. I briefly discuss the perspective of test takers as learners. I maintain that much of psychometrics views test takers as examinees. I discuss test takers as a…
Descriptors: Testing, Test Theory, Item Response Theory, Test Reliability
Peer reviewed
Solano-Flores, Guillermo; Li, Min – Educational Measurement: Issues and Practice, 2009
We addressed the challenge of scoring cognitive interviews in research involving multiple cultural groups. We interviewed 123 fourth- and fifth-grade students from three cultural groups to probe how they related a mathematics item to their personal lives. Item meaningfulness--the tendency of students to relate the content and/or context of an item…
Descriptors: Generalizability Theory, Scoring, Error of Measurement, Grade 5
Peer reviewed
Royal-Dawson, Lucy; Baird, Jo-Anne – Educational Measurement: Issues and Practice, 2009
Hundreds of thousands of raters are recruited internationally to score examinations, but little research has been conducted on the selection criteria for these raters. Many countries insist upon teaching experience as a selection criterion and this has frequently become embedded in the cultural expectations surrounding the tests. Shortages in…
Descriptors: National Curriculum, Scoring, Foreign Countries, Teaching Experience
Peer reviewed
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
Peer reviewed
Sykes, Robert C.; Ito, Kyoko; Wang, Zhen – Educational Measurement: Issues and Practice, 2008
Student responses to a large number of constructed response items in three Math and three Reading tests were scored on two occasions using three ways of assigning raters: single reader scoring, a different reader for each response (item-specific), and three readers each scoring a rater item block (RIB) containing approximately one-third of a…
Descriptors: Test Items, Mathematics Tests, Reading Tests, Scoring
Peer reviewed
Llosa, Lorena – Educational Measurement: Issues and Practice, 2008
Using an argument-based approach to validation, this study examines the quality of teacher judgments in the context of a standards-based classroom assessment of English proficiency. Using Bachman's (2005) assessment use argument (AUA) as a framework for the investigation, this paper first articulates the claims, warrants, rebuttals, and backing…
Descriptors: Protocol Analysis, Multitrait Multimethod Techniques, Validity, Scoring