ERIC Number: EJ872395
Record Type: Journal
Publication Date: 2010-Jan-27
Pages: 2
Abstractor: ERIC
ISBN: N/A
ISSN: 0277-4232
EISSN: N/A
Open-Ended Test Items Pose Challenges
Sawchuk, Stephen
Education Week, v29 n19, pp. 1, 11, Jan 27 2010
Most experts in the testing community have presumed that the $350 million promised by the U.S. Department of Education to support common assessments would promote tests that make greater use of open-ended items capable of measuring higher-order critical-thinking skills. But as measurement experts consider the possibilities for an assessment system based more heavily on such questions, they are also beginning to reflect on the practical obstacles to putting one in place. The issues now on the table include the added expense of those items, as well as sensitive questions about who should be charged with scoring them and whether they will prove reliable enough for high-stakes decisions. Also being confronted are matters of governance: which entities would actually "own" any new assessments created in common by states, and whether working in state consortia would generate savings.

Assessment experts caution that open-ended test items carry a number of practical challenges. For one, items that measure higher-order skills are generally more expensive to devise, depending on how extensive they are and how much of the total test they make up. Questions that require students to engage in extensive writing or to defend their answer choices are often "memorable," meaning they cannot be reused for many years and must be replaced in the meantime.

The scoring process for open-ended items is also far more complicated than feeding a stack of test papers into a scanner. It relies on "hand scorers" who are trained according to a scoring guide for each question and a set of "anchor papers" that give examples of performance on the item at each level. Each open-ended item typically goes through multiple reviews to ensure consistent scoring. Depending on an item's complexity and how long it takes to score, the costs can climb dramatically: a short constructed-response item with four possible scores might take one minute to score, while an extended performance-based or portfolio item might take up to an hour. With test scorers paid in the range of $12 to $15 an hour, that works out to roughly 20 to 25 cents per student for the short item but $12 to $15 per student for the extended one.

Cost aside, open-ended results cannot be returned as quickly or efficiently as machine-scored ones, which could make it hard to turn results around on the quick timeline envisioned by the No Child Left Behind (NCLB) law and current state accountability systems.
Editorial Projects in Education. 6935 Arlington Road Suite 100, Bethesda, MD 20814-5233. Tel: 800-346-1834; Tel: 301-280-3100; e-mail: customercare@epe.org; Web site: http://www.edweek.org/info/about/
Publication Type: Journal Articles; Reports - Descriptive
Education Level: Elementary Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Laws, Policies, & Programs: No Child Left Behind Act 2001
Grant or Contract Numbers: N/A