Peer reviewed
ERIC Number: EJ1039797
Record Type: Journal
Publication Date: 2014
Pages: 6
Abstractor: ERIC
Reference Count: 19
ISSN: ISSN-0895-7347
Second-Generation Challenges for Making Content Assessments Accessible for ELLs
Kopriva, Rebecca J.
Applied Measurement in Education, v27 n4 p301-306 2014
In this commentary, Rebecca Kopriva examines the articles in this special issue by drawing on her experience from three series of investigations examining how English language learners (ELLs) and other students perceive what test items ask and how they can successfully represent what they know. The first series examined the effect of different testing conditions on the academic performance of ELLs with different needs (Kopriva & Mislevy, 2005). The second series, called STELLA, developed and investigated an empirically based online system that produces profiles of individual ELLs populated by information about characteristics identified as critical to making sound test accommodation decisions (Kopriva, Emick, Hipolito-Delgado, & Cameron, 2007; Koran, Kopriva, Emick, Monroe, & Garavaglia, 2006). The third series, collectively called ONPAR (e.g., Kopriva et al., 2013; Wright & Kopriva, 2009), used dynamic multisemiotic representations and novel response spaces in online environments to measure challenging constructs in mathematics and science with relatively little language. Kopriva points out that the articles in this issue highlight some key outstanding considerations as assessment accessibility research for linguistic minorities enters its second generation. As the authors suggest, their recommendations are relevant for both large-scale and classroom testing.
She goes on to say that, in all, the articles make many excellent points, but three stand out for comment: (1) the articles neither identify a set of variables that impact access nor discuss how these variables might be prioritized or applied; (2) to prepare for large-scale and classroom assessments in which the dynamic interplay of multiple features becomes an integral part of how questions are asked and answered, assessment techniques, as a methodology, will most likely have to be expanded to handle the increased variations; and (3) these more nuanced variations and novel methodologies will need to be properly evaluated, both in terms of the validity of inferences within the language minority omnibus group and the comparability of score interpretations between language minority students and the general student population. After discussing each of these points, she concludes that the questions and considerations raised in this issue strongly suggest that the next generation of accessibility research promises to be insightful and fruitful for improving the academic testing of language minority students. She asks the authors to consider how this thinking might be generalized to improve testing for the general population of test-takers as well.
Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site:
Publication Type: Journal Articles; Reports - Evaluative; Opinion Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A