Showing all 7 results
Peer reviewed
Klugman, Emma M.; Ho, Andrew D. – Educational Measurement: Issues and Practice, 2020
State testing programs regularly release previously administered test items to the public. We provide an open-source recipe for state, district, and school assessment coordinators to combine these items flexibly to produce scores linked to established state score scales. These would enable estimation of student score distributions and achievement…
Descriptors: Testing Programs, State Programs, Test Items, Scores
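The authors' recipe is distributed separately; as a rough illustration of the underlying idea only, the sketch below scores one response pattern using fixed, already-calibrated 2PL item parameters (all values hypothetical), which is one common way released items can yield scores tied to an established scale.

```python
import numpy as np

def eap_score(responses, a, b, theta_grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate under a 2PL model with fixed item parameters.

    responses : 0/1 array of item scores
    a, b      : published discrimination and difficulty parameters
                (assumed already on the state's reporting scale)
    """
    # P(correct | theta) for every grid point and item
    p = 1.0 / (1.0 + np.exp(-a * (theta_grid[:, None] - b)))
    # Likelihood of the observed response pattern at each grid point
    like = np.prod(np.where(responses, p, 1.0 - p), axis=1)
    # Standard-normal prior over theta
    prior = np.exp(-0.5 * theta_grid ** 2)
    post = like * prior
    return float(np.sum(theta_grid * post) / np.sum(post))

# Hypothetical released-item parameters and one student's responses
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.7, 1.2])
responses = np.array([1, 1, 0, 1])
print(eap_score(responses, a, b))
```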
Peer reviewed
Ferrara, Steve; Svetina, Dubravka; Skucha, Sylvia; Davidson, Anne H. – Educational Measurement: Issues and Practice, 2011
Items located at and below the Proficient cut score on a test score scale define the content-area knowledge and skills required to achieve proficiency. Put another way, examinees who perform at the Proficient level on a test can be expected to demonstrate mastery of most of the knowledge and skills represented by the items at…
Descriptors: Knowledge Level, Mathematics Tests, Program Effectiveness, Inferences
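The item-mapping idea can be made concrete with a small sketch. The parameters, cut score, and response-probability criterion below are hypothetical, and the procedure is a generic 2PL item map rather than the authors' exact method: items whose mapped locations fall at or below the Proficient cut are the ones describing what Proficient examinees can be expected to do.

```python
import numpy as np

# Hypothetical 2PL item parameters (a = discrimination, b = difficulty)
# assumed to sit on the same theta scale as the Proficient cut score.
items = {
    "fractions_01": (1.1, -0.8),
    "ratio_02":     (0.9,  0.1),
    "linear_eq_03": (1.4,  0.6),
    "functions_04": (1.2,  1.3),
}
PROFICIENT_CUT = 0.5   # hypothetical cut on the theta scale
RP = 0.67              # response-probability criterion often used in item mapping

def rp_location(a, b, rp=RP):
    """Theta at which the probability of a correct response equals rp (2PL)."""
    return b + np.log(rp / (1.0 - rp)) / a

# Items mapped at or below the cut describe skills Proficient examinees have mastered.
at_or_below_cut = {name: round(rp_location(a, b), 2)
                   for name, (a, b) in items.items()
                   if rp_location(a, b) <= PROFICIENT_CUT}
print(at_or_below_cut)
```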
Peer reviewed
Sinharay, Sandip; Dorans, Neil J.; Liang, Longjuan – Educational Measurement: Issues and Practice, 2011
Over the past few decades, those who take tests in the United States have exhibited increasing diversity with respect to native language. Standard psychometric procedures for ensuring item and test fairness that have existed for some time were developed when test-taking groups were predominantly native English speakers. A better understanding of…
Descriptors: Test Bias, Testing Programs, Psychometrics, Language Proficiency
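One of the standard fairness procedures alluded to here is Mantel-Haenszel DIF analysis. The sketch below is a generic implementation on simulated data (all values hypothetical), not the authors' analysis: it stratifies examinees by total score and compares the odds of success for a reference group (e.g., native English speakers) and a focal group.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    item  : 0/1 responses to the studied item
    total : matching variable (e.g., total test score)
    group : 0 = reference group, 1 = focal group
    """
    num = den = 0.0
    for k in np.unique(total):
        s = total == k
        ref, foc = s & (group == 0), s & (group == 1)
        n = s.sum()
        if ref.sum() == 0 or foc.sum() == 0:
            continue  # stratum contributes nothing without both groups
        A, B = item[ref].sum(), (1 - item[ref]).sum()  # reference right/wrong
        C, D = item[foc].sum(), (1 - item[foc]).sum()  # focal right/wrong
        num += A * D / n
        den += B * C / n
    if den == 0:
        return float("nan")
    alpha = num / den              # common odds ratio across strata
    return -2.35 * np.log(alpha)   # ETS delta scale; values near 0 mean little DIF

# Hypothetical data: 200 examinees with total scores, group labels, item responses
rng = np.random.default_rng(0)
total = rng.integers(0, 20, 200)
group = rng.integers(0, 2, 200)
item = (rng.random(200) < 0.3 + 0.02 * total).astype(int)
print(mantel_haenszel_dif(item, total, group))
```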
Peer reviewed
Johnstone, Christopher J.; Thompson, Sandra J.; Bottsford-Miller, Nicole A.; Thurlow, Martha L. – Educational Measurement: Issues and Practice, 2008
Test items undergo multiple iterations of review before states and vendors deem them acceptable for placement in a live statewide assessment. This article reviews three approaches that can add validity evidence to states' item review processes. The first is a structured sensitivity review that focuses on universal design…
Descriptors: Test Items, Disabilities, Test Construction, Testing Programs
Peer reviewed
Lane, Suzanne; Parke, Carol S.; Stone, Clement A. – Educational Measurement: Issues and Practice, 1998
Provides a general framework for examining the consequences of assessment programs, especially statewide programs that intend to improve student learning by holding schools accountable. The framework is intended for use with programs using performance-based tasks but can be used with programs using traditional item formats as well. (SLD)
Descriptors: Accountability, Educational Assessment, Elementary Secondary Education, Performance Based Assessment
Peer reviewed
Willis, John A. – Educational Measurement: Issues and Practice, 1990
The Learning Outcome Testing Program of the West Virginia Department of Education is designed to provide public school teachers/administrators with test questions matching learning outcomes. The approach, software selection, results of pilot tests with teachers in 13 sites, and development of test items for item banks are described. (SLD)
Descriptors: Classroom Techniques, Computer Assisted Testing, Computer Managed Instruction, Elementary Secondary Education
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
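Method (2), the area between item characteristic curves, is straightforward to illustrate. The sketch below approximates the unsigned area between two 3PL curves on a theta grid; the item parameters are hypothetical and the grid bounds are a common convention rather than anything prescribed by the article.

```python
import numpy as np

def icc(theta, a, b, c=0.0):
    """3PL item characteristic curve (c = 0 reduces it to the 2PL)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def area_between_iccs(params_ref, params_focal, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between reference- and focal-group ICCs,
    approximated on an evenly spaced theta grid (one of the classic
    IRT-based bias indices reviewed in the article)."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc(theta, *params_ref) - icc(theta, *params_focal))
    return float(np.sum(gap) * (theta[1] - theta[0]))

# Hypothetical parameters for the same item calibrated in two groups
print(area_between_iccs((1.2, 0.0, 0.2), (1.2, 0.4, 0.2)))
```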