Peer reviewed
ERIC Number: EJ865750
Record Type: Journal
Publication Date: 2009-Jun
Pages: 14
Abstractor: As Provided
Reference Count: 9
ISBN: N/A
ISSN: 0013-1881
Investigation of the Factors Affecting the Pre-Test Effect in National Curriculum Science Assessment Development in England
Pyle, Katie; Jones, Emily; Williams, Chris; Morrison, Jo
Educational Research, v51 n2 p269-282 Jun 2009
Background: All national curriculum tests in England are pre-tested as part of the development process. Differences in pupil performance between pre-test and live test are consistently found; this difference has been termed the pre-test effect. Understanding the pre-test effect is essential in the test development and selection processes and in the setting of cut scores for the national curriculum tests in England. This study looked at the national science tests for 11-year-olds in England (termed key stage 2).

Purpose: The aim of this study was to investigate whether the pre-test effect differs when the item data is divided in three ways: by pupil level, item level and item type.

Sample: Second pre-test data was captured from Year 6 (aged 11) pupils (2006: n = 1010 for test A, n = 1002 for test B; 2007: n = 895 for test A, n = 889 for test B) from a representative population of state-funded schools in England as defined by school type, size of year group, region and achievement. Live test data was captured as part of the Monitoring Curriculum and Assessment Project, in which over 1000 nationally representative schools in England taking part in a study for the Centre for Formative Assessment Studies (CFAS) sent in the live test results from a sample of Year 6 pupils (2006: n = 2732 for both tests A and B; 2007: n = 2156 for both tests A and B).

Design and methods: The study looked at item level data from pre-test 2 and the live test for each of the years 2006 and 2007 for the key stage 2 science tests and compared the performance of the items by grouping the data in the three specific ways: by pupil level, item level and item type. Average facilities were calculated for the various subsets of items and pupils, and the differences between the groupings were compared using either t-tests or analysis of variance (ANOVA).
Results: Higher achieving pupils show a significantly greater pre-test effect than lower achieving ones, while the lowest achieving pupils do not show a significant pre-test effect at all. The most difficult items show a greater pre-test effect than easier items. Open response items requiring a short answer show a greater pre-test effect than open response items requiring a long answer, and these in turn show a greater pre-test effect than closed response items.

Conclusions: The study questions whether knowledge of test characteristics such as item level and item type, and knowledge about the sample (i.e. levels of pupils), should affect the pre-test effect used in level setting. Item levels and pupil abilities are already accounted for in current pre-test effect predictions by means of test and sample specifications; proportions of items of different formats, however, are not specified. As the study showed a different pre-test effect for different item types, it is suggested that the make-up of the tests with regard to item types be considered during the level setting process. (Contains 7 figures, 2 tables and 3 notes.)
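The analysis described in the abstract rests on two quantities: an item's facility (the average proportion of available marks achieved) and the pre-test effect (the difference in facility between the live test and the pre-test), with group differences then tested via t-tests or ANOVA. A minimal sketch of these calculations follows; it is an illustration under stated assumptions, not the authors' code, and the function names are hypothetical (a Welch two-sample t statistic stands in for the unspecified t-test variant):

```python
import statistics
from math import sqrt

def facility(score_proportions):
    # Facility of an item: mean proportion of available marks achieved
    # by the pupils who attempted it (values in [0, 1]).
    return sum(score_proportions) / len(score_proportions)

def pretest_effect(pretest_scores, live_scores):
    # Pre-test effect for one item: live facility minus pre-test
    # facility (positive when pupils do better on the live test).
    return facility(live_scores) - facility(pretest_scores)

def welch_t(group_a, group_b):
    # Welch's two-sample t statistic (unequal variances assumed),
    # e.g. comparing per-item pre-test effects of two item types.
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    return (mean_a - mean_b) / sqrt(var_a / len(group_a) + var_b / len(group_b))
```

In use, one would compute `pretest_effect` per item, collect the effects by grouping (pupil level, item level or item type), and pass two groupings to `welch_t` (or all groupings to an ANOVA) to test whether the effects differ.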
Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site: http://www.tandf.co.uk/journals
Publication Type: Journal Articles; Reports - Research
Education Level: Elementary Education; Intermediate Grades
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Location: United Kingdom (England)