50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing all 9 results
Peer reviewed
Davis, Susan L.; Buckendahl, Chad W.; Plake, Barbara S. – Journal of Educational Measurement, 2008
As an alternative to adaptation, tests may also be developed simultaneously in multiple languages. Although the items on such tests could vary substantially, scores from these tests may be used to make the same types of decisions about different groups of examinees. The ability to make such decisions is contingent upon setting performance…
Descriptors: Test Results, Testing Programs, Multilingualism, Standard Setting
Peer reviewed
Buckendahl, Chad W.; Smith, Russell W.; Impara, James C.; Plake, Barbara S. – Journal of Educational Measurement, 2002
Compared simplified variations on the Angoff and Bookmark methods for setting cut scores on educational assessments with data from a grade 7 mathematics test (23 panelists in all). Although the Angoff method is more widely used, results show that the Bookmark method has some promising features. (SLD)
Descriptors: Cutting Scores, Educational Assessment, Evaluators, Junior High School Students
Peer reviewed
Plake, Barbara S.; Impara, James C.; Irwin, Patrick M. – Journal of Educational Measurement, 2000
Examined intra- and inter-rater consistency of item performance estimates from an Angoff standard setting over 2 years, with 29 panelists one year and 30 the next. Results provide evidence that item performance estimates were consistent within and across panels, within and across years. Factors that might have influenced this high degree of…
Descriptors: Evaluators, Prediction, Reliability, Standard Setting
Peer reviewed
Impara, James C.; Plake, Barbara S. – Journal of Educational Measurement, 1998
Sixth-grade teachers (n=26) estimated item performance for their students (724 total students) on a 50-item district-wide science test. Teachers were more accurate in estimating performance of the total group than of the borderline group, but in neither case was their accuracy high. Estimating proportion-correct values using the Angoff standard…
Descriptors: Difficulty Level, Elementary School Teachers, Grade 6, Intermediate Grades
Peer reviewed
Plake, Barbara S.; Impara, James C. – Journal of Educational Measurement, 1997
Two studies of variations of the Angoff method (W. Angoff, 1971) involving nine elementary school teachers in each case compared a yes/no estimation with a proportion correct estimation for setting cut scores. Both methods yielded essentially equal cut scores, but judges found the yes/no method easier to implement. (SLD)
Descriptors: Cutting Scores, Elementary Education, Elementary School Teachers, Estimation (Mathematics)
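The two Angoff variants compared above reduce to a simple computation: under the proportion-correct variant, each judge estimates, for every item, the probability that a minimally competent examinee would answer correctly; under the yes/no variant, each judge instead records a 1 or 0 for whether that examinee would get the item right. In both cases the cut score is the sum, over items, of the mean judge rating. A minimal sketch with invented ratings (the judges, items, and numbers are hypothetical, not data from the study):

```python
# Hypothetical illustration of two Angoff-style cut-score variants.
# All ratings below are invented for demonstration.

def angoff_cut_score(ratings):
    """Cut score = sum over items of the mean judge rating.

    ratings: one list per judge, one rating per item.
    Proportion-correct variant: ratings are probabilities in [0, 1].
    Yes/no variant: ratings are 1 (would answer correctly) or 0.
    """
    n_judges = len(ratings)
    n_items = len(ratings[0])
    return sum(
        sum(judge[i] for judge in ratings) / n_judges
        for i in range(n_items)
    )

# Three judges rating four items.
proportion = [
    [0.8, 0.6, 0.4, 0.9],
    [0.7, 0.5, 0.5, 0.8],
    [0.9, 0.6, 0.3, 0.7],
]
yes_no = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
]

print(round(angoff_cut_score(proportion), 2))
print(round(angoff_cut_score(yes_no), 2))
```

The yes/no variant asks judges a simpler question, which is consistent with the finding above that panelists found it easier to implement while arriving at essentially the same cut score.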
Peer reviewed
Plake, Barbara S.; And Others – Journal of Educational Measurement, 1982
Effects of item arrangement, test anxiety, and sex on a mathematics test taken by motivated, upper-division undergraduates and beginning graduate students were investigated. Results showed that males outperformed females when items were arranged from easy to hard. (Author/GK)
Descriptors: Academic Achievement, College Mathematics, Higher Education, Sex Differences
Peer reviewed
Plake, Barbara S.; And Others – Journal of Educational Measurement, 1994
The comparability of Angoff-based item ratings on a general education test battery made by judges from within-content and across-content domains was studied. Results with 26 college faculty judges indicate that, at least for some tests, item ratings might be essentially equivalent regardless of a judge's content specialty. (SLD)
Descriptors: College Faculty, Comparative Analysis, General Education, Higher Education
Peer reviewed
Plake, Barbara S.; Hoover, H. D. – Journal of Educational Measurement, 1979
An experiment investigated the extent to which the results of out-of-level testing may be biased because a child given an out-of-level test may have had a significantly different curriculum from that of children given in-level tests. Item analysis data suggested this was unlikely. (CTM)
Descriptors: Achievement Tests, Elementary Education, Elementary School Curriculum, Grade Equivalent Scores
Peer reviewed
Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weight each item's estimate received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
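The aggregation schemes compared above differ only in how much weight each item's estimate receives when raters' minimal pass levels are combined into a single passing score. A minimal sketch contrasting the simplest procedure (equal weights) with one possible differential weighting, reciprocal-variance weighting; the numbers and the specific weighting rule are illustrative assumptions, not the study's actual methods:

```python
# Hypothetical sketch: combining raters' minimal-pass-level estimates into
# a passing score, with equal and with differential item weights.
# All estimates are invented for demonstration.
from statistics import mean, pvariance

# Minimal pass levels: rows = raters, columns = items.
mpl = [
    [0.7, 0.5, 0.6],
    [0.8, 0.4, 0.6],
    [0.6, 0.6, 0.7],
]

def unweighted_pass_score(mpl):
    """Simplest procedure: sum each item's mean rater estimate."""
    n_items = len(mpl[0])
    return sum(mean(r[i] for r in mpl) for i in range(n_items))

def weighted_pass_score(mpl):
    """One possible differential weighting (an assumption, not the
    study's method): weight each item's mean estimate by the reciprocal
    of the variance of rater estimates, so items on which raters agree
    more count more, then rescale weights to sum to the item count."""
    n_items = len(mpl[0])
    means = [mean(r[i] for r in mpl) for i in range(n_items)]
    weights = [1.0 / (pvariance([r[i] for r in mpl]) + 1e-6)
               for i in range(n_items)]
    scale = n_items / sum(weights)
    return sum(w * scale * m for w, m in zip(weights, means))

print(round(unweighted_pass_score(mpl), 3))
print(round(weighted_pass_score(mpl), 3))
```

With rater estimates that mostly agree, the two scores differ little, which illustrates why a simpler equal-weight procedure can be preferred in practice.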