50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.


Showing 1 to 15 of 89 results
Peer reviewed
Puhan, Gautam – Journal of Educational Measurement, 2013
When a constructed-response test form is reused, raw scores from the two administrations of the form may not be comparable. The solution to this problem requires a rescoring, at the current administration, of examinee responses from the previous administration. The scores from this "rescoring" can be used as an anchor for equating. In…
Descriptors: Scoring, Equated Scores, Testing, Correlation
Peer reviewed
Brennan, Robert L. – Journal of Educational Measurement, 2013
Kane's paper "Validating the Interpretations and Uses of Test Scores" is the most complete and clearest discussion yet available of the argument-based approach to validation. At its most basic level, validation as formulated by Kane is fundamentally a simply-stated two-step enterprise: (1) specify the claims inherent in a particular interpretation…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Peer reviewed
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameter estimates of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures--a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
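The two-parameter logistic (2PL) model named in the abstract above has a simple closed form; the following is a minimal sketch of its item response function (the function name and the parameter values are illustrative, not taken from the article):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model:
    P(theta) = 1 / (1 + exp(-a * (theta - b))),
    where a is item discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item's difficulty answers
# correctly with probability 0.5, regardless of discrimination.
print(p_2pl(0.0, a=1.2, b=0.0))  # 0.5
```

Speededness violates the conditional-independence assumption this function encodes, which is why end-of-test items can bias the estimates of a and b.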
Peer reviewed
Puhan, Gautam – Journal of Educational Measurement, 2012
Tucker and chained linear equatings were evaluated in two testing scenarios. In Scenario 1, referred to as rater comparability scoring and equating, the anchor-to-total correlation is often very high for the new form but moderate for the reference form. This may adversely affect the results of Tucker equating, especially if the new and reference…
Descriptors: Testing, Scoring, Equated Scores, Statistical Analysis
Peer reviewed
Leckie, George; Baird, Jo-Anne – Journal of Educational Measurement, 2011
This study examined rater effects on essay scoring in an operational monitoring system from England's 2008 national curriculum English writing test for 14-year-olds. We fitted two multilevel models and analyzed: (1) drift in rater severity effects over time; (2) rater central tendency effects; and (3) differences in rater severity and central…
Descriptors: Scoring, Foreign Countries, National Curriculum, Writing Tests
Peer reviewed
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Peer reviewed
Clauser, Brian E.; Mee, Janet; Baldwin, Su G.; Margolis, Melissa J.; Dillon, Gerard F. – Journal of Educational Measurement, 2009
Although the Angoff procedure is among the most widely used standard setting procedures for tests comprising multiple-choice items, research has shown that subject matter experts have considerable difficulty accurately making the required judgments in the absence of examinee performance data. Some authors have viewed the need to provide…
Descriptors: Standard Setting (Scoring), Program Effectiveness, Expertise, Health Personnel
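The classic Angoff procedure referenced above reduces to averaging judges' item-level probability estimates and summing across items; the sketch below follows that textbook description only (the judge ratings and function name are hypothetical, not data from the study):

```python
def angoff_cut_score(ratings):
    """Angoff cut score: for each item, average the judges' estimated
    probabilities that a minimally competent examinee answers it
    correctly, then sum those means across items."""
    n_items = len(ratings[0])
    item_means = [sum(judge[i] for judge in ratings) / len(ratings)
                  for i in range(n_items)]
    return sum(item_means)

# Three hypothetical judges rating four items:
judges = [[0.6, 0.7, 0.5, 0.8],
          [0.5, 0.8, 0.6, 0.7],
          [0.7, 0.6, 0.4, 0.9]]
print(angoff_cut_score(judges))  # ~2.6 raw-score points
```

The difficulty the abstract describes is that judges must produce these probabilities accurately without seeing how examinees actually performed.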
Peer reviewed
Myford, Carol M.; Wolfe, Edward W. – Journal of Educational Measurement, 2009
In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…
Descriptors: English Literature, Advanced Placement, Measures (Individuals), Writing (Composition)
Peer reviewed
Gierl, Mark J.; Cui, Ying; Zhou, Jiawen – Journal of Educational Measurement, 2009
The attribute hierarchy method (AHM) is a psychometric procedure for classifying examinees' test item responses into a set of structured attribute patterns associated with different components from a cognitive model of task performance. Results from an AHM analysis yield information on examinees' cognitive strengths and weaknesses. Hence, the AHM…
Descriptors: Test Items, True Scores, Psychometrics, Algebra
Peer reviewed
Stout, William – Journal of Educational Measurement, 2007
This article summarizes the continuous latent trait IRT approach to skills diagnosis as particularized by a representative variety of continuous latent trait models using item response functions (IRFs). First, several basic IRT-based continuous latent trait approaches are presented in some detail. Then a brief summary of estimation, model…
Descriptors: Identification, Item Response Theory, Scoring, Middle Schools
Peer reviewed
Liu, Jinghua; Cahn, Miriam F.; Dorans, Neil J. – Journal of Educational Measurement, 2006
The College Board's SAT® data are used to illustrate how the score equity assessment (SEA) can help inform the program about equatability. SEA is used to examine whether the content change(s) to the revised new SAT result in differential linking functions across gender groups. Results of population sensitivity analyses are reported on the…
Descriptors: Aptitude Tests, Comparative Analysis, Gender Differences, Scores
Peer reviewed
Kim, Dong-In; Brennan, Robert; Kolen, Michael – Journal of Educational Measurement, 2005
Four equating methods (3PL true score equating, 3PL observed score equating, beta 4 true score equating, and beta 4 observed score equating) were compared using four equating criteria: first-order equity (FOE), second-order equity (SOE), conditional-mean-squared-error (CMSE) difference, and the equi-percentile equating property. True score…
Descriptors: True Scores, Psychometrics, Equated Scores, Item Response Theory
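The equipercentile equating property listed among the criteria above maps a score on one form to the score on the other form with the same percentile rank. The following is a bare-bones sketch of that idea only; operational equating adds presmoothing and works from tabulated score distributions, and all names and data here are illustrative:

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, x):
    """Map score x on form X to the form-Y score with the same
    percentile rank (minimal sketch: empirical CDF on X, matching
    quantile on Y)."""
    pr = np.mean(np.asarray(x_scores) <= x)              # percentile rank of x on form X
    return float(np.quantile(np.asarray(y_scores), pr))  # form-Y score at that rank

rng = np.random.default_rng(0)
x_form = rng.normal(50, 10, 5000)   # simulated form-X scores
y_form = x_form + 3                 # form Y built to be 3 points easier
print(round(equipercentile_equate(x_form, y_form, 50.0), 1))  # near 53
```

Because form Y was constructed 3 points easier, a form-X score of 50 equates to roughly 53 on form Y, which is the behavior the equipercentile property demands.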
Peer reviewed
Wainer, Howard; Wang, X. A.; Skorupski, William P.; Bradlow, Eric T. – Journal of Educational Measurement, 2005
In this note, we demonstrate an interesting use of the posterior distributions (and corresponding posterior samples of proficiency) that are yielded by fitting a fully Bayesian test scoring model to a complex assessment. Specifically, we examine the efficacy of the test in combination with the specific passing score that was chosen through expert…
Descriptors: Scoring, Bayesian Statistics
Peer reviewed
Wainer, Howard; Wang, Xiaohui – Journal of Educational Measurement, 2000
Modified the three-parameter model to include an additional random effect for items nested within the same testlet. Fitted the new model to 86 testlets from the Test of English as a Foreign Language (TOEFL) and compared standard parameters (discrimination, difficulty, and guessing) with those obtained through traditional modeling. Discusses the…
Descriptors: English (Second Language), Language Tests, Scoring, Statistical Analysis
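The testlet extension described above adds a random effect for items nested within the same testlet to the standard model's logit. A hedged sketch of the resulting response function, using the three-parameter logistic form the abstract modifies (the function name and the gamma parameterization are illustrative; this is not the authors' estimation code):

```python
import math

def p_3pl_testlet(theta, a, b, c, gamma=0.0):
    """3PL probability with a testlet effect gamma shifting the logit:
    P = c + (1 - c) / (1 + exp(-a * (theta - b - gamma))).
    With gamma = 0 this reduces to the ordinary 3PL model; a nonzero
    gamma captures dependence among items sharing a testlet."""
    z = a * (theta - b - gamma)
    return c + (1.0 - c) / (1.0 + math.exp(-z))

print(p_3pl_testlet(0.0, a=1.0, b=0.0, c=0.2))             # plain 3PL: 0.6
print(p_3pl_testlet(0.0, a=1.0, b=0.0, c=0.2, gamma=1.0))  # testlet effect lowers it
```

In estimation, gamma would be treated as a random effect per examinee-testlet pair rather than a fixed input, which is what distinguishes the testlet model from fitting the 3PL item by item.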
Peer reviewed
Clauser, Brian E.; Harik, Polina; Clyman, Stephen G. – Journal of Educational Measurement, 2000
Used generalizability theory to assess the impact of using independent, randomly equivalent groups of experts to develop scoring algorithms for computer simulation tasks designed to measure physicians' patient management skills. Results with three groups of four medical school faculty members each suggest that the impact of the expert group may be…
Descriptors: Computer Simulation, Generalizability Theory, Performance Based Assessment, Physicians