ERIC Number: EJ1253238
Record Type: Journal
Publication Date: 2020-Jun
Pages: 23
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-0013-1644
EISSN: N/A
Rasch versus Classical Equating in the Context of Small Sample Sizes
Babcock, Ben; Hodge, Kari J.
Educational and Psychological Measurement, v80 n3 p499-521 Jun 2020
Equating and scaling in the context of small sample exams, such as credentialing exams for highly specialized professions, has received increased attention in recent research. Investigators have proposed a variety of both classical and Rasch-based approaches to the problem. This study attempts to extend past research by (1) directly comparing classical and Rasch techniques of equating exam scores when sample sizes are small (N ≤ 100 per exam form) and (2) attempting to pool multiple forms' worth of data to improve estimation in the Rasch framework. We simulated multiple years of a small-sample exam program by resampling from a larger certification exam program's real data. Results showed that combining multiple administrations' worth of data via the Rasch model can lead to more accurate equating compared to classical methods designed to work well in small samples. WINSTEPS-based Rasch methods that used multiple exam forms' data worked better than Bayesian Markov Chain Monte Carlo methods, as the prior distribution used to estimate the item difficulty parameters biased predicted scores when there were difficulty differences between exam forms.
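The setup the abstract describes, Rasch (1PL) response data generated for two small-sample forms that differ in difficulty, followed by a classical equating adjustment, can be illustrated with a minimal sketch. This is not the authors' code: the sample size, item count, 0.3-logit difficulty shift, and the use of simple mean equating as the classical method are illustrative assumptions only.

```python
# Minimal sketch, not the study's implementation: simulate Rasch (1PL) responses
# for two small-sample exam forms and apply a classical mean-equating adjustment.
import numpy as np

rng = np.random.default_rng(1)

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for abilities theta and item difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

n_examinees = 100                 # small-sample condition (N <= 100 per form)
n_items = 40                      # assumed form length
theta = rng.normal(0.0, 1.0, n_examinees)

b_form_x = rng.normal(0.0, 1.0, n_items)   # reference form
b_form_y = b_form_x + 0.3                  # harder alternate form (assumed shift)

resp_x = rng.binomial(1, rasch_prob(theta, b_form_x))
resp_y = rng.binomial(1, rasch_prob(theta, b_form_y))

# Classical mean equating: shift Form Y raw scores by the difference in form means.
scores_x = resp_x.sum(axis=1)
scores_y = resp_y.sum(axis=1)
equated_y = scores_y + (scores_x.mean() - scores_y.mean())

print("Form X mean score:", scores_x.mean())
print("Form Y mean score (raw / mean-equated):", scores_y.mean(), equated_y.mean())
```

In a Rasch-based approach, by contrast, the item difficulties of both forms would be placed on a common scale (e.g., via calibration in WINSTEPS or an MCMC estimator) and scores equated through the estimated parameters rather than raw-score moments; the abstract's finding is that pooling multiple administrations' data for that calibration helped, while informative priors on difficulty introduced bias when forms differed in difficulty.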
Descriptors: Item Response Theory, Equated Scores, Scaling, Sample Size, Markov Processes, Monte Carlo Methods, Maximum Likelihood Statistics, Bayesian Statistics
SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: http://sagepub.com
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A