Peer reviewed
ERIC Number: EJ983047
Record Type: Journal
Publication Date: 2012
Pages: 22
Abstractor: As Provided
Reference Count: 38
ISSN: 1530-5058
Obtaining Content Weights for Test Specifications from Job Analysis Task Surveys: An Application of the Many-Facets Rasch Model
Wang, Ning; Stahl, John
International Journal of Testing, v12 n4 p299-320 2012
This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data and to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those calculated from the Rasch Rating Scale Model (RRSM) (Spray & Huang, 2000) using two examples of actual job analysis data from diverse professions. In addition, this article proposes a solution for establishing the origin of the percentage scale when transforming the task importance weights from logit units into percentage weights. Although the resulting test specifications from the two compared methods are not radically different, a case is made that the use of the Many-Facets Rasch Model with a zero point based on the frequency rating scale provides a more justifiable basis for combining multiple rating scales and transforming task survey data. In addition, this study found that the Many-Facets Rasch Model can better accommodate missing data than the RRSM method in situations in which respondents rate only subsets of the multiple scales, rather than all of the scales, for the tasks being surveyed. (Contains 4 tables, 2 figures, and 3 notes.)
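The transformation the abstract describes, from Rasch logit measures of task importance to percentage content weights anchored at a chosen origin, could be sketched as follows. This is a minimal illustration only: the task names, logit values, and the `origin` parameter are hypothetical, and the article's actual anchoring method (a zero point derived from the frequency rating scale) is only approximated here by a fixed shift.

```python
# Hypothetical sketch: convert Rasch logit measures of task importance
# into percentage content weights for test specifications.
# Values below are illustrative, not taken from the article.

def logits_to_percentage_weights(logits, origin):
    """Shift each logit by the chosen origin (e.g., a zero point based on
    the frequency rating scale), floor negatives at zero, and normalize
    so the weights sum to 100 percent."""
    shifted = [max(x - origin, 0.0) for x in logits]
    total = sum(shifted)
    if total == 0:
        raise ValueError("all tasks fall at or below the chosen origin")
    return [100.0 * s / total for s in shifted]

# Illustrative task measures in logits (hypothetical).
task_logits = {"Task A": 1.2, "Task B": 0.4, "Task C": -0.3}
weights = logits_to_percentage_weights(list(task_logits.values()), origin=-1.0)
for task, w in zip(task_logits, weights):
    print(f"{task}: {w:.1f}%")
```

Because the shift is applied before normalization, the choice of origin changes the relative spread of the percentage weights, which is why establishing a justifiable zero point, as the article argues, matters for the resulting test specifications.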
Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site:
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A