ERIC Number: ED457183
Record Type: Non-Journal
Publication Date: 1999-Apr
Pages: 49
Abstractor: N/A
Reference Count: N/A
Examining Reliability and Validity of Job Analysis Survey Data.
Wang, Ning; Wiser, Randall F.; Newman, Larry S.
Job analysis has played a fundamental role in developing and validating licensure and certification examinations, yet research on what constitutes reliable and valid job analysis data is scarce. This paper examines the reliability and validity of job analysis survey results. Generalizability theory and the multi-facet Rasch item response theory (IRT) model (FACETS) are applied to investigate the consistency and generalizability of task importance measures, suggest a reliable sample size, justify the number and use of rating scales, and detect possible rating errors. Using random samples from job analysis data for two professions with divergent job activities, the study finds that a representative sample of as few as 400 respondents produces estimates of task importance as generalizable as those obtained from a much larger sample of job analysis respondents. Analyses of rating scales suggest that the effectiveness of differing numbers and types of rating scales depends on the nature of the profession. Restricted rating ranges and fatigue effects are the two types of erratic rating identified in this study. Results indicate that FACETS indices, such as rater severity and the infit and outfit statistics, are efficient and precise in detecting these rating errors. Appendixes contain charts of task importance measures in Rasch logits, with transformed percentage weights, for combinations of rating scales and data. (Contains 9 tables and 21 references.) (SLD)
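The infit and outfit statistics the abstract mentions are standard Rasch fit indices. As an illustrative sketch only (not the paper's FACETS analysis, and the ratings, expected scores, and variances below are hypothetical), both are mean-squares of squared rating residuals, differing in how observations are weighted:

```python
# Hedged sketch: Rasch infit/outfit mean-squares for one rater.
# observed  -- raw ratings x_ni assigned by the rater
# expected  -- model-expected scores E_ni for those observations
# variance  -- model variances W_ni of each observation
# Mean-squares near 1.0 indicate ratings consistent with the model;
# values well above 1.0 flag erratic ratings, well below 1.0 flag
# overly uniform (restricted-range) ratings.

def fit_statistics(observed, expected, variance):
    """Return (infit, outfit) mean-square statistics."""
    sq_resid = [(x - e) ** 2 for x, e in zip(observed, expected)]
    # Outfit: unweighted mean of standardized squared residuals,
    # sensitive to isolated outlying ratings.
    outfit = sum(r / w for r, w in zip(sq_resid, variance)) / len(observed)
    # Infit: information-weighted mean-square, sensitive to
    # misfit on well-targeted (high-information) observations.
    infit = sum(sq_resid) / sum(variance)
    return infit, outfit

# Hypothetical data for one rater on four tasks.
infit, outfit = fit_statistics(
    observed=[3, 5, 2, 4],
    expected=[3.4, 4.1, 2.8, 3.2],
    variance=[0.8, 0.9, 0.7, 0.85],
)
```

Software such as FACETS reports these statistics per rater alongside a severity measure, which is how the study screens for the restricted-range and fatigue effects it describes.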
Publication Type: Numerical/Quantitative Data; Reports - Research; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A