ERIC Number: ED551750
Record Type: Non-Journal
Publication Date: 2012
Pages: 166
Abstractor: As Provided
Reference Count: N/A
ISBN: 978-1-2678-2737-1
The Relationship between Rating Scales Used to Evaluate Tasks from Task Inventories for Licensure and Certification Examinations
Cadle, Adrienne Woodley
ProQuest LLC, Ph.D. Dissertation, University of South Florida
The first step in developing or updating a licensure or certification examination is to conduct a job or task analysis. Following completion of the job analysis, a survey validation study is performed to validate the results of the job analysis and to obtain task ratings so that an examination blueprint may be created. Psychometricians and job analysts have spent years debating which scales should be used to evaluate job tasks, as well as how those scales should be combined to create an examination blueprint. The purpose of this study was to determine the relationship between individual and composite rating scales, to examine how that relationship varied across industries, sample sizes, task presentation order, and number of tasks rated, and to evaluate whether examination blueprint weightings would differ based on the choice of scales or composites of scales used. Findings from this study should guide psychometricians and job analysts in their choice of rating scales, their choice of composites of rating scales, and how to create examination blueprints based upon individual and/or composite rating scales. A secondary data analysis was performed to help answer some of these questions. As part of the secondary data analysis, data from 20 survey validation studies performed during a five-year period were analyzed. Correlations were computed between 29 pairings of individual and composite rating scales to determine whether there were redundancies in task ratings. Meta-analytic techniques were used to evaluate the relationship between each pairing of rating scales and to determine whether that relationship was affected by factors such as industry, sample size, task presentation order, and number of tasks rated. Lastly, sample examination blueprints were created from several individual and composite rating scales to determine whether the rating scales used to create the examination blueprints would ultimately affect the weighting of the examination blueprint.
The results of this study suggest that there is a high degree of redundancy between certain pairs of scales (e.g., the Importance and Criticality rating scales are highly related) and a somewhat lower degree of redundancy between other rating scales, but that the same relationship between rating scales is observed across many variables, including the industry for which the job analysis was performed. The results also suggest that the choice of rating scales used to create examination blueprints does not have a large effect on the finalized examination blueprint. This is especially true if a composite rating scale is used to create the weighting on the examination blueprint. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone: 800-521-0600. Web page:]
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site:
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A