ERIC Number: ED355254
Record Type: Non-Journal
Publication Date: 1992-Dec
Pages: 4
Abstractor: N/A
Reference Count: N/A
Reducing Errors Due to the Use of Judges. ERIC/TM Digest.
Rudner, Lawrence M.
Several common sources of error in assessments that depend on the use of judges are identified, and ways to reduce the impact of rating errors are examined. Numerous threats to the validity of scores based on ratings exist. These threats include: (1) the halo effect; (2) stereotyping; (3) perception differences; (4) leniency/stringency error; and (5) scale shrinking. An established body of literature shows that training can minimize rater effects. To be successful, rater training should familiarize judges with the measures they will use, ensure that they understand the sequence of operations they must perform, and explain how any normative data should be interpreted. The choice of judges may have a significant impact. Considering demographic variables, choosing representatives from expert and interest groups, and forming smaller working groups can make the choice of judges more effective. Several statistical approaches may be followed to adjust potentially biased ratings given by different sets of multiple raters. Three approaches discussed in the literature are: (1) ordinary least squares regression; (2) weighted least squares regression; and (3) imputation of missing data. The imputation approach is most appropriate when not every judge rates every examinee, since unobserved ratings can be treated as missing data. The weighted regression approach is most appropriate when variations are expected in rater reliability. (SLD)
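The weighted-regression idea in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the digest's prescribed procedure: it assumes each judge has a known reliability weight and estimates an examinee's score with an intercept-only weighted-least-squares model, for which the solution is simply the reliability-weighted mean of the ratings (minimizing the weighted sum of squared deviations sum_j w_j * (r_j - mu)^2 gives mu = sum(w_j * r_j) / sum(w_j)).

```python
# Hypothetical sketch of reliability-weighted score adjustment.
# For an intercept-only model, the weighted-least-squares estimate of an
# examinee's score is the reliability-weighted mean of the judges' ratings.

def wls_score(ratings, reliabilities):
    """Reliability-weighted estimate of one examinee's score.

    ratings       -- ratings of one examinee from different judges
    reliabilities -- assumed per-judge reliability weights (0..1)
    """
    if len(ratings) != len(reliabilities):
        raise ValueError("one reliability weight per rating is required")
    total_weight = sum(reliabilities)
    if total_weight == 0:
        raise ValueError("at least one nonzero reliability is required")
    # Weighted mean: reliable judges pull the estimate more strongly.
    return sum(w * r for w, r in zip(reliabilities, ratings)) / total_weight
```

A highly reliable judge (weight 0.9) pulls the estimate toward their rating far more than an unreliable one (weight 0.2), which is the sense in which the weighted approach compensates for expected variation in rater reliability.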
American Institutes for Research, 3333 K Street, N.W., Suite 300, Washington, DC 20007 (free).
Publication Type: ERIC Publications; ERIC Digests in Full Text
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Office of Educational Research and Improvement (ED), Washington, DC.
Authoring Institution: ERIC Clearinghouse on Tests, Measurement, and Evaluation, Washington, DC.