| Publication Date | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 1 |
| Since 2011 (last 5 years) | 1 |
| Since 2006 (last 10 years) | 2 |
| Since 1996 (last 20 years) | 5 |
| Descriptor | Count |
| --- | --- |
| Physicians | 3 |
| Scoring | 3 |
| Computer Assisted Testing | 2 |
| Generalizability Theory | 2 |
| Judges | 2 |
| Medical Students | 2 |
| Performance Based Assessment | 2 |
| Probability | 2 |
| Standard Setting (Scoring) | 2 |
| Test Construction | 2 |
| Source | Count |
| --- | --- |
| Applied Measurement in… | 5 |
| Author | Count |
| --- | --- |
| Clauser, Brian E. | 5 |
| Clyman, Stephen G. | 2 |
| Margolis, Melissa J. | 2 |
| Swanson, David B. | 2 |
| Chang, Lucy | 1 |
| Chis, Liliana | 1 |
| Clauser, Jerome C. | 1 |
| El-Bayoumi, Gigi | 1 |
| Hambleton, Ronald K. | 1 |
| Harik, Polina | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Evaluative | 2 |
| Reports - Research | 2 |
| Reports - Descriptive | 1 |
Showing all 5 results
Clauser, Jerome C.; Clauser, Brian E.; Hambleton, Ronald K. – Applied Measurement in Education, 2014
The purpose of the present study was to extend past work with the Angoff method for setting standards by examining judgments at the judge level rather than the panel level. The focus was on investigating the relationship between observed Angoff standard setting judgments and empirical conditional probabilities. This relationship has been used as a…
Descriptors: Standard Setting (Scoring), Validity, Reliability, Correlation
Clauser, Brian E.; Harik, Polina; Margolis, Melissa J.; McManus, I. C.; Mollon, Jennifer; Chis, Liliana; Williams, Simon – Applied Measurement in Education, 2009
Numerous studies have compared the Angoff standard-setting procedure to other standard-setting methods, but relatively few studies have evaluated the procedure based on internal criteria. This study uses a generalizability theory framework to evaluate the stability of the estimated cut score. To provide a measure of internal consistency, this…
Descriptors: Generalizability Theory, Group Discussion, Standard Setting (Scoring), Scoring
Clauser, Brian E.; Kane, Michael T.; Swanson, David B. – Applied Measurement in Education, 2002 (peer reviewed)
Places the issues associated with computer-automated scoring within the context of current validity theory and presents a taxonomy of automated scoring procedures as a framework for discussing threats to validity that may take on increased importance for specific approaches to automated scoring. (SLD)
Descriptors: Classification, Computer Uses in Education, Performance Based Assessment, Test Construction
Clauser, Brian E.; Swanson, David B.; Clyman, Stephen G. – Applied Measurement in Education, 1999 (peer reviewed)
Performed generalizability analyses of expert ratings and computer-produced scores for a computer-delivered performance assessment of physicians' patient management skills. The two automated scoring systems produced scores for the 200 medical students that were approximately as generalizable as those produced by the four expert raters. (SLD)
Descriptors: Comparative Analysis, Computer Assisted Testing, Generalizability Theory, Higher Education
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S. – Applied Measurement in Education, 1997 (peer reviewed)
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated with actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Descriptors: Algorithms, Computer Assisted Testing, Computer Simulation, Evaluators
