Showing 1 to 15 of 19 results
Peer reviewed
Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A. – Educational and Psychological Measurement, 2013
The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…
Descriptors: Item Response Theory, Models, Standard Setting (Scoring), Science Tests
Peer reviewed
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Peer reviewed
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2008
Even when the scoring of an examination is based on item response theory (IRT), standard-setting methods seldom use this information directly when determining the minimum passing score (MPS) for an examination from an Angoff-based standard-setting study. Often, when IRT scoring is used, the MPS value for a test is converted to an IRT-based theta…
Descriptors: Standard Setting (Scoring), Scoring, Cutting Scores, Item Response Theory
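The Angoff-based MPS computation mentioned in the abstract above can be sketched in a few lines: each judge estimates, for each item, the probability that a minimally competent candidate answers it correctly, and the raw-score MPS is the sum across items of the mean judge estimate. This is a minimal illustration with invented judge values; the conversion of the MPS to an IRT-based theta scale that the abstract discusses is a separate step not shown here.

```python
# Hedged sketch of the classic Angoff minimum passing score (MPS):
# MPS = sum over items of the mean of the judges' probability estimates.
# All judge ratings below are hypothetical, for illustration only.

def angoff_mps(ratings):
    """ratings: one list per item, each holding the judges' probability estimates."""
    return sum(sum(item) / len(item) for item in ratings)

# Three items, three hypothetical judges each:
ratings = [
    [0.6, 0.7, 0.8],  # item 1: mean estimate 0.70
    [0.4, 0.5, 0.6],  # item 2: mean estimate 0.50
    [0.9, 0.8, 0.7],  # item 3: mean estimate 0.80
]
print(angoff_mps(ratings))  # 2.0 raw-score points on a 3-item test
```

A candidate would need a raw score at or above this sum to pass under the judged standard.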
Peer reviewed
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2007
In an Angoff standard-setting procedure, judges estimate the probability that a hypothetical randomly selected minimally competent candidate will answer each item in the test correctly. In many cases, these item performance estimates are made twice, with information shared with the panelists between estimates. Especially for long tests, this…
Descriptors: Test Items, Probability, Item Analysis, Standard Setting (Scoring)
Peer reviewed
Ferdous, Abdullah A.; Plake, Barbara S. – Educational and Psychological Measurement, 2005
In an Angoff standard-setting procedure, judges estimate the probability that a hypothetical randomly selected minimally competent candidate will correctly answer each item constituting the test. In many cases, these item performance estimates are made twice, with information shared with the judges between estimates. Especially for long tests,…
Descriptors: Test Items, Probability, Standard Setting (Scoring)
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1983
The purpose of this study was to investigate further the effect of differential item performance by males and females on tests which have different item arrangements. The study allows for a more accurate evaluation of whether differential sensitivity to reinforcement strategies is a factor in performance discrepancies for males and females.…
Descriptors: Feedback, Higher Education, Performance Factors, Quantitative Tests
Peer reviewed
Plake, Barbara S. – Educational and Psychological Measurement, 1981
The methodology suggested in this paper employs a selection rule for identifying group members that generates groups with a range of achievement within groups but an equal distribution of raw scores between them. (Author/BW)
Descriptors: Achievement Tests, Analysis of Variance, Elementary Education, Experimental Groups
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1995
No significant differences in test performance or anxiety were found for college students (n=218) taking a self-adapted test who selected item difficulty without any prior information, inspected an item before selecting, or answered a typical item and received performance feedback. (SLD)
Descriptors: Achievement, Adaptive Testing, College Students, Computer Assisted Testing
Peer reviewed
Plake, Barbara S.; Ansorge, Charles J. – Educational and Psychological Measurement, 1984
Scores representing the number of items answered correctly and self-perceptions were analyzed for a nonquantitative examination that was assembled into three forms. Multivariate ANCOVA revealed no significant effects for the cognitive measure. However, significant sex and sex × order effects were found for perception scores not parallel to those reported…
Descriptors: Analysis of Covariance, Higher Education, Multiple Choice Tests, Scores
Peer reviewed
Plake, Barbara S.; Huntley, Renee M. – Educational and Psychological Measurement, 1984
Two studies examined the effect of making the correct answer of a multiple-choice test item grammatically consistent with the item. American College Testing Assessment experimental items were constructed to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…
Descriptors: Grammar, Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1982
The use of the Estes Reading Attitude Scale as a measure of academic attitude of fourth-, fifth-, and sixth- grade Mexican-American and Anglo student groups did not lead to substantial bias in test score interpretation. These results indicate that responses on the test as a whole can be judged as valid. (Author/PN)
Descriptors: Anglo Americans, Attitude Measures, Ethnic Groups, Factor Structure
Peer reviewed
Phifer, Sandra J.; Plake, Barbara S. – Educational and Psychological Measurement, 1983
The results of this study of the factorial validity of the Bias in Attitudes Survey (BIAS) suggest that the BIAS, relative to other sex-role scales, measures more complete and complex attitudes toward sex roles. Two factors represented both the traditional view of male/female roles and a nonsexist view. (Author/BW)
Descriptors: Attitude Measures, Factor Structure, Higher Education, Sex Differences
Peer reviewed
Jonson, Jessica L.; Plake, Barbara S. – Educational and Psychological Measurement, 1998
The relationship between the validity theory of the past 50 years and actual validity practices was studied by comparing published test standards with the practices of measurement professionals expressed in the "Mental Measurements Yearbook" test reviews. Results show a symbiotic relationship between theory and practice on the influence…
Descriptors: Educational Testing, Measurement Techniques, Standards, Test Use
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1997
The dominant profile judgment method, designed for use with profiles of polytomous scores on exercises in a performance-based assessment, is presented as a standard-setting method. The approach guides standard-setting panelists in articulating their standard-setting policies and allows for complex policy statements. (SLD)
Descriptors: Educational Policy, Field Tests, Performance Based Assessment, Policy Formation
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1988
The effect of item context on differential item performance based on gender on mathematics test items was studied, using 404 male and 375 female adults. The analyses were based on a modified one-parameter item response theory methodology. Gender differences emerged; however, they may be due to chance. (TJH)
Descriptors: Achievement Tests, Adults, Latent Trait Theory, Mathematics Tests