50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 151 to 165 of 3,486 results
Peer reviewed
Zopluoglu, Cengiz; Davenport, Ernest C., Jr. – Educational and Psychological Measurement, 2012
The generalized binomial test (GBT) and [omega] indices are the most recent methods suggested in the literature to detect answer copying behavior on multiple-choice tests. The [omega] index is one of the most studied indices, but there has not yet been a systematic simulation study for the GBT index. In addition, the effect of the ability levels…
Descriptors: Statistical Analysis, Error of Measurement, Simulation, Multiple Choice Tests
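The GBT index referenced in the abstract above evaluates the observed number of identical answers between a source and a suspected copier against a compound (Poisson) binomial null distribution, in which each item has its own probability of a chance match. As a generic illustration only (not the authors' code; the per-item match probabilities are assumed to have been estimated beforehand from an IRT model), the tail probability of that distribution can be computed by dynamic programming:

```python
def poisson_binomial_tail(match_probs, observed):
    """P(M >= observed), where M is the number of matching answers and
    M = sum of independent Bernoulli(p_j) indicators, one per item.

    match_probs: per-item probabilities of a chance match (assumed given)
    observed:    observed number of identical answers
    """
    dist = [1.0]  # dist[k] = P(M = k) over the items processed so far
    for p in match_probs:
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            new[k] += mass * (1 - p)      # item does not match
            new[k + 1] += mass * p        # item matches
        dist = new
    return sum(dist[observed:])
```

For example, with two items each having match probability 0.5, the probability of one or more matches is 0.75; a small tail probability for the observed match count flags possible copying.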
Peer reviewed
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive testing (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Peer reviewed
Frey, Andreas; Seitz, Nicki-Nils – Educational and Psychological Measurement, 2011
The usefulness of multidimensional adaptive testing (MAT) for the assessment of student literacy in the Programme for International Student Assessment (PISA) was examined within a real data simulation study. The responses of N = 14,624 students who participated in the PISA assessments of the years 2000, 2003, and 2006 in Germany were used to…
Descriptors: Adaptive Testing, Literacy, Academic Achievement, Achievement Tests
Peer reviewed
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente – Educational and Psychological Measurement, 2011
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
Descriptors: Factor Analysis, Factor Structure, Correlation, Measurement
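Velicer's MAP procedure named in the abstract above is well documented: principal components are partialed out of the correlation matrix one at a time, and the number of components is taken where the average squared off-diagonal partial correlation reaches its minimum. A minimal numpy sketch of the original (squared) criterion, assuming a Pearson correlation matrix as input (the study itself concerns categorical data, where polychoric correlations would typically be substituted):

```python
import numpy as np

def velicer_map(R):
    """Velicer's minimum average partial (MAP) test, original squared version.

    R: p x p correlation matrix.
    For m = 0, 1, ..., p - 2, the first m principal components are
    partialed out of R and the average squared off-diagonal partial
    correlation is recorded; the m with the smallest average is the
    suggested number of components.
    Returns (suggested_m, list_of_averages).
    """
    p = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]              # components, descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    averages = []
    for m in range(p - 1):
        if m == 0:
            partial = R.copy()
        else:
            loadings = eigvecs[:, :m] * np.sqrt(eigvals[:m])
            residual = R - loadings @ loadings.T   # partial out m components
            d = np.sqrt(np.diag(residual))
            partial = residual / np.outer(d, d)    # rescale to correlations
        np.fill_diagonal(partial, 0.0)
        averages.append(float((partial ** 2).sum() / (p * (p - 1))))
    return int(np.argmin(averages)), averages
```

For an equicorrelation matrix (a single common factor), partialing out the first component collapses the average squared partial correlation, which is the signature the MAP minimum is looking for.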
Peer reviewed
Brown, Anna; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2011
Multidimensional forced-choice formats can significantly reduce the impact of numerous response biases typically associated with rating scales. However, if scored with classical methodology, these questionnaires produce ipsative data, which lead to distorted scale relationships and make comparisons between individuals problematic. This research…
Descriptors: Item Response Theory, Models, Questionnaires, Measurement Techniques
Peer reviewed
Skaggs, Gary; Hein, Serge F. – Educational and Psychological Measurement, 2011
Judgmental standard setting methods have been criticized for the cognitive complexity of the judgment task that panelists are asked to complete. This study compared two methods designed to reduce this complexity: the yes/no method and the single-passage bookmark method. Two mock standard setting panel meetings were convened, one for each method,…
Descriptors: Standard Setting (Scoring), Methods, Cutting Scores, Experienced Teachers
Peer reviewed
Preston, Kathleen; Reise, Steven; Cai, Li; Hays, Ron D. – Educational and Psychological Measurement, 2011
The authors used a nominal response item response theory model to estimate category boundary discrimination (CBD) parameters for items drawn from the Emotional Distress item pools (Depression, Anxiety, and Anger) developed in the Patient-Reported Outcomes Measurement Information Systems (PROMIS) project. For polytomous items with ordered response…
Descriptors: Item Response Theory, Models, Item Banks, Rating Scales
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Wheeler, Denna L.; Vassar, Matt; Worley, Jody A.; Barnes, Laura L. B. – Educational and Psychological Measurement, 2011
The purpose of this study was to synthesize internal consistency reliability for the subscale scores on the Maslach Burnout Inventory (MBI). The authors addressed three research questions: (a) What is the mean subscale score reliability for the MBI across studies? (b) What factors are associated with observed variance in MBI subscale score…
Descriptors: Burnout, Reliability, Measures (Individuals), Meta Analysis
Peer reviewed
Nilsson, Johanna E.; Marszalek, Jacob M.; Linnemeyer, Rachel M.; Bahner, Angela D.; Misialek, Leah Hanson – Educational and Psychological Measurement, 2011
This article describes the development and the initial psychometric evaluation of the Social Issues Advocacy Scale in two studies. In the first study, an exploratory factor analysis (n = 278) revealed a four-factor scale, accounting for 71.4% of the variance, measuring different aspects of social issue advocacy: Political and Social Advocacy,…
Descriptors: Social Problems, Life Satisfaction, Test Validity, Measures (Individuals)
Peer reviewed
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Peer reviewed
Kulas, John T.; Thompson, Richard C.; Anderson, Michael G. – Educational and Psychological Measurement, 2011
The California Psychological Inventory's Dominance scale was investigated for inconsistencies in item-trait associations across four samples (one American normative and three culturally dissociated manager groupings). The Kim, Cohen, and Park procedure was used, enabling simultaneous multigroup comparison in addition to the traditional…
Descriptors: Personality Traits, Measures (Individuals), Correlation, Prediction
Peer reviewed
Cheng, Ying-Yao; Chen, Li-Ming; Liu, Kun-Shia; Chen, Yi-Ling – Educational and Psychological Measurement, 2011
The study aims to develop three school bullying scales--the Bully Scale, the Victim Scale, and the Witness Scale--to assess secondary school students' bullying behaviors, including physical bullying, verbal bullying, relational bullying, and cyber bullying. The items of the three scales were developed from viewpoints of bullies, victims, and…
Descriptors: Bullying, School Safety, Measures (Individuals), Psychometrics
Peer reviewed
Penfield, Randall D. – Educational and Psychological Measurement, 2011
This article explores how the magnitude and form of differential item functioning (DIF) effects in multiple-choice items are determined by the underlying differential distractor functioning (DDF) effects, as modeled under the nominal response model. The results of a numerical investigation indicated that (a) the presence of one or more nonzero DDF…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Models
Peer reviewed
Goodman, Joshua T.; Willse, John T.; Allen, Nancy L.; Klaric, John S. – Educational and Psychological Measurement, 2011
The Mantel-Haenszel procedure is a popular technique for determining items that may exhibit differential item functioning (DIF). Numerous studies have focused on the strengths and weaknesses of this procedure, but few have focused the performance of the Mantel-Haenszel method when structurally missing data are present as a result of test booklet…
Descriptors: Test Bias, Identification, Tests, Test Length
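For reference, the Mantel-Haenszel procedure named in the abstract above pools one 2x2 table (group by correct/incorrect) per matching-score stratum into a common odds ratio; on the ETS delta scale the statistic is MH D-DIF = -2.35 ln(alpha). A generic sketch of that computation (function name and data layout are illustrative, not the authors' implementation, and it does not address the structurally missing data the study examines):

```python
from collections import defaultdict
from math import log

def mantel_haenszel_dif(responses, groups, strata):
    """Mantel-Haenszel common odds ratio for one studied item.

    responses: iterable of 0/1 scores on the studied item
    groups:    iterable of 'ref' / 'focal' labels, same order
    strata:    iterable of matching-criterion labels (e.g., total score)
    Returns (alpha_MH, MH D-DIF), where MH D-DIF = -2.35 * ln(alpha_MH).
    """
    # One 2x2 table [A, B, C, D] per stratum:
    # A/B = reference correct/incorrect, C/D = focal correct/incorrect.
    tables = defaultdict(lambda: [0, 0, 0, 0])
    for r, g, s in zip(responses, groups, strata):
        cell = (0 if g == 'ref' else 2) + (0 if r else 1)
        tables[s][cell] += 1

    num = den = 0.0
    for a, b, c, d in tables.values():
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha = num / den
    return alpha, -2.35 * log(alpha)
```

An alpha above 1 (negative MH D-DIF) indicates the item favors the reference group after matching on ability.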