Showing 1 to 15 of 54 results
Peer reviewed
Levy, Roy – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their…
Descriptors: Bayesian Statistics, Psychometrics, Item Response Theory, Statistical Inference
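As a loose, self-contained sketch of the kind of reasoning the module describes (updating beliefs about a latent ability from observed item responses), the snippet below runs a grid-approximation Bayesian update for a single examinee under a Rasch model. The item difficulties, prior, and response pattern are invented for illustration and are not taken from the module.

```python
import numpy as np

# Hypothetical item difficulties and one examinee's scored responses (1 = correct).
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])

# Discrete grid over the latent ability (theta) with a standard-normal prior.
theta = np.linspace(-4, 4, 401)
prior = np.exp(-0.5 * theta**2)

# Rasch likelihood: P(correct) = logistic(theta - difficulty) for each item.
p_correct = 1 / (1 + np.exp(-(theta[:, None] - difficulties[None, :])))
likelihood = np.prod(np.where(responses == 1, p_correct, 1 - p_correct), axis=1)

# Posterior over theta (normalized on the grid), then its mean (EAP) and SD.
posterior = prior * likelihood
posterior /= posterior.sum()
eap = np.sum(theta * posterior)
psd = np.sqrt(np.sum((theta - eap) ** 2 * posterior))
print(f"EAP ability estimate: {eap:.2f} (posterior SD {psd:.2f})")
```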
Peer reviewed
Jones, Andrew T.; Kopp, Jason P.; Ong, Thai Q. – Educational Measurement: Issues and Practice, 2020
Studies investigating invariance have often been limited to measurement or prediction invariance. Selection invariance, wherein the use of test scores for classification results in equivalent classification accuracy between groups, has received comparatively little attention in the psychometric literature. Previous research suggests that some form…
Descriptors: Test Construction, Test Bias, Classification, Accuracy
Peer reviewed
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
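As a hedged sketch of what a single replication in an IRT Monte Carlo simulation study can look like (not the module's own materials), the code below generates dichotomous responses under a 2PL model with invented generating parameters; a full study would pass each replication to an estimation routine and repeat the process many times.

```python
import numpy as np

rng = np.random.default_rng(2020)

# Invented generating parameters for a small 2PL simulation.
n_examinees, n_items = 1000, 10
discrimination = rng.uniform(0.8, 2.0, size=n_items)   # a-parameters
difficulty = rng.normal(0.0, 1.0, size=n_items)        # b-parameters
theta = rng.normal(0.0, 1.0, size=n_examinees)         # latent abilities

# 2PL response probabilities: P(X = 1) = logistic(a * (theta - b)).
logits = discrimination * (theta[:, None] - difficulty)
prob = 1 / (1 + np.exp(-logits))

# One replication of simulated dichotomous item responses.
responses = rng.binomial(1, prob)

# Quick sanity check on the generated data set.
print("Simulated item proportions correct:", responses.mean(axis=0).round(2))
```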
Peer reviewed
Soland, James – Educational Measurement: Issues and Practice, 2019
As computer-based tests become more common, there is a growing wealth of metadata related to examinees' response processes, which include solution strategies, concentration, and operating speed. One common type of metadata is item response time. While response times have been used extensively to improve estimates of achievement, little work…
Descriptors: Test Items, Item Response Theory, Metadata, Self Efficacy
Peer reviewed
van der Lans, Rikkert M.; van de Grift, Wim J. C. M.; van Veen, Klaas – Educational Measurement: Issues and Practice, 2019
Using item response theory, this study explores whether student survey and classroom observation items can be calibrated onto a common metric of teaching quality. The data comprises 269 lessons of 141 teachers that were scored on the International Comparative Analysis of Learning and Teaching (ICALT) observation instrument and the My Teacher…
Descriptors: Item Response Theory, Student Surveys, Classroom Observation Techniques, Teacher Effectiveness
Peer reviewed
Sinharay, Sandip – Educational Measurement: Issues and Practice, 2019
Test score users often demand the reporting of subscores due to their potential diagnostic, remedial, and instructional benefits. Therefore, there is substantial pressure on testing programs to report subscores. However, professional standards require that subscores satisfy minimum quality standards before they can be reported. In this…
Descriptors: Testing, Scores, Item Response Theory, Evaluation Methods
Peer reviewed
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
Peer reviewed
Ames, Allison; Myers, Aaron – Educational Measurement: Issues and Practice, 2019
Drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding the Bayesian approaches for evaluating model-data fit models…
Descriptors: Bayesian Statistics, Psychometrics, Models, Predictive Measurement
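One widely used Bayesian model-data fit tool is the posterior predictive check. Purely as a hedged toy illustration of that logic (not the article's psychometric models), the sketch below fits a deliberately restrictive common-probability model to invented number-correct scores and computes a posterior predictive p-value for the score variance, a discrepancy that such a model typically under-reproduces.

```python
import numpy as np

rng = np.random.default_rng(2019)

# Invented data: number-correct scores of 200 examinees on a 20-item test,
# generated with examinee-specific probabilities (so the data are overdispersed).
n_items = 20
scores = rng.binomial(n_items, rng.beta(6, 6, size=200))

# Restrictive model: one common success probability p with a Beta(1, 1) prior,
# giving a conjugate Beta posterior.
a_post = 1 + scores.sum()
b_post = 1 + (n_items * scores.size - scores.sum())

# Posterior predictive check: does the model reproduce the observed score variance?
observed_stat = scores.var()
replicated_stats = []
for _ in range(2000):
    p_draw = rng.beta(a_post, b_post)                       # draw p from the posterior
    score_rep = rng.binomial(n_items, p_draw, size=scores.size)
    replicated_stats.append(score_rep.var())

ppp = np.mean(np.array(replicated_stats) >= observed_stat)
print(f"Posterior predictive p-value for score variance: {ppp:.3f}")
# A value near 0 flags that the common-p model understates score dispersion.
```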
Peer reviewed
Madison, Matthew J. – Educational Measurement: Issues and Practice, 2019
Recent advances have enabled diagnostic classification models (DCMs) to accommodate longitudinal data. These longitudinal DCMs were developed to study how examinees change, or transition, between different attribute mastery statuses over time. This study examines using longitudinal DCMs as an approach to assessing growth and serves three purposes:…
Descriptors: Longitudinal Studies, Item Response Theory, Psychometrics, Criterion Referenced Tests
Peer reviewed
Allen, Jeff; Mattern, Krista – Educational Measurement: Issues and Practice, 2019
We examined summary indices of high school performance (coursework, grades, and test scores) based on the graded response model (GRM). The indices varied by inclusion of ACT test scores and whether high school courses were constrained to have the same difficulty and discrimination across groups of schools. The indices were examined with respect to…
Descriptors: High School Students, Academic Achievement, Secondary School Curriculum, Difficulty Level
Peer reviewed
Lindstrom Johnson, Sarah; Reichenberg, Ray E.; Shukla, Kathan; Waasdorp, Tracy E.; Bradshaw, Catherine P. – Educational Measurement: Issues and Practice, 2019
The U.S. government has become increasingly focused on school climate, as recently evidenced by its inclusion as an accountability indicator in the Every Student Succeeds Act. Yet, there remains considerable variability in both conceptualizing and measuring school climate. To better inform the research and practice related to school climate and…
Descriptors: Item Response Theory, Educational Environment, Accountability, Educational Legislation
Peer reviewed
Wang, Jue; Engelhard, George, Jr. – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Jue Wang and Dr. George Engelhard Jr. describe the Rasch measurement framework for the construction and evaluation of new measures and scales. From a theoretical perspective, they discuss the historical and philosophical perspectives on measurement with a focus on Rasch's concept of specific objectivity and…
Descriptors: Item Response Theory, Evaluation Methods, Measurement, Goodness of Fit
Peer reviewed
Jerrim, John; Parker, Philip; Choi, Alvaro; Chmielewski, Anna Katyn; Sälzer, Christine; Shure, Nikki – Educational Measurement: Issues and Practice, 2018
The Programme for International Student Assessment (PISA) is an important international study of 15-year-olds' knowledge and skills. New results are released every three years and have a substantial impact on education policy. Yet, despite its influence, the methodology underpinning PISA has received significant criticism. Much of this criticism has…
Descriptors: Educational Assessment, Comparative Education, Achievement Tests, Foreign Countries
Peer reviewed
Wind, Stefanie A. – Educational Measurement: Issues and Practice, 2018
In this digital ITEMS module, we introduce the framework of nonparametric item response theory (IRT), in particular Mokken scaling, which can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. We walk through the key distinction between parametric and nonparametric models, introduce the…
Descriptors: Educational Assessment, Nonparametric Statistics, Item Response Theory, Scaling
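The central statistic in Mokken scaling is Loevinger's scalability coefficient H. As a hedged illustration (the item data below are invented and the module's own materials are not reproduced), the snippet computes the total-scale H by comparing observed Guttman errors against those expected under marginal independence.

```python
import numpy as np

def loevinger_H(X):
    """Loevinger's scalability coefficient H for a 0/1 item-score matrix
    (rows = respondents, columns = items), the key statistic in Mokken scaling."""
    X = np.asarray(X)
    n, k = X.shape
    p = X.mean(axis=0)                      # item popularities (proportion scoring 1)
    obs_errors, exp_errors = 0.0, 0.0
    for i in range(k):
        for j in range(i + 1, k):
            # Order the pair so that 'easy' is the more popular item.
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: endorsing the hard item but not the easy one.
            obs_errors += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
            exp_errors += n * (1 - p[easy]) * p[hard]
    return 1 - obs_errors / exp_errors

# Invented responses: 6 respondents, 4 items roughly ordered by difficulty.
X = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])
# H >= .3 is the conventional minimum for a Mokken scale; >= .5 indicates a strong scale.
print(f"Scale H = {loevinger_H(X):.2f}")
```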
Peer reviewed
Hancock, Gregory R.; An, Ji – Educational Measurement: Issues and Practice, 2018
In this ITEMS module, we frame the topic of scale reliability within a "confirmatory factor analysis" and "structural equation modeling" (SEM) context and address some of the limitations of Cronbach's alpha. This modeling approach has two major advantages: (1) it allows researchers to make explicit the relation between their items…
Descriptors: Reliability, Structural Equation Models, Factor Analysis, Correlation
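The entry above contrasts Cronbach's alpha with SEM-based reliability estimates. Purely as a hedged reference point (the module's own SEM approach is not reproduced here, and the Likert data are invented), the snippet below computes coefficient alpha from an item-score matrix with its standard variance formula.

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an item-score matrix X (rows = respondents, cols = items)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses from 8 respondents to 4 items.
X = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [5, 4, 5, 5],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(X):.2f}")
# SEM-based estimates (e.g., coefficient omega from a one-factor model) relax
# alpha's tau-equivalence assumption, which is the limitation the module develops.
```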