Publication Date

| Date Range | Results |
| --- | --- |
| In 2024 | 145 |
| Since 2023 | 332 |
| Since 2020 (last 5 years) | 999 |
| Since 2015 (last 10 years) | 2119 |
| Since 2005 (last 20 years) | 4153 |
Descriptor

| Descriptor | Results |
| --- | --- |
| Item Response Theory | 5507 |
| Test Items | 1804 |
| Foreign Countries | 1186 |
| Models | 1137 |
| Psychometrics | 904 |
| Scores | 778 |
| Comparative Analysis | 759 |
| Test Construction | 741 |
| Simulation | 737 |
| Statistical Analysis | 659 |
| Difficulty Level | 566 |
Author

| Author | Results |
| --- | --- |
| Sinharay, Sandip | 48 |
| Wilson, Mark | 45 |
| Cohen, Allan S. | 43 |
| Meijer, Rob R. | 43 |
| Tindal, Gerald | 42 |
| Wang, Wen-Chung | 40 |
| Alonzo, Julie | 37 |
| Ferrando, Pere J. | 36 |
| Cai, Li | 35 |
| van der Linden, Wim J. | 35 |
| Glas, Cees A. W. | 34 |
Location

| Location | Results |
| --- | --- |
| Turkey | 94 |
| Australia | 88 |
| Germany | 79 |
| United States | 74 |
| Netherlands | 68 |
| Taiwan | 59 |
| Indonesia | 52 |
| Canada | 49 |
| China | 49 |
| Japan | 38 |
| Florida | 37 |
What Works Clearinghouse Rating

| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Applied Measurement in Education, 2024
A process is proposed to create the one-dimensional expected item characteristic curve (ICC) and test characteristic curve (TCC) for each trait in multidimensional forced-choice questionnaires based on the Rank-2PL (two-parameter logistic) item response theory models for forced-choice items with two or three statements. Some examples of ICC and…
Descriptors: Item Response Theory, Questionnaires, Measurement Techniques, Statistics
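The Rank-2PL models referenced above build on the standard two-parameter logistic item response function. As background, a minimal sketch of the ordinary unidimensional 2PL ICC and the TCC formed by summing item ICCs (this is the textbook 2PL, not the Rank-2PL itself; the parameter values are illustrative):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a keyed response
    at ability theta, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected total score at theta,
    i.e., the sum of the item ICCs over (a, b) parameter pairs."""
    return sum(icc_2pl(theta, a, b) for a, b in items)

# At theta == b the 2PL gives probability 0.5 regardless of a.
```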
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Wind, Stefanie A. – Educational and Psychological Measurement, 2023
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) function in psychometrically useful ways. When rating scales function as expected, researchers can interpret ratings in the intended direction (i.e.,…
Descriptors: Rating Scales, Testing Problems, Item Response Theory, Models
Zeyuan Jing – ProQuest LLC, 2023
This dissertation presents a comprehensive review of the evolution of DIF analysis within educational measurement from the 1980s to the present. The review elucidates the concept of DIF, particularly emphasizing the crucial role of grouping for exhibiting DIF. Then, the dissertation introduces an innovative modification to the newly developed…
Descriptors: Item Response Theory, Algorithms, Measurement, Test Bias
Miguel A. García-Pérez – Educational and Psychological Measurement, 2024
A recurring question regarding Likert items is whether the discrete steps that this response format allows represent constant increments along the underlying continuum. This question appears unsolvable because Likert responses carry no direct information to this effect. Yet, any item administered in Likert format can identically be administered…
Descriptors: Likert Scales, Test Construction, Test Items, Item Analysis
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2024
Rapid guessing (RG) is a form of non-effortful responding that is characterized by short response latencies. This construct-irrelevant behavior has been shown in previous research to bias inferences concerning measurement properties and scores. To mitigate these deleterious effects, a number of response time threshold scoring procedures have been…
Descriptors: Reaction Time, Scores, Item Response Theory, Guessing (Tests)
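Response time threshold scoring, as described in this abstract, flags responses whose latency falls below a cutoff as rapid guesses and excludes them from scoring. A minimal sketch under stated assumptions (the fixed 3-second cutoff and function names are illustrative; published procedures typically derive item-specific thresholds from the response-time distribution):

```python
def flag_rapid_guesses(response_times, threshold=3.0):
    # A latency (in seconds) below the threshold is treated as a rapid
    # guess. The fixed 3.0 s cutoff is an illustrative assumption.
    return [t < threshold for t in response_times]

def effortful_score(responses, response_times, threshold=3.0):
    # Score only effortful responses: rapid guesses are filtered out
    # rather than scored as incorrect.
    flags = flag_rapid_guesses(response_times, threshold)
    kept = [r for r, rapid in zip(responses, flags) if not rapid]
    return sum(kept), len(kept)  # (raw score, number of items scored)
```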
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally distinguish between true scores and error scores, the responses are clearly linked to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
Sa'ar Karp Gershon; Ella Anghel; Giora Alexandron – Education and Information Technologies, 2024
For Massive Open Online Courses to have trustworthy credentials, assessments in these courses must be valid, reliable, and fair. Item Response Theory provides a robust approach to evaluating these properties. However, for this theory to be applicable, certain properties of the assessment items should be met, among them that item difficulties are…
Descriptors: MOOCs, Item Response Theory, Physics, Advanced Placement Programs
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
Sooyong Lee; Suhwa Han; Seung W. Choi – Journal of Educational Measurement, 2024
Research has shown that multiple-indicator multiple-cause (MIMIC) models can result in inflated Type I error rates in detecting differential item functioning (DIF) when the assumption of equal latent variance is violated. This study explains how the violation of the equal variance assumption adversely impacts the detection of nonuniform DIF and…
Descriptors: Factor Analysis, Bayesian Statistics, Test Bias, Item Response Theory
Uk Hyun Cho – ProQuest LLC, 2024
The present study investigates the influence of multidimensionality on linking and equating in a unidimensional IRT. Two hypothetical multidimensional scenarios are explored under a nonequivalent group common-item equating design. The first scenario examines test forms designed to measure multiple constructs, while the second scenario examines a…
Descriptors: Item Response Theory, Classification, Correlation, Test Format
Stefanie A. Wind; Benjamin Lugu – Applied Measurement in Education, 2024
Researchers who use measurement models for evaluation purposes often select models with stringent requirements, such as Rasch models, which are parametric. Mokken Scale Analysis (MSA) offers a theory-driven nonparametric modeling approach that may be more appropriate for some measurement applications. Researchers have discussed using MSA as a…
Descriptors: Item Response Theory, Data Analysis, Simulation, Nonparametric Statistics
Jiaying Xiao – ProQuest LLC, 2024
Multidimensional Item Response Theory (MIRT) has been widely used in educational and psychological assessments. It estimates multiple constructs simultaneously and models the correlations among latent constructs. While it provides more accurate results, the unidimensional IRT model is still dominant in real applications. One major reason is that…
Descriptors: Item Response Theory, Algorithms, Computation, Efficiency
The Impact of Measurement Noninvariance across Time and Group in Longitudinal Item Response Modeling
In-Hee Choi – Asia Pacific Education Review, 2024
Longitudinal item response data often exhibit two types of measurement noninvariance: the noninvariance of item parameters between subject groups and that of item parameters across multiple time points. This study proposes a comprehensive approach to the simultaneous modeling of both types of measurement noninvariance in terms of longitudinal item…
Descriptors: Longitudinal Studies, Item Response Theory, Growth Models, Error of Measurement
Murat Tekin; Çetin Toraman; Aysen Melek Aytug Kosan – International Journal of Assessment Tools in Education, 2024
In the present study, we examined the psychometric properties of the data obtained from the Commitment to Profession of Medicine Scale (CPMS) with 4-point, 5-point, 6-point, and 7-point response sets based on Item Response Theory (IRT). A total of 2150 medical students from 16 different universities participated in the study. The participants were…
Descriptors: Psychometrics, Medical Students, Likert Scales, Data Collection
