Publication Date
In 2024: 8
Since 2023: 24
Since 2020 (last 5 years): 59
Since 2015 (last 10 years): 132
Since 2005 (last 20 years): 233
Descriptor
Models: 311
Item Response Theory: 114
Statistical Analysis: 79
Test Items: 64
Computation: 63
Goodness of Fit: 52
Comparative Analysis: 51
Factor Analysis: 50
Simulation: 50
Correlation: 46
Error of Measurement: 43
Source
Educational and Psychological Measurement: 311
Author
Raykov, Tenko: 18
Marcoulides, George A.: 11
Wang, Wen-Chung: 7
DeMars, Christine E.: 5
Harring, Jeffrey R.: 5
Cai, Li: 4
Dimitrov, Dimiter M.: 4
Engelhard, George, Jr.: 4
Huang, Hung-Yu: 4
Lewis, Robert A.: 4
Liu, Ren: 4
Publication Type
Journal Articles: 289
Reports - Research: 208
Reports - Evaluative: 55
Reports - Descriptive: 24
Information Analyses: 3
Speeches/Meeting Papers: 3
Numerical/Quantitative Data: 1
Location
Germany: 8
Australia: 5
Taiwan: 4
United States: 4
California: 2
Canada: 2
China: 2
Netherlands: 2
Saudi Arabia: 2
South Korea: 2
United Kingdom: 2
Nájera, Pablo; Sorrel, Miguel A.; Abad, Francisco José – Educational and Psychological Measurement, 2019
Cognitive diagnosis models (CDMs) are latent class multidimensional statistical models that help classify people accurately by using a set of discrete latent variables, commonly referred to as attributes. These models require a Q-matrix that indicates the attributes involved in each item. A potential problem is that the Q-matrix construction…
Descriptors: Matrices, Statistical Analysis, Models, Classification
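To make the Q-matrix idea concrete, here is a minimal, hypothetical sketch (the matrix and item-attribute assignments are invented for illustration, not taken from the study above). Rows are items, columns are attributes, and a 1 indicates that the attribute is involved in the item:

```python
import numpy as np

# Hypothetical Q-matrix for a 5-item test measuring 3 attributes
# (rows = items, columns = attributes); a 1 means the attribute
# is required by the item.
Q = np.array([
    [1, 0, 0],  # item 1 requires attribute 1 only
    [0, 1, 0],  # item 2 requires attribute 2 only
    [0, 0, 1],  # item 3 requires attribute 3 only
    [1, 1, 0],  # item 4 requires attributes 1 and 2
    [1, 0, 1],  # item 5 requires attributes 1 and 3
])

# Under a conjunctive CDM such as the DINA model, an examinee with
# attribute profile alpha is expected to answer item j correctly only
# if they possess every attribute the Q-matrix lists for that item.
alpha = np.array([1, 1, 0])                      # masters attributes 1 and 2
expected = np.all(Q <= alpha, axis=1).astype(int)
print(expected)  # → [1 1 0 1 0]: items 1, 2, and 4 expected correct
```

A misspecified entry in Q changes these expected response patterns, which is why Q-matrix construction errors propagate into classification errors.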
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2019
With the development of technology-enhanced learning platforms, eye-tracking biometric indicators can be recorded simultaneously with students' item responses. In the current study, visual fixation, an essential eye-tracking indicator, is modeled to reflect the degree of test engagement when a test taker solves a set of test questions. Three…
Descriptors: Test Items, Eye Movements, Models, Regression (Statistics)
Wang, Jue; Engelhard, George, Jr. – Educational and Psychological Measurement, 2019
The purpose of this study is to explore the use of unfolding models for evaluating the quality of ratings obtained in rater-mediated assessments. Two different judgmental processes can be used to conceptualize ratings: impersonal judgments and personal preferences. Impersonal judgments are typically expected in rater-mediated assessments, and…
Descriptors: Evaluative Thinking, Preferences, Evaluators, Models
Park, Minjeong; Wu, Amery D. – Educational and Psychological Measurement, 2019
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing a variety of individuals' behaviors involved in item responding. This study employed IRTree models to investigate response styles, which are individuals' tendencies to…
Descriptors: Item Response Theory, Models, Likert Scales, Response Style (Tests)
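As a concrete illustration of the IRTree idea, the sketch below decomposes a 5-point Likert response into three binary pseudo-items (midpoint selection, direction, and extremity). This particular three-node coding follows one frequently used IRTree specification; it is an illustrative assumption, not necessarily the decomposition used in the study above:

```python
# Decompose a 5-point Likert response into binary pseudo-items:
# node 1: midpoint selection (a midpoint response style indicator),
# node 2: direction (disagree vs. agree side),
# node 3: extremity (an extreme response style indicator).
# Nodes that are never reached are coded as missing (None).
def irtree_pseudo_items(response):
    """Map a response in 1..5 to (midpoint, direction, extremity)."""
    midpoint = 1 if response == 3 else 0
    if midpoint:
        return (1, None, None)                 # later nodes not reached
    direction = 1 if response > 3 else 0       # 1 = agree side
    extremity = 1 if response in (1, 5) else 0
    return (0, direction, extremity)

for r in range(1, 6):
    print(r, irtree_pseudo_items(r))
# → 1 (0, 0, 1)   2 (0, 0, 0)   3 (1, None, None)
#   4 (0, 1, 0)   5 (0, 1, 1)
```

Each pseudo-item can then be modeled with a standard binary IRT model, letting the midpoint and extremity nodes capture response styles separately from the substantive trait.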
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-Sadaawi, Abdullah – Educational and Psychological Measurement, 2019
The purpose of the present study was to apply the methodology developed by Raykov on modeling item-specific variance for the measurement of internal consistency reliability with longitudinal data. Participants were a randomly selected sample of 500 individuals who took a professional qualifications test in Saudi Arabia over four different…
Descriptors: Test Reliability, Test Items, Longitudinal Studies, Foreign Countries
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
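The contrast between plausible values and point estimates can be shown with a minimal simulation sketch (a toy conjugate-normal setup invented for illustration, not the study's design). EAP point estimates are shrunken toward the prior mean and understate population variance, whereas plausible values, as draws from each person's posterior, recover it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
theta = rng.normal(0.0, 1.0, n)           # true latent abilities, variance 1
obs = theta + rng.normal(0.0, 1.0, n)     # noisy observed scores, error variance 1

# With a N(0, 1) prior and error variance 1, each person's posterior
# is N(obs / 2, 1 / 2).  EAP point estimates are the posterior means;
# a plausible value is a random draw from that posterior.
eap = obs / 2.0
pv = eap + rng.normal(0.0, np.sqrt(0.5), n)   # one plausible value per person

print(np.var(eap))   # ~0.5: shrunken, understates the true variance of 1
print(np.var(pv))    # ~1.0: recovers the population variance
```

This is why plausible values are preferred for population-level statistics, while the question of how many are needed for accurate latent-variable point estimation, as the abstract notes, is a separate issue.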
Ferrando, Pere Joan; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Many psychometric measures yield data that are compatible with (a) an essentially unidimensional factor analysis solution and (b) a correlated-factor solution. Deciding which of these structures is the most appropriate and useful is of considerable importance, and various procedures have been proposed to help in this decision. The only fully…
Descriptors: Validity, Models, Correlation, Factor Analysis
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that accounts for method effects stemming from two method sources, where one source functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
List, Marit Kristine; Köller, Olaf; Nagy, Gabriel – Educational and Psychological Measurement, 2019
Tests administered in studies of student achievement often have a certain number of not-reached items (NRIs). The propensity for NRIs may depend on the proficiency measured by the test and on additional covariates. This article proposes a semiparametric model to study such relationships. Our model extends Glas and Pimentel's item response theory…
Descriptors: Educational Assessment, Item Response Theory, Multivariate Analysis, Test Items
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
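The core of (classical) parallel analysis can be sketched in a few lines: retain as many factors as there are observed eigenvalues exceeding the average eigenvalues of random data of the same size. This is a minimal Horn-style sketch on invented one-factor data, not the revised parallel analysis procedure the study evaluates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6

# Simulated continuous data with one common factor behind all six variables.
factor = rng.normal(size=(n, 1))
data = factor + rng.normal(size=(n, p))

def eigenvalues(x):
    """Eigenvalues of the correlation matrix, largest first."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

obs = eigenvalues(data)

# Horn's parallel analysis: average the eigenvalues of many random data
# sets of the same dimensions, then retain factors whose observed
# eigenvalues exceed that random baseline.
reps = 50
rand = np.mean([eigenvalues(rng.normal(size=(n, p))) for _ in range(reps)],
               axis=0)
n_factors = int(np.sum(obs > rand))
print(n_factors)  # → 1: a single factor is retained here
```

Revised parallel analysis for dichotomous items replaces this continuous setup with item-level simulation under an IRT model, which is where the choice of two- versus three-parameter models and the shape of the ability distribution, as the abstract notes, comes into play.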
Dueber, David M.; Love, Abigail M. A.; Toland, Michael D.; Turner, Trisha A. – Educational and Psychological Measurement, 2019
One of the most frequently cited methodological issues concerns the response format, which is traditionally a single-response Likert format. Therefore, our study aims to elucidate and illustrate an alternative response format and analytic technique, Thurstonian item response theory (IRT), for analyzing data from surveys using an alternate response…
Descriptors: Item Response Theory, Surveys, Measurement Techniques, Psychometrics
Komboz, Basil; Strobl, Carolin; Zeileis, Achim – Educational and Psychological Measurement, 2018
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Descriptors: Item Response Theory, Models, Tests, Measurement
Matlock Cole, Ki Lynn; Turner, Ronna C.; Gitchel, W. Dent – Educational and Psychological Measurement, 2018
The generalized partial credit model (GPCM) is often used for polytomous data; however, the nominal response model (NRM) allows for the investigation of how adjacent categories may discriminate differently when items are positively or negatively worded. Ten items from three different self-reported scales were used (anxiety, depression, and…
Descriptors: Item Response Theory, Anxiety, Depression (Psychology), Self Evaluation (Individuals)
Is the Factor Observed in Investigations on the Item-Position Effect Actually the Difficulty Factor?
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models