Publication Date
In 2024 | 3 |
Since 2023 | 10 |
Since 2020 (last 5 years) | 27 |
Since 2015 (last 10 years) | 56 |
Since 2005 (last 20 years) | 69 |
Descriptor
Bayesian Statistics | 69 |
Item Response Theory | 31 |
Models | 25 |
Statistical Analysis | 23 |
Monte Carlo Methods | 20 |
Computation | 19 |
Test Items | 19 |
Goodness of Fit | 15 |
Markov Processes | 14 |
Simulation | 14 |
Sample Size | 13 |
Source
Educational and Psychological… | 69 |
Author
Harring, Jeffrey R. | 5 |
Huang, Hung-Yu | 5 |
Man, Kaiwen | 3 |
Wang, Wen-Chung | 3 |
Finch, W. Holmes | 2 |
Fujimoto, Ken A. | 2 |
Jiao, Hong | 2 |
Kamata, Akihito | 2 |
Liang, Xinya | 2 |
Luo, Yong | 2 |
Marcoulides, Katerina M. | 2 |
Publication Type
Journal Articles | 69 |
Reports - Research | 61 |
Reports - Evaluative | 5 |
Reports - Descriptive | 3 |
Location
Taiwan | 3 |
Saudi Arabia | 2 |
China | 1 |
Germany | 1 |
Netherlands | 1 |
Assessments and Surveys
Trends in International… | 2 |
Graduate Record Examinations | 1 |
Program for International… | 1 |
Students Evaluation of… | 1 |
United States Medical… | 1 |
Wechsler Adult Intelligence… | 1 |
Woodcock Johnson Psycho… | 1 |
James Ohisei Uanhoro – Educational and Psychological Measurement, 2024
Accounting for model misspecification in Bayesian structural equation models is an active area of research. We present a uniquely Bayesian approach to misspecification that models the degree of misspecification as a parameter, one akin to the correlation root mean squared residual. The misspecification parameter can be interpreted on its…
Descriptors: Bayesian Statistics, Structural Equation Models, Simulation, Statistical Inference
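The correlation root mean squared residual that this abstract uses as an interpretive anchor can be illustrated with a minimal sketch: it is the square root of the average squared discrepancy between observed and model-implied correlations. The matrices below are hypothetical values for illustration, not data from the paper.

```python
import math

def crmr(observed, implied):
    """Correlation root mean squared residual: square root of the mean
    squared difference between observed and model-implied correlations,
    taken over the unique off-diagonal entries."""
    p = len(observed)
    sq_diffs = [
        (observed[i][j] - implied[i][j]) ** 2
        for i in range(p) for j in range(i + 1, p)
    ]
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))

# Hypothetical observed and model-implied correlation matrices
obs = [[1.0, 0.50, 0.30],
       [0.50, 1.0, 0.40],
       [0.30, 0.40, 1.0]]
imp = [[1.0, 0.45, 0.35],
       [0.45, 1.0, 0.40],
       [0.35, 0.40, 1.0]]
print(round(crmr(obs, imp), 4))
```

A value near zero indicates the model reproduces the observed correlations closely; the paper's contribution is treating this degree of misspecification as a modeled parameter rather than a post hoc summary.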
Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis
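The BIC comparison at issue can be sketched in a few lines. The log-likelihoods and parameter counts below are hypothetical, chosen only to show how BIC's sample-size-dependent penalty can reverse a raw-likelihood ranking between a bifactor and a second-order model.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: -2 log L + k log n (lower is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fits on the same data (n = 500 respondents):
# the bifactor model fits slightly better but spends more parameters.
bic_bifactor = bic(log_likelihood=-2510.0, n_params=36, n_obs=500)
bic_second_order = bic(log_likelihood=-2520.0, n_params=30, n_obs=500)
print(round(bic_bifactor, 1), round(bic_second_order, 1))
```

Here the second-order model wins on BIC despite its worse likelihood, which is exactly the kind of penalty-driven selection the note cautions against treating as routinely dependable.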
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically develop a set of response options that measure different traits and instruct respondents to make judgments among these options in terms of their preference to control the response biases that are commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), the fit of a model is evaluated through known indices, such as X², M2, and root mean square error of approximation (RMSEA) for absolute assessments, as well as Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
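The absolute and comparative indices this abstract lists follow simple closed forms. Below is a minimal sketch of two of them, RMSEA from a model chi-square and AIC from a log-likelihood; the fit values plugged in are hypothetical.

```python
import math

def rmsea(chi_sq, df, n):
    """Root mean square error of approximation from a model chi-square:
    sqrt(max(chi_sq - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

def aic(log_likelihood, n_params):
    """Akaike information criterion: -2 log L + 2k (lower is better)."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical fit: chi-square = 85.3 on df = 50, N = 400
print(round(rmsea(85.3, 50, 400), 4))
```

By convention an RMSEA at or below roughly .05 is read as close fit; note that when the chi-square falls below its degrees of freedom, RMSEA is truncated at zero.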
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Stoevenbelt, Andrea H.; Wicherts, Jelte M.; Flore, Paulette C.; Phillips, Lorraine A. T.; Pietschnig, Jakob; Verschuere, Bruno; Voracek, Martin; Schwabe, Inga – Educational and Psychological Measurement, 2023
When cognitive and educational tests are administered under time limits, tests may become speeded and this may affect the reliability and validity of the resulting test scores. Prior research has shown that time limits may create or enlarge gender gaps in cognitive and academic testing. On average, women complete fewer items than men when a test…
Descriptors: Timed Tests, Gender Differences, Item Response Theory, Correlation
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2023
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response times. Gaze fixations, an essential eye-tracker measure, can be utilized to help detect aberrant testing behavior with improved accuracy beyond using…
Descriptors: Cheating, Reaction Time, Test Items, Responses
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Gonzalez, Oscar – Educational and Psychological Measurement, 2023
When scores are used to make decisions about respondents, it is of interest to estimate classification accuracy (CA), the probability of making a correct decision, and classification consistency (CC), the probability of making the same decision across two parallel administrations of the measure. Model-based estimates of CA and CC computed from the…
Descriptors: Classification, Accuracy, Intervals, Probability
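The two quantities defined in this abstract, classification accuracy (CA) and classification consistency (CC), can be made concrete with a small Monte Carlo sketch under a classical true-score model; the cut score and reliability below are assumed for illustration, not the paper's model-based estimators.

```python
import random

random.seed(1)

def simulate_ca_cc(n=100_000, cut=0.0, reliability=0.8):
    """Monte Carlo sketch of classification accuracy (CA: decision
    matches the true classification) and consistency (CC: decision
    agrees across two parallel administrations), assuming standard
    normal true scores and normal measurement error."""
    err_sd = ((1 - reliability) / reliability) ** 0.5  # true-score SD fixed at 1
    correct = same = 0
    for _ in range(n):
        true = random.gauss(0, 1)
        x1 = true + random.gauss(0, err_sd)   # first administration
        x2 = true + random.gauss(0, err_sd)   # parallel administration
        if (x1 >= cut) == (true >= cut):
            correct += 1
        if (x1 >= cut) == (x2 >= cut):
            same += 1
    return correct / n, same / n

ca, cc = simulate_ca_cc()
print(f"CA = {ca:.3f}, CC = {cc:.3f}")
```

Because each administration adds independent error, agreement between two fallible forms (CC) is generally lower than agreement between one form and the truth (CA), which the simulation reproduces.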
Beauducel, André; Hilger, Norbert – Educational and Psychological Measurement, 2022
In the context of Bayesian factor analysis, it is possible to compute plausible values, which might be used as covariates or predictors or to provide individual scores for the Bayesian latent variables. Previous simulation studies ascertained the validity of mean plausible values by the mean squared difference of the mean plausible values and the…
Descriptors: Bayesian Statistics, Factor Analysis, Prediction, Simulation
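Plausible values, as described here, are posterior draws of a latent variable rather than point estimates. A minimal sketch under the classical normal true-score model (not the paper's Bayesian factor model): for a standardized observed score x and reliability r, the true score is normal with mean r·x and variance r(1 − r).

```python
import random

random.seed(3)

def plausible_values(z_score, reliability, n_draws=5):
    """Posterior draws of a standardized true score given an observed
    z-score, assuming the classical normal model: T | X = x is normal
    with mean r*x and variance r*(1 - r), where r is the reliability."""
    mean = reliability * z_score
    sd = (reliability * (1.0 - reliability)) ** 0.5
    return [random.gauss(mean, sd) for _ in range(n_draws)]

# Five plausible values for a respondent observed at z = 1.2, r = .85
pvs = plausible_values(1.2, 0.85)
print([round(v, 2) for v in pvs])
```

Averaging many such draws recovers the posterior mean (here 0.85 × 1.2 = 1.02), which is the "mean plausible value" whose validity the simulation studies cited in the abstract examine.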
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
Gilholm, Patricia; Mengersen, Kerrie; Thompson, Helen – Educational and Psychological Measurement, 2021
Developmental surveillance tools are used to closely monitor the early development of infants and young children. This study provides a novel implementation of a multidimensional item response model, using Bayesian hierarchical priors, to construct developmental profiles for a small sample of children (N = 115) with sparse data collected through…
Descriptors: Bayesian Statistics, Item Response Theory, Sample Size, Child Development
Mangino, Anthony A.; Finch, W. Holmes – Educational and Psychological Measurement, 2021
In many fields of the social and natural sciences, data are often obtained within a nested structure (e.g., students within schools). To effectively analyze data with such a structure, multilevel models are frequently employed. The present study utilizes a Monte Carlo simulation to compare several novel multilevel classification algorithms…
Descriptors: Prediction, Hierarchical Linear Modeling, Classification, Bayesian Statistics
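The nested structure this abstract targets can be generated directly, which is the first step of any such Monte Carlo study. A minimal sketch, with group count, group size, and intraclass correlation chosen arbitrarily for illustration:

```python
import random

random.seed(7)

def simulate_nested(n_groups=200, n_per_group=25, icc=0.2):
    """Two-level data (e.g., students within schools): each outcome is a
    shared group effect plus individual noise, with variances scaled so
    the intraclass correlation equals `icc` and total variance is 1."""
    rows = []
    for g in range(n_groups):
        group_effect = random.gauss(0.0, icc ** 0.5)
        for _ in range(n_per_group):
            rows.append((g, group_effect + random.gauss(0.0, (1.0 - icc) ** 0.5)))
    return rows

rows = simulate_nested()

# Sanity check: the variance of group means should approach
# icc + (1 - icc) / n_per_group under this generating model.
by_group = {}
for g, y in rows:
    by_group.setdefault(g, []).append(y)
group_means = [sum(v) / len(v) for v in by_group.values()]
grand = sum(group_means) / len(group_means)
var_of_means = sum((m - grand) ** 2 for m in group_means) / (len(group_means) - 1)
print(round(var_of_means, 3))
```

Ignoring this clustering (e.g., fitting a single-level classifier) treats the shared group effect as individual noise, which is the failure mode multilevel algorithms are designed to avoid.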
Levy, Roy; Xia, Yan; Green, Samuel B. – Educational and Psychological Measurement, 2021
A number of psychometricians have suggested that parallel analysis (PA) tends to yield more accurate results in determining the number of factors in comparison with other statistical methods. Nevertheless, all too often PA can suggest an incorrect number of factors, particularly in statistically unfavorable conditions (e.g., small sample sizes and…
Descriptors: Bayesian Statistics, Statistical Analysis, Factor Structure, Probability
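Classical parallel analysis, the baseline method this article evaluates, retains as many factors as there are observed eigenvalues exceeding those of comparable random data. A minimal sketch (Horn's mean-eigenvalue variant, not the authors' Bayesian extension), with hypothetical one-factor data:

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=200):
    """Horn's parallel analysis: count the observed correlation-matrix
    eigenvalues that exceed the mean eigenvalues obtained from random
    normal data of the same dimensions."""
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.zeros(p)
    for _ in range(n_sims):
        noise = rng.standard_normal((n, p))
        rand_eigs += np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    rand_eigs /= n_sims
    return int(np.sum(obs_eigs > rand_eigs))

# Hypothetical data: 300 cases, 6 variables driven by one common factor
factor = rng.standard_normal((300, 1))
data = 0.7 * factor + 0.7 * rng.standard_normal((300, 6))
print(parallel_analysis(data))
```

The statistically unfavorable conditions the abstract mentions (small samples, weak loadings) shrink the gap between observed and random eigenvalues, which is precisely where this deterministic retain/reject rule becomes unreliable.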