Publication Date
In 2024: 0
Since 2023: 4
Since 2020 (last 5 years): 6
Since 2015 (last 10 years): 20
Since 2005 (last 20 years): 36
Descriptor
Computer Software: 41
Item Response Theory: 21
Models: 16
Statistical Analysis: 15
Computation: 12
Measurement Techniques: 10
Evaluation Methods: 8
Test Items: 8
Bayesian Statistics: 7
Monte Carlo Methods: 7
Correlation: 6
Source
Educational and Psychological Measurement: 41
Author
Raykov, Tenko: 6
Luo, Yong: 3
Marcoulides, George A.: 3
Wang, Wen-Chung: 3
DeMars, Christine E.: 2
Algina, James: 1
Ames, Allison J.: 1
Barnette, J. Jackson: 1
Bernaards, Coen A.: 1
Calvocoressi, Lisa: 1
Chang, Chi: 1
Publication Type
Journal Articles: 41
Reports - Research: 21
Reports - Descriptive: 15
Reports - Evaluative: 5
Tests/Questionnaires: 1
Education Level
Elementary Education: 3
Grade 4: 2
Higher Education: 2
Postsecondary Education: 2
Secondary Education: 2
Elementary Secondary Education: 1
Grade 2: 1
Grade 3: 1
Grade 5: 1
Grade 6: 1
Grade 8: 1
Audience
Practitioners: 1
Students: 1
Teachers: 1
Location
Germany: 1
Hong Kong: 1
Kentucky: 1
Saudi Arabia: 1
Taiwan: 1
United States: 1
Assessments and Surveys
Trends in International Mathematics and Science Study: 3
Program for International Student Assessment: 2
Computer Anxiety Scale: 1
Self Perception Profile for…: 1
Students Evaluation of…: 1
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Sideridis, Georgios; Tsaousis, Ioannis; Ghamdi, Hanan – Educational and Psychological Measurement, 2023
The purpose of the present study was to provide the means to evaluate the "interval-scaling" assumption that governs the use of parametric statistics and continuous data estimators in self-report instruments that utilize Likert-type scaling. Using simulated and real data, the methodology to test for this important assumption is evaluated…
Descriptors: Intervals, Scaling, Computer Software, Likert Scales
Raykov, Tenko; Pusic, Martin – Educational and Psychological Measurement, 2023
This note is concerned with evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters is outlined that is developed within the framework of latent variable modeling. The method permits educational, behavioral, biomedical, and marketing…
Descriptors: Item Analysis, Measurement Techniques, Computer Software, Intervals
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed that evaluates the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Nagy, Gabriel; Ulitzsch, Esther – Educational and Psychological Measurement, 2022
Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Predictor Variables, Classification
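The response time-based procedures the abstract refers to can be illustrated, at their simplest, as a fixed-threshold rule that flags rapid responses as likely disengaged. This is a minimal sketch of that baseline idea only; the 3-second threshold and function name are illustrative assumptions, not taken from the article.

```python
def flag_disengaged(response_times, threshold=3.0):
    """Flag item responses as likely disengaged (rapid guessing)
    when the response time in seconds falls below a fixed threshold.
    The 3-second threshold is an illustrative assumption."""
    return [rt < threshold for rt in response_times]

# Example: two rapid responses and three plausibly engaged ones.
times = [1.2, 14.8, 2.9, 22.5, 7.1]
flags = flag_disengaged(times)
```

In practice, thresholds are often set per item (e.g., from the response-time distribution) rather than globally, which is part of what model-based approaches such as the hierarchical ones discussed in the article formalize.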
Evaluation of Variance Inflation Factors in Regression Models Using Latent Variable Modeling Methods
Marcoulides, Katerina M.; Raykov, Tenko – Educational and Psychological Measurement, 2019
A procedure that can be used to evaluate the variance inflation factors and tolerance indices in linear regression models is discussed. The method permits both point and interval estimation of these factors and indices associated with explanatory variables considered for inclusion in a regression model. The approach makes use of popular latent…
Descriptors: Regression (Statistics), Statistical Analysis, Computation, Computer Software
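For context on the quantity being estimated: the variance inflation factor for a predictor is 1/(1 − R²), where R² comes from regressing that predictor on the others. The sketch below shows the two-predictor special case, where R² is just the squared Pearson correlation between the predictors; it is a plain computational illustration, not the latent variable modeling procedure the article develops.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def vif_two_predictors(x1, x2):
    """VIF = 1 / (1 - r^2) for either predictor in a
    two-predictor linear regression."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)
```

With uncorrelated predictors the VIF is exactly 1; it grows without bound as the predictors approach collinearity.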
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
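As background to the abstract: plausible values are random draws from each examinee's posterior ability distribution, and a population statistic is computed within each draw and then averaged across draws. The sketch below assumes normal posteriors for simplicity; the function name and that distributional assumption are illustrative, not from the article.

```python
import random
import statistics

def population_mean_from_pvs(posteriors, n_pv=5, seed=42):
    """Estimate the population mean ability from plausible values.

    posteriors: list of (posterior_mean, posterior_sd) tuples, one per
    examinee, assumed normal (an illustrative simplification).  For each
    of n_pv draws, one plausible value is sampled per examinee; the
    population mean is computed within each draw, then averaged.
    """
    rng = random.Random(seed)
    draw_means = []
    for _ in range(n_pv):
        pvs = [rng.gauss(m, s) for m, s in posteriors]
        draw_means.append(statistics.mean(pvs))
    return statistics.mean(draw_means)
```

Note that averaging the five plausible values per person into a single point estimate of that person's ability is exactly the misuse the literature warns against; the draws are meant for population-level inference.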
Isiordia, Marilu; Ferrer, Emilio – Educational and Psychological Measurement, 2018
A first-order latent growth model assesses change in an unobserved construct from a single score and is commonly used across different domains of educational research. However, examining change using a set of multiple response scores (e.g., scale items) affords researchers several methodological benefits not possible when using a single score. A…
Descriptors: Educational Research, Statistical Analysis, Models, Longitudinal Studies
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Luo, Yong; Jiao, Hong – Educational and Psychological Measurement, 2018
Stan is a Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, no source has systematically provided Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models, including the…
Descriptors: Bayesian Statistics, Item Response Theory, Probability, Computer Software
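The article's Stan code is not reproduced here, but the response function behind one of the standard IRT models it covers, the two-parameter logistic (2PL), is compact enough to state directly: P(correct) = 1 / (1 + exp(−a(θ − b))). A minimal sketch, with illustrative function and parameter names:

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of a
    correct response given ability theta, item discrimination a,
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

When ability equals item difficulty (theta == b), the probability is exactly 0.5, and higher discrimination a makes the curve steeper around that point.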
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
Fikis, David R. J.; Oshima, T. C. – Educational and Psychological Measurement, 2017
Purification of the test has been a well-accepted procedure in enhancing the performance of tests for differential item functioning (DIF). As defined by Lord, purification requires reestimation of ability parameters after removing DIF items before conducting the final DIF analysis. IRTPRO 3 is a recently updated program for analyses in item…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Computer Software
McNeish, Daniel – Educational and Psychological Measurement, 2017
In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
Descriptors: Models, Bayesian Statistics, Statistical Analysis, Computer Software