Showing 151 to 165 of 1,249 results
Peer reviewed
Direct link
Wyse, Adam E. – Educational and Psychological Measurement, 2021
An essential question when computing test-retest and alternate forms reliability coefficients is how many days there should be between tests. This article uses data from reading and math computerized adaptive tests to explore how the number of days between tests impacts alternate forms reliability coefficients. Results suggest that the highest…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Reliability, Reading Tests
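As a rough illustration of the design question Wyse examines, the sketch below (synthetic data, not the article's) estimates alternate-forms reliability as the Pearson correlation between two forms within bins of days between administrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: scores on two test forms, with more trait
# change -- and hence lower correlation -- as the gap between
# administrations grows.
n = 500
gap_days = rng.integers(1, 60, size=n)           # days between the two forms
trait = rng.normal(0, 1, size=n)
drift = rng.normal(0, 0.02, size=n) * gap_days   # trait change grows with gap
form_a = trait + rng.normal(0, 0.5, size=n)
form_b = trait + drift + rng.normal(0, 0.5, size=n)

# Alternate-forms reliability estimated as the Pearson correlation
# between the two forms, computed separately for gap-day bins.
for lo, hi in [(1, 15), (15, 30), (30, 60)]:
    mask = (gap_days >= lo) & (gap_days < hi)
    r = np.corrcoef(form_a[mask], form_b[mask])[0, 1]
    print(f"{lo:>2}-{hi:<2} days: r = {r:.3f}, n = {mask.sum()}")
```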
Peer reviewed
Direct link
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
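For readers unfamiliar with IRTree models, the sketch below shows one common (Böckenholt-style) decomposition of a 5-point response into binary pseudo-items for midpoint, direction, and extremity nodes. The article's multidimensional model, with person-level covariates on the node traits, is built on top of such a coding; the exact coding here is an assumption, not necessarily the authors'.

```python
# One common IRTree coding of a 5-point Likert response into three binary
# pseudo-items: a midpoint node, a direction node, and an extremity node.
# A response-style IRT model is then fit to these pseudo-items; this
# sketch shows only the decomposition step.
def decompose(resp: int) -> dict:
    """Map a response in {1,...,5} to node pseudo-items (None = not reached)."""
    midpoint = 1 if resp == 3 else 0
    direction = None if resp == 3 else (1 if resp > 3 else 0)
    extreme = None if resp == 3 else (1 if resp in (1, 5) else 0)
    return {"midpoint": midpoint, "direction": direction, "extreme": extreme}

for r in range(1, 6):
    print(r, decompose(r))
```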
Peer reviewed
Direct link
Cassiday, Kristina R.; Cho, Youngmi; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Simulation studies involving mixture models inevitably aggregate parameter estimates and other output across numerous replications. A primary issue that arises in these methodological investigations is label switching. The current study compares several label switching corrections that are commonly used when dealing with mixture models. A growth…
Descriptors: Probability, Models, Simulation, Mathematics
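One widely used relabeling correction, of the kind the study compares, matches each replication's estimated components to reference parameters by minimizing total distance; a minimal sketch with made-up values follows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# A simple relabeling correction for label switching across replications:
# match each replication's estimated component means to reference values
# (e.g., the generating parameters) by minimizing total squared distance.
true_means = np.array([[0.0, 0.0], [3.0, 1.0], [6.0, 2.0]])   # k=3 components
est_means = np.array([[6.1, 2.2], [0.2, -0.1], [2.9, 0.8]])   # switched labels

cost = ((est_means[:, None, :] - true_means[None, :, :]) ** 2).sum(axis=2)
rows, cols = linear_sum_assignment(cost)   # est component rows[i] -> label cols[i]

relabeled = np.empty_like(est_means)
relabeled[cols] = est_means[rows]
print(relabeled)   # components back in the reference order
```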
Peer reviewed
Direct link
Kim, Eunsook; von der Embse, Nathaniel – Educational and Psychological Measurement, 2021
Although collecting data from multiple informants is highly recommended, methods to model the congruence and incongruence between informants are limited. Bauer and colleagues suggested the trifactor model that decomposes the variances into common factor, informant perspective factors, and item-specific factors. This study extends their work to the…
Descriptors: Probability, Models, Statistical Analysis, Congruence (Psychology)
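A toy simulation of the variance decomposition behind a trifactor model, with invented loadings, showing how a single item's variance splits into common, informant-perspective, and item-specific parts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each observed response is driven by a common (trait) factor shared by
# all informants, an informant-perspective factor, and an item-specific
# factor, plus error. Loadings are made-up illustration values.
n = 10_000
common = rng.normal(size=n)        # trait shared across informants
perspective = rng.normal(size=n)   # e.g., the parent-report perspective
item_specific = rng.normal(size=n)
error = rng.normal(size=n)

lam_c, lam_p, lam_i = 0.7, 0.4, 0.3
y = lam_c * common + lam_p * perspective + lam_i * item_specific + 0.5 * error

total = lam_c**2 + lam_p**2 + lam_i**2 + 0.25
print("share common     :", lam_c**2 / total)
print("share perspective:", lam_p**2 / total)
print("share item       :", lam_i**2 / total)
print("empirical var(y) :", y.var())   # should be close to `total`
```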
Peer reviewed
Direct link
Raykov, Tenko; Bluemke, Matthias – Educational and Psychological Measurement, 2021
A widely applicable procedure of examining proximity to unidimensionality for multicomponent measuring instruments with multidimensional structure is discussed. The method is developed within the framework of latent variable modeling and allows one to point and interval estimate an explained variance proportion-based index that may be considered a…
Descriptors: Proximity, Measures (Individuals), Models, Statistical Analysis
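The index described is an explained-variance proportion; the sketch below computes one such quantity, the explained common variance (ECV) from a bifactor-style loading matrix, on the assumption that it captures the same idea as the authors' index.

```python
import numpy as np

# Explained common variance: the share of common variance attributable
# to the general factor. Values near 1 indicate proximity to
# unidimensionality. Loadings are made-up illustration values.
general = np.array([0.7, 0.6, 0.65, 0.5, 0.55, 0.6])    # general-factor loadings
specific = np.array([0.3, 0.4, 0.2, 0.45, 0.35, 0.25])  # group-factor loadings

ecv = (general**2).sum() / ((general**2).sum() + (specific**2).sum())
print(f"ECV = {ecv:.3f}")   # closer to 1 -> closer to unidimensionality
```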
Peer reviewed
Direct link
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
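The core idea of regression-based continuous norming, stripped of the published method's refinements, is to model norm scores as a smooth function of raw score and age across the whole sample rather than within discrete age brackets; a minimal least-squares sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fit the norm score as a low-order polynomial in raw score and age.
# The published SPCN method is more elaborate; this is only the core idea.
n = 2000
age = rng.uniform(6, 12, n)
raw = np.clip(5 + 2.5 * age + rng.normal(0, 4, n), 0, None)   # raw test score
t_score = 50 + 10 * (raw - (5 + 2.5 * age)) / 4               # "true" norm score

# Design matrix with polynomial terms in raw score and age.
X = np.column_stack([np.ones(n), raw, raw**2, age, age**2, raw * age])
coef, *_ = np.linalg.lstsq(X, t_score, rcond=None)

x_new = np.array([1.0, 25.0, 625.0, 8.0, 64.0, 200.0])        # raw=25, age=8
print("predicted norm score:", x_new @ coef)
```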
Peer reviewed
Direct link
Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021
Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…
Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
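The ipsativity problem is easy to demonstrate: under classical scoring of ranked forced-choice blocks, every respondent distributes the same number of rank points, so trait scores sum to a constant. A small sketch:

```python
import numpy as np

# In each block a respondent ranks items measuring different traits, so
# everyone earns the same total rank points; sums cannot be compared
# across persons.
rng = np.random.default_rng(3)
n_persons, n_blocks, n_traits = 4, 10, 3

scores = np.zeros((n_persons, n_traits))
for p in range(n_persons):
    for b in range(n_blocks):
        order = rng.permutation(n_traits)        # person's preference order
        scores[p, order] += np.array([2, 1, 0])  # rank points for this block

print(scores)
print("row sums:", scores.sum(axis=1))   # identical for everyone: ipsative
```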
Peer reviewed
Direct link
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2021
Unit-weight sum scores (UWSSs) are routinely used as estimates of factor scores on the basis of solutions obtained with the nonlinear exploratory factor analysis (EFA) model for ordered-categorical responses. Theoretically, this practice results in a loss of information and accuracy, and is expected to lead to biased estimates. However, the…
Descriptors: Scores, Factor Analysis, Automation, Fidelity
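The comparison at issue can be sketched in a few lines: simulate a one-factor model with ordered-categorical items and check how closely unit-weight sum scores track the generating factor (illustrative values only, not the article's conditions):

```python
import numpy as np

rng = np.random.default_rng(4)

# One-factor model with ordered-categorical items; loadings and
# thresholds are made-up illustration values.
n, p = 2000, 8
theta = rng.normal(size=n)                     # latent factor
loadings = rng.uniform(0.4, 0.8, size=p)
latent_resp = theta[:, None] * loadings + rng.normal(size=(n, p))
items = np.digitize(latent_resp, bins=[-1.0, 0.0, 1.0])   # 4 ordered categories

sum_score = items.sum(axis=1)
r = np.corrcoef(sum_score, theta)[0, 1]
print(f"corr(sum score, factor) = {r:.3f}")   # often high despite unit weights
```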
Peer reviewed
Direct link
Wind, Stefanie A.; Schumacker, Randall E. – Educational and Psychological Measurement, 2021
Researchers frequently use Rasch models to analyze survey responses because these models provide accurate parameter estimates for items and examinees when there are missing data. However, researchers have not fully considered how missing data affect the accuracy of dimensionality assessment in Rasch analyses such as principal components analysis…
Descriptors: Item Response Theory, Data, Factor Analysis, Accuracy
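The dimensionality check in question inspects eigenvalues of the correlation matrix of standardized Rasch residuals. The sketch below computes these with parameters treated as known and complete data; the article's focus is what happens when responses are missing, which this sketch does not cover.

```python
import numpy as np

rng = np.random.default_rng(5)

# With person abilities (theta) and item difficulties (b) in hand, form
# standardized residuals z = (x - P) / sqrt(P(1 - P)) and inspect the
# eigenvalues of their correlation matrix.
n, p = 1000, 12
theta = rng.normal(size=n)
b = np.linspace(-1.5, 1.5, p)
prob = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.uniform(size=(n, p)) < prob).astype(float)

z = (x - prob) / np.sqrt(prob * (1 - prob))
eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))[::-1]
print("largest residual eigenvalues:", np.round(eigvals[:3], 2))
```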
Peer reviewed
Direct link
Roozenbeek, Jon; Maertens, Rakoen; McClanahan, William; van der Linden, Sander – Educational and Psychological Measurement, 2021
Online misinformation is a pervasive global problem. In response, psychologists have recently explored the theory of psychological inoculation: If people are preemptively exposed to a weakened version of a misinformation technique, they can build up cognitive resistance. This study addresses two unanswered methodological questions about a widely…
Descriptors: Games, Intervention, Scores, Pretests Posttests
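Such studies rest on a pretest-posttest design; purely as a placeholder for the general analytic shape (not the authors' analysis or data), a paired comparison with synthetic scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Paired scores before and after the intervention, a paired t test, and
# a within-person effect size. Data are synthetic placeholders.
pre = rng.normal(4.0, 1.0, size=200)           # misinformation ratings, pre
post = pre - rng.normal(0.5, 0.8, size=200)    # lower = more resistant, post

t, p_val = stats.ttest_rel(pre, post)
d = (pre - post).mean() / (pre - post).std(ddof=1)   # Cohen's d, paired data
print(f"t = {t:.2f}, p = {p_val:.4f}, d = {d:.2f}")
```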
Peer reviewed
Direct link
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021
This study presents a latent (item response theory-like) framework of a recently developed classical approach to test scoring, equating, and item analysis, referred to as the "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…
Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory
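As a rough illustration of difficulty-weighted scoring onto a 0-1 scale, the sketch below weights items by the proportion answering incorrectly; whether these weights match the authors' exact definition is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Difficulty-weighted scoring in the spirit of the D-scoring method:
# harder items (lower proportion correct) contribute more to the score.
x = (rng.uniform(size=(500, 10)) < 0.6).astype(float)   # scored responses
p_correct = x.mean(axis=0)           # classical item difficulty (p-values)
delta = 1.0 - p_correct              # weight: proportion answering incorrectly

d_score = (x * delta).sum(axis=1) / delta.sum()   # bounded in [0, 1]
print("first five D-scores:", np.round(d_score[:5], 3))
```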
Peer reviewed
Direct link
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability
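Per-item time on task and revisit counts of the kind analyzed here can be recovered from a navigation log; the event format below is hypothetical, not the article's data structure.

```python
from collections import defaultdict

# Each event records when a test taker lands on an item; time on an item
# runs until the next event. A sentinel marks the end of the session.
log = [  # (timestamp in seconds, item id)
    (0, "Q1"), (40, "Q2"), (95, "Q3"), (130, "Q1"), (160, "Q3"), (200, "END"),
]

time_on = defaultdict(float)
visits = defaultdict(int)
for (t0, item), (t1, _) in zip(log, log[1:]):
    time_on[item] += t1 - t0
    visits[item] += 1

for item in sorted(time_on):
    print(f"{item}: {time_on[item]:>5.0f}s over {visits[item]} visit(s)")
```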
Peer reviewed
Direct link
Pavlov, Goran; Maydeu-Olivares, Alberto; Shi, Dexin – Educational and Psychological Measurement, 2021
We examine the accuracy of p values obtained using the asymptotic mean and variance (MV) correction to the distribution of the sample standardized root mean squared residual (SRMR) proposed by Maydeu-Olivares to assess the exact fit of SEM models. In a simulation study, we found that under normality, the MV-corrected SRMR statistic provides…
Descriptors: Structural Equation Models, Goodness of Fit, Simulation, Error of Measurement
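For reference, one common correlation-metric form of the sample SRMR, the statistic whose reference distribution the MV correction adjusts (the correction itself is not reproduced here):

```python
import numpy as np

# Root mean square of the residuals between observed and model-implied
# correlations over the unique off-diagonal entries.
def srmr(sample_corr: np.ndarray, implied_corr: np.ndarray) -> float:
    p = sample_corr.shape[0]
    idx = np.tril_indices(p, k=-1)            # unique off-diagonal residuals
    resid = sample_corr[idx] - implied_corr[idx]
    return float(np.sqrt(np.mean(resid**2)))

S = np.array([[1.0, 0.5, 0.4], [0.5, 1.0, 0.35], [0.4, 0.35, 1.0]])
Sigma = np.array([[1.0, 0.45, 0.45], [0.45, 1.0, 0.45], [0.45, 0.45, 1.0]])
print(f"SRMR = {srmr(S, Sigma):.4f}")
```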
Peer reviewed
Direct link
Thompson, Yutian T.; Song, Hairong; Shi, Dexin; Liu, Zhengkui – Educational and Psychological Measurement, 2021
Conventional approaches for selecting a reference indicator (RI) could lead to misleading results in testing for measurement invariance (MI). Several newer quantitative methods have been available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item to…
Descriptors: Measurement, Statistical Analysis, Selection, Comparative Analysis
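Why RI selection matters can be seen in miniature: within each group, loadings are identified only up to the scale set by the RI, so what transfers across groups are ratios to the RI. A noninvariant RI makes even truly invariant items appear noninvariant (made-up loadings below):

```python
import numpy as np

# Group "true" loadings; item 3 (index 2) is the only noninvariant item.
lam_g1 = np.array([0.8, 0.6, 0.7, 0.5])
lam_g2 = np.array([0.8, 0.6, 0.9, 0.5])

# With an invariant RI, only item 3's ratio disagrees across groups;
# with item 3 as RI, every other item's ratio disagrees.
for ri in range(4):
    ratio_diff = lam_g1 / lam_g1[ri] - lam_g2 / lam_g2[ri]
    print(f"RI = item {ri + 1}: max |ratio difference| off RI = "
          f"{np.max(np.abs(np.delete(ratio_diff, ri))):.3f}")
```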
Peer reviewed
Direct link
Liang, Xinya – Educational and Psychological Measurement, 2020
Bayesian structural equation modeling (BSEM) is a flexible tool for the exploration and estimation of sparse factor loading structures; that is, most cross-loading entries are zero and only a few important cross-loadings are nonzero. The current investigation was focused on the BSEM with small-variance normal distribution priors (BSEM-N) for both…
Descriptors: Factor Structure, Bayesian Statistics, Structural Equation Models, Goodness of Fit
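The intuition behind small-variance normal priors can be shown with a toy linear model: at the posterior mode, a N(0, tau^2) prior on a coefficient acts as a ridge penalty with lambda = sigma^2/tau^2, shrinking cross-loading-like coefficients toward zero without fixing them there. This simplification applies the prior to all coefficients, unlike BSEM-N, which reserves small variances for cross-loadings.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy regression standing in for the factor model: one coefficient is a
# "cross-loading" near zero that the small-variance prior should shrink.
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.7, 0.0, 0.05])
sigma = 1.0
y = X @ beta_true + rng.normal(0, sigma, size=n)

tau2 = 0.01                                  # small prior variance
lam = sigma**2 / tau2                        # equivalent ridge penalty
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS:", np.round(beta_ols, 3))
print("MAP (small-variance prior):", np.round(beta_map, 3))
```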