Showing 1 to 15 of 30 results
Peer reviewed
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect sizes: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines the measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
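The synthesis step lends itself to standard inverse-variance pooling once each study yields a DIF effect size and its sampling variance. The sketch below is a generic fixed-effect pooling step, not the authors' specific estimator; the numbers are illustrative.

    import numpy as np

    # Hypothetical per-study DIF effect sizes (focal vs. reference group)
    # and their sampling variances, e.g., derived from each study's MGCFA fit.
    effects = np.array([0.12, 0.08, 0.21, 0.05])
    variances = np.array([0.004, 0.006, 0.010, 0.003])

    # Fixed-effect inverse-variance pooling across studies.
    weights = 1.0 / variances
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"pooled DIF effect = {pooled:.3f} (SE = {pooled_se:.3f})")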
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020
The present study uses the Bayesian structural equation modeling (BSEM) methodology for establishing approximate measurement invariance (A-MI), with data from a national examination in Saudi Arabia, as an alternative when strong invariance criteria are not met. We illustrate how to account for the absence of…
Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement
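For context, BSEM approaches to approximate invariance typically replace exact equality constraints with small-variance priors on between-group parameter differences, so group differences are shrunk toward zero rather than fixed at zero. An illustrative specification (the prior variance 0.01 is a common but arbitrary choice):

    \lambda_{jg} - \lambda_{jg'} \sim N(0,\, 0.01), \qquad \nu_{jg} - \nu_{jg'} \sim N(0,\, 0.01)

where \lambda_{jg} and \nu_{jg} are the loading and intercept of item j in group g.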
Peer reviewed
Liang, Xinya; Kamata, Akihito; Li, Ji – Educational and Psychological Measurement, 2020
One important issue in Bayesian estimation is the determination of an effective informative prior. In hierarchical Bayes models, the uncertainty of hyperparameters in a prior can be further modeled via their own priors, namely, hyper priors. This study introduces a framework to construct hyper priors for both the mean and the variance…
Descriptors: Bayesian Statistics, Randomized Controlled Trials, Effect Size, Sampling
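As a sketch of the hierarchical structure described, the hyperparameters of the informative prior receive their own hyper priors; one conventional conjugate specification, with illustrative symbols, is:

    \theta \sim N(\mu,\, \sigma^2), \qquad \mu \sim N(\mu_0,\, \tau_0^2), \qquad \sigma^2 \sim \text{Inv-Gamma}(a_0,\, b_0)

so uncertainty about both the prior mean \mu and the prior variance \sigma^2 propagates into the posterior for \theta.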
Peer reviewed
Xia, Yan; Green, Samuel B.; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2019
Past research suggests revised parallel analysis (R-PA) tends to yield relatively accurate results in determining the number of factors in exploratory factor analysis. R-PA can be interpreted as a series of hypothesis tests. At each step in the series, a null hypothesis is tested that an additional factor accounts for zero common variance among…
Descriptors: Effect Size, Factor Analysis, Hypothesis Testing, Psychometrics
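R-PA builds on traditional parallel analysis, which retains factors only while the observed eigenvalue exceeds a reference eigenvalue from random data. A minimal sketch of that baseline procedure, assuming continuous data and a 95th-percentile criterion:

    import numpy as np

    def parallel_analysis(data, n_sims=500, percentile=95, seed=0):
        # Classic parallel analysis: retain the leading factors whose observed
        # eigenvalues exceed the given percentile of random-data eigenvalues.
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        sims = np.empty((n_sims, p))
        for s in range(n_sims):
            noise = rng.standard_normal((n, p))
            sims[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
        threshold = np.percentile(sims, percentile, axis=0)
        below = obs <= threshold
        # Number of factors = position of the first eigenvalue at or below threshold.
        return int(np.argmax(below)) if below.any() else p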
Peer reviewed
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
Peer reviewed
Olivera-Aguilar, Margarita; Rikoon, Samuel H.; Gonzalez, Oscar; Kisbu-Sakarya, Yasemin; MacKinnon, David P. – Educational and Psychological Measurement, 2018
When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to…
Descriptors: Error of Measurement, Statistical Analysis, Factor Analysis, Simulation
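For context, the single-mediator model at issue is conventionally written (standard notation, not the authors') as:

    M = i_M + aX + \varepsilon_M, \qquad Y = i_Y + c'X + bM + \varepsilon_Y

with the mediated effect estimated by the product \hat{a}\hat{b}; the concern is that non-invariance in the indicators of the mediating construct across levels of X may distort \hat{a} and \hat{b}, and hence the indirect effect.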
Peer reviewed
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance can alternatively be considered the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed: the total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
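The tripartite assumption amounts to the decomposition (symbols illustrative):

    \sigma^2_Y = \sigma^2_{\text{explained}} + \sigma^2_U + \sigma^2_\varepsilon

where \sigma^2_{\text{explained}} is variance accounted for by the known independent variables, \sigma^2_U is systematic variance associated with unknown variables, and \sigma^2_\varepsilon is randomness; conventional "error variance" is then the sum \sigma^2_U + \sigma^2_\varepsilon.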
Peer reviewed
Cao, Mengyang; Tay, Louis; Liu, Yaowu – Educational and Psychological Measurement, 2017
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Descriptors: Monte Carlo Methods, Test Items, Test Bias, Error of Measurement
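The control flow of the iterative approach can be sketched as below. The statistical machinery is deliberately abstracted away: screen_all and test_item are injected placeholders standing in for the Wald-2 anchor screen and the Wald-1 item test, which are not reproduced here.

    from typing import Callable, Set

    def iterative_wald_dif(
        items: Set[int],
        screen_all: Callable[[Set[int]], Set[int]],
        test_item: Callable[[int, Set[int]], bool],
        max_iter: int = 20,
    ) -> Set[int]:
        # Wald-2-style step: identify DIF-free items to serve as anchors.
        anchors = screen_all(items)
        flagged: Set[int] = set()
        for _ in range(max_iter):
            # Wald-1-style step: test each non-anchor item against the anchors.
            new_flagged = {j for j in items - anchors if test_item(j, anchors)}
            if new_flagged == flagged:  # stop once the flagged set stabilizes
                break
            flagged = new_flagged
        return flagged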
Peer reviewed
Li, Wei; Konstantopoulos, Spyros – Educational and Psychological Measurement, 2017
Field experiments in education frequently assign entire groups, such as schools, to treatment or control conditions. These experiments sometimes incorporate a longitudinal component in which, for example, students are followed over time to assess differences in the average rate of linear change or the rate of acceleration. In this study, we provide methods…
Descriptors: Educational Experiments, Field Studies, Models, Randomized Controlled Trials
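A conventional way to write the student-level growth model implied by this design, with illustrative notation, is:

    Y_{tij} = \pi_{0ij} + \pi_{1ij}\,\text{Time}_t + \pi_{2ij}\,\text{Time}_t^2 + e_{tij}

where the school-level treatment indicator enters the models for \pi_{1ij} (rate of linear change) and \pi_{2ij} (acceleration), so power concerns the treatment coefficients in those growth-parameter equations.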
Peer reviewed
Nugent, William R. – Educational and Psychological Measurement, 2017
Meta-analysis is a significant methodological advance that is increasingly important in research synthesis. Fundamental to meta-analysis is the presumption that effect sizes, such as the standardized mean difference (SMD), based on scores from different measures are comparable. It has been argued that population observed score SMDs based on scores…
Descriptors: Meta Analysis, Effect Size, Comparative Analysis, Scores
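One standard psychometric reason such comparability can fail is measurement error. If a measure has reliability \rho_{XX'}, the observed-score standard deviation exceeds the true-score standard deviation, so the observed-score SMD is attenuated:

    \delta_{\text{obs}} = \frac{\mu_1 - \mu_2}{\sigma_{\text{obs}}} = \delta_{\text{true}}\,\sqrt{\rho_{XX'}}

meaning SMDs from measures with different reliabilities estimate different quantities even when the underlying construct difference is identical.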
Peer reviewed
Marsman, Maarten; Wagenmakers, Eric-Jan – Educational and Psychological Measurement, 2017
P values have been critiqued on several grounds but remain entrenched as the dominant inferential method in the empirical sciences. In this article, we elaborate on the fact that in many statistical models, the one-sided "P" value has a direct Bayesian interpretation as the approximate posterior mass for values lower than zero. The…
Descriptors: Bayesian Statistics, Statistical Inference, Probability, Statistical Analysis
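A worked version of the correspondence: assume an approximately normal likelihood for a parameter \delta with estimate \hat{\delta} and standard error SE, and a flat (or dominated) prior. The posterior is then \delta \mid \text{data} \sim N(\hat{\delta}, SE^2), so

    P(\delta < 0 \mid \text{data}) = \Phi(-\hat{\delta}/SE)

which is exactly the one-sided p value for testing \delta \le 0 against \delta > 0.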
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
Peer reviewed
Wiens, Stefan; Nilsson, Mats E. – Educational and Psychological Measurement, 2017
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…
Descriptors: Data Analysis, Effect Size, Computation, Statistical Analysis
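A minimal sketch of a contrast-based effect estimate with its confidence interval in a four-cell (e.g., 2 x 2) design, assuming equal variances; the data and contrast weights are illustrative:

    import numpy as np
    from scipy import stats

    # Illustrative scores from four cells of a factorial design.
    groups = [np.array([5.1, 4.8, 5.5, 5.0]),
              np.array([4.2, 4.0, 4.5, 4.1]),
              np.array([5.0, 5.2, 4.9, 5.3]),
              np.array([3.9, 4.1, 3.8, 4.2])]
    weights = np.array([1, -1, 1, -1])  # main-effect contrast; sums to zero

    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = int(np.sum(ns - 1))
    # Pooled error variance across cells.
    ms_error = np.sum([(len(g) - 1) * g.var(ddof=1) for g in groups]) / df

    L = weights @ means                                 # contrast estimate
    se_L = np.sqrt(ms_error * np.sum(weights**2 / ns))  # its standard error
    half = stats.t.ppf(0.975, df) * se_L
    print(f"contrast = {L:.3f}, 95% CI = [{L - half:.3f}, {L + half:.3f}]")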
Peer reviewed
García-Pérez, Miguel A. – Educational and Psychological Measurement, 2017
Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…
Descriptors: Hypothesis Testing, Statistical Inference, Effect Size, Bayesian Statistics
Peer reviewed
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and their impact on fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
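As background, one common way to add covariates in mixture IRT is through the latent class membership model, via a multinomial logit, while item responses follow a class-specific IRT model; an illustrative specification:

    P(c_i = k \mid \mathbf{z}_i) = \frac{\exp(\alpha_k + \boldsymbol{\beta}_k^\top \mathbf{z}_i)}{\sum_l \exp(\alpha_l + \boldsymbol{\beta}_l^\top \mathbf{z}_i)}, \qquad P(y_{ij} = 1 \mid \theta_i, c_i = k) = \frac{1}{1 + \exp(-a_{jk}(\theta_i - b_{jk}))}

Covariates can instead (or additionally) enter the within-class ability distribution; choices of this kind are among the approaches such studies compare.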