Showing 1 to 15 of 1,262 results
Peer reviewed
Kulinskaya, Elena; Mah, Eung Yaw – Research Synthesis Methods, 2022
To present time-varying evidence, cumulative meta-analysis (CMA) updates results of previous meta-analyses to incorporate new study results. We investigate the properties of CMA, suggest possible improvements and provide the first in-depth simulation study of the use of CMA and CUSUM methods for detection of temporal trends in random-effects…
Descriptors: Meta Analysis, Computation, Statistical Analysis, Statistical Bias
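The cumulative updating that CMA performs can be illustrated with a minimal fixed-effect sketch: studies are pooled by inverse-variance weighting, and the summary effect is re-estimated each time a new study arrives. The random-effects and CUSUM machinery the paper actually studies adds a between-study variance component on top of this recursion; the function below is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def cumulative_meta_analysis(effects, variances):
    """Inverse-variance fixed-effect pooling, updated as each new study arrives.

    Returns the running pooled effect and its standard error after each study.
    (The paper studies the random-effects case and CUSUM charts, which add a
    between-study variance estimate on top of this basic recursion.)
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    pooled, ses = [], []
    for k in range(1, len(effects) + 1):
        w = 1.0 / variances[:k]                      # inverse-variance weights
        pooled.append(np.sum(w * effects[:k]) / np.sum(w))
        ses.append(np.sqrt(1.0 / np.sum(w)))
    return np.array(pooled), np.array(ses)

# Example: three studies ordered by publication year
est, se = cumulative_meta_analysis([0.30, 0.10, 0.25], [0.04, 0.02, 0.03])
print(est)  # pooled effect after study 1, studies 1-2, studies 1-3
```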
Peer reviewed
Man, Kaiwen; Schumacker, Randall; Morell, Monica; Wang, Yurou – Educational and Psychological Measurement, 2022
While hierarchical linear modeling is often used in social science research, the assumption of normally distributed residuals at the individual and cluster levels can be violated in empirical data. Previous studies have focused on the effects of nonnormality at either lower or higher level(s) separately. However, the violation of the normality…
Descriptors: Hierarchical Linear Modeling, Statistical Distributions, Statistical Bias, Computation
Peer reviewed
Davies, Annabel L.; Galla, Tobias – Research Synthesis Methods, 2021
Network meta-analysis (NMA) is a statistical technique for the comparison of treatment options. Outcomes of Bayesian NMA include estimates of treatment effects, and the probabilities that each treatment is ranked best, second best and so on. How exactly network topology affects the accuracy and precision of these outcomes is not fully understood.…
Descriptors: Meta Analysis, Network Analysis, Probability, Statistical Bias
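The ranking probabilities mentioned in the abstract are typically tabulated from posterior draws of the treatment effects: in each draw the treatments are ranked, and the proportion of draws in which a treatment occupies each rank is reported. A minimal sketch, assuming a matrix of posterior samples is already available (the array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def rank_probabilities(posterior_effects, lower_is_better=True):
    """Tabulate P(treatment t has rank r) from posterior draws.

    posterior_effects: array of shape (n_draws, n_treatments) holding draws of
    each treatment's effect on a common scale (e.g., versus a reference arm).
    Returns an (n_treatments, n_treatments) matrix; entry [t, r] is the
    probability that treatment t occupies rank r+1.
    """
    draws = np.asarray(posterior_effects, dtype=float)
    order = np.argsort(draws, axis=1)            # best first if lower is better
    if not lower_is_better:
        order = order[:, ::-1]
    n_draws, n_trt = draws.shape
    ranks = np.empty_like(order)
    ranks[np.arange(n_draws)[:, None], order] = np.arange(n_trt)
    probs = np.zeros((n_trt, n_trt))
    for r in range(n_trt):
        probs[:, r] = np.mean(ranks == r, axis=0)
    return probs

# Example with fake posterior draws for three treatments
rng = np.random.default_rng(0)
draws = rng.normal(loc=[0.0, -0.3, -0.1], scale=0.2, size=(4000, 3))
print(rank_probabilities(draws))  # row t: P(rank 1), P(rank 2), P(rank 3)
```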
Peer reviewed
Schauer, Jacob M.; Lee, Jihyun; Diaz, Karina; Pigott, Therese D. – Research Synthesis Methods, 2022
Missing covariates are a common issue when fitting meta-regression models. Standard practice for handling missing covariates tends to involve one of two approaches. In a complete-case analysis, effect sizes for which relevant covariates are missing are omitted from model estimation. Alternatively, researchers have employed the so-called…
Descriptors: Statistical Bias, Meta Analysis, Regression (Statistics), Research Problems
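The complete-case approach described in the abstract simply drops effect sizes whose covariates are missing before fitting the meta-regression. A minimal sketch, using fixed-effect (inverse-variance) weighted least squares as a stand-in for a full random-effects meta-regression; all names and numbers are illustrative:

```python
import numpy as np

def complete_case_meta_regression(effects, variances, covariate):
    """Drop effect sizes whose covariate is missing, then fit a weighted
    least-squares meta-regression with fixed-effect weights 1/v.
    A full analysis would add a between-study variance estimate; this sketch
    only illustrates the complete-case step the abstract describes.
    """
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    x = np.asarray(covariate, float)
    keep = ~np.isnan(x)                           # complete-case filter
    y, v, x = y[keep], v[keep], x[keep]
    X = np.column_stack([np.ones_like(x), x])     # intercept + covariate
    W = np.diag(1.0 / v)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta, keep.sum()

beta, n_used = complete_case_meta_regression(
    effects=[0.2, 0.5, 0.1, 0.4],
    variances=[0.02, 0.03, 0.01, 0.05],
    covariate=[1.0, np.nan, 0.0, 2.0],
)
print(beta, n_used)   # slope estimated from the 3 complete cases only
```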
Kim, Yongnam; Steiner, Peter M. – Sociological Methods & Research, 2021
For misguided reasons, social scientists have long been reluctant to use gain scores for estimating causal effects. This article develops graphical models and graph-based arguments to show that gain score methods are a viable strategy for identifying causal treatment effects in observational studies. The proposed graphical models reveal that gain…
Descriptors: Scores, Graphs, Causal Models, Statistical Bias
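The gain score estimator at issue contrasts pre-post gains between treated and untreated units; the article's graphical identification arguments are not reproduced here. A minimal numeric sketch with made-up data:

```python
import numpy as np

def gain_score_effect(pre, post, treated):
    """Difference-in-gains estimator: mean gain among treated units minus
    mean gain among control units."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    treated = np.asarray(treated, bool)
    gains = post - pre
    return gains[treated].mean() - gains[~treated].mean()

# Toy observational data (values are illustrative only)
pre     = [10, 12, 11, 13, 9, 14]
post    = [15, 18, 12, 14, 10, 15]
treated = [1, 1, 1, 0, 0, 0]
print(gain_score_effect(pre, post, treated))  # treated gain minus control gain
```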
Peer reviewed
Penaloza, Roberto V.; Berends, Mark – Sociological Methods & Research, 2022
To measure "treatment" effects, social science researchers typically rely on nonexperimental data. In education, school and teacher effects on students are often measured through value-added models (VAMs) that are not fully understood. We propose a framework that relates to the education production function in its most flexible form and…
Descriptors: Data, Value Added Models, Error of Measurement, Correlation
Peer reviewed
Glaman, Ryan; Chen, Qi; Henson, Robin K. – Journal of Experimental Education, 2022
This study compared three approaches for handling a fourth level of nesting when analyzing cluster-randomized trial (CRT) data. Although CRT data analyses may include repeated measures, individual, and cluster levels, there may be an additional fourth level that is typically ignored. This study examined the impact of ignoring this fourth level,…
Descriptors: Randomized Controlled Trials, Hierarchical Linear Modeling, Data Analysis, Simulation
Peer reviewed
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Educational and Psychological Measurement, 2022
Multilevel structural equation modeling (MSEM) allows researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This article…
Descriptors: Structural Equation Models, Factor Structure, Statistical Bias, Error of Measurement
Peer reviewed
Hollenbach, Florian M.; Bojinov, Iavor; Minhas, Shahryar; Metternich, Nils W.; Ward, Michael D.; Volfovsky, Alexander – Sociological Methods & Research, 2021
Missing observations are pervasive throughout empirical research, especially in the social sciences. Despite multiple approaches to dealing adequately with missing data, many scholars still fail to address this vital issue. In this article, we present a simple-to-use method for generating multiple imputations (MIs) using a Gaussian copula. The…
Descriptors: Data, Statistical Analysis, Statistical Distributions, Computation
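A Gaussian copula imputes on a transformed scale: each variable is mapped to normal scores, a correlation matrix is estimated, missing scores are drawn from their conditional normal distribution, and the draws are mapped back through the observed margins. The sketch below is a heavily simplified, single-column illustration of that logic, not the authors' procedure, which handles arbitrary missingness patterns and propagates parameter uncertainty:

```python
import numpy as np
from scipy import stats

def gaussian_copula_impute(data, miss_col, n_imputations=5, seed=0):
    """Sketch of Gaussian-copula multiple imputation for ONE partially observed
    column, assuming the remaining columns are fully observed.

    Steps: (1) map each column to normal scores via ranks, (2) estimate the
    correlation matrix from complete rows, (3) draw the missing normal scores
    from their conditional normal distribution, (4) map draws back to the data
    scale through the observed column's empirical quantiles.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(data, float)
    n, p = X.shape
    miss = np.isnan(X[:, miss_col])

    # (1) normal scores from ranks (missing entries stay NaN)
    Z = np.full_like(X, np.nan)
    for j in range(p):
        obs = ~np.isnan(X[:, j])
        ranks = stats.rankdata(X[obs, j])
        Z[obs, j] = stats.norm.ppf(ranks / (obs.sum() + 1))

    # (2) copula correlation from complete rows
    complete = ~np.isnan(Z).any(axis=1)
    R = np.corrcoef(Z[complete].T)

    # (3) conditional distribution of the missing column given the others
    others = [j for j in range(p) if j != miss_col]
    S12 = R[miss_col, others]
    S22_inv = np.linalg.inv(R[np.ix_(others, others)])
    cond_mean = Z[miss][:, others] @ (S22_inv @ S12)
    cond_var = 1.0 - S12 @ S22_inv @ S12
    obs_vals = np.sort(X[~miss, miss_col])

    imputations = []
    for _ in range(n_imputations):
        Xm = X.copy()
        z_draw = rng.normal(cond_mean, np.sqrt(cond_var))
        # (4) back-transform: normal CDF -> empirical quantile of observed data
        Xm[miss, miss_col] = np.quantile(obs_vals, stats.norm.cdf(z_draw))
        imputations.append(Xm)
    return imputations
```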
Peer reviewed
Tong, Guangyu; Guo, Guang – Sociological Methods & Research, 2022
Meta-analysis is a statistical method that combines quantitative findings from previous studies. It has been increasingly used to obtain more credible results in a wide range of scientific fields. Combining the results of relevant studies allows researchers to leverage study similarities while modeling potential sources of between-study…
Descriptors: Meta Analysis, Social Science Research, Regression (Statistics), Statistical Bias
Peer reviewed
DeMars, Christine E. – Applied Measurement in Education, 2021
Estimation of parameters for the many-facets Rasch model requires that, conditional on the values of the facets, such as person ability, item difficulty, and rater severity, the observed responses within each facet are independent. This requirement has often been discussed for the Rasch, 2PL, and 3PL models, but it becomes more complex…
Descriptors: Item Response Theory, Test Items, Ability, Scores
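For reference, the dichotomous many-facets Rasch model referred to here writes the log-odds of success as an additive combination of person ability, item difficulty, and rater severity. A minimal illustration (polytomous versions add category thresholds):

```python
import math

def mfrm_probability(theta, delta, rho):
    """Dichotomous many-facets Rasch model:
    log-odds of success = person ability - item difficulty - rater severity."""
    logit = theta - delta - rho
    return 1.0 / (1.0 + math.exp(-logit))

# An able examinee facing an easy item but a severe rater
print(mfrm_probability(theta=1.0, delta=-0.5, rho=1.2))  # about 0.57
```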
Peer reviewed
Full text PDF available on ERIC
Arellano, Lucy – Education Sciences, 2022
Higher education is in a moment of pause, facing an opportunity to transform or continue to perpetuate the status quo. The COVID-19 pandemic, coupled with the recognition of racial violence, has created an opportunity for institutions to question their own policies and practices. The purpose of this inquiry is to question the science behind…
Descriptors: Higher Education, Equal Education, Racial Bias, Statistical Bias
Peer reviewed
Poom, Leo; af Wåhlberg, Anders – Research Synthesis Methods, 2022
In meta-analysis, effect sizes often need to be converted into a common metric. For this purpose conversion formulas have been constructed; some are exact, others are approximations whose accuracy has not yet been systematically tested. We performed Monte Carlo simulations where samples with pre-specified population correlations between the…
Descriptors: Meta Analysis, Effect Size, Mathematical Formulas, Monte Carlo Methods
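Two widely used conversions of the kind the abstract refers to relate the standardized mean difference d and the point-biserial correlation r. A small sketch; the equal-group-size form is the common approximation whose accuracy simulations like this one probe:

```python
import math

def d_to_r(d, n1=None, n2=None):
    """Convert a standardized mean difference d to a point-biserial r.
    With group sizes, use the correction a = (n1 + n2)**2 / (n1 * n2);
    without them, the common approximation assumes equal groups (a = 4).
    """
    a = 4.0 if n1 is None or n2 is None else (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d * d + a)

def r_to_d(r):
    """Inverse conversion (equal-group-size approximation)."""
    return 2.0 * r / math.sqrt(1.0 - r * r)

print(d_to_r(0.5))          # ~0.243
print(d_to_r(0.5, 20, 80))  # unequal groups shrink r toward 0: ~0.196
print(r_to_d(0.243))        # ~0.50, round-tripping the first conversion
```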
Peer reviewed
Aidoo, Eric Nimako; Appiah, Simon K.; Boateng, Alexander – Journal of Experimental Education, 2021
This study investigated the small-sample bias of the ordered logit model parameters under multicollinearity using Monte Carlo simulation. The results showed that the level of bias associated with the ordered logit model parameters consistently decreases for an increasing sample size while the distribution of the parameters becomes less…
Descriptors: Statistical Bias, Monte Carlo Methods, Simulation, Sample Size
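The simulation design sketched below follows the general recipe such studies use: draw correlated predictors to induce multicollinearity, generate an ordinal outcome from a latent variable with logistic errors and fixed thresholds, fit the ordered logit repeatedly, and average the estimation error. It assumes statsmodels' OrderedModel (whose parameter vector lists the slope coefficients before the threshold terms) and is illustrative rather than a reconstruction of the authors' code:

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel  # statsmodels >= 0.12

def simulate_ordered_logit(n, beta=(1.0, -0.5), rho=0.9, cuts=(-1.0, 1.0), rng=None):
    """Generate ordinal outcomes from a latent-variable ordered logit.
    rho controls the correlation between the two predictors (multicollinearity)."""
    rng = rng or np.random.default_rng()
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    latent = X @ np.asarray(beta) + rng.logistic(size=n)    # logistic errors
    y = np.digitize(latent, cuts)                            # categories 0, 1, 2
    return y, X

# Small-sample bias check: average the estimation error over replications
rng = np.random.default_rng(1)
true_beta = np.array([1.0, -0.5])
estimates = []
for _ in range(200):
    y, X = simulate_ordered_logit(n=50, rho=0.9, rng=rng)
    res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
    estimates.append(np.asarray(res.params)[:2])   # slope coefficients come first
print(np.mean(estimates, axis=0) - true_beta)      # bias at n = 50
```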
Peer reviewed
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)
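The preprocessing move the abstract describes, flagging rapid (noneffortful) responses from response times and treating them as missing, can be sketched on the scoring side as follows. This is a simplified illustration under a 2PL with known item parameters, not the full effort-moderated IRT estimation routine; all thresholds and parameter values are made up:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def flag_noneffortful(resp_times, thresholds):
    """Flag responses faster than an item-specific time threshold."""
    return np.asarray(resp_times) < np.asarray(thresholds)

def score_effortful(responses, a, b, noneffortful):
    """ML ability estimate under a 2PL, using effortful responses only
    (flagged responses are treated as missing, as in effort-moderated scoring)."""
    resp = np.asarray(responses, float)
    keep = ~np.asarray(noneffortful)
    a, b, resp = np.asarray(a)[keep], np.asarray(b)[keep], resp[keep]

    def neg_loglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(resp * np.log(p) + (1.0 - resp) * np.log(1.0 - p))

    return minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

# Toy example: 5 items, the last response was a rapid guess
responses  = [1, 1, 0, 1, 1]
resp_times = [35, 42, 28, 30, 2]        # seconds
thresholds = [10, 10, 10, 10, 10]       # item time thresholds
a = [1.2, 0.8, 1.0, 1.5, 1.1]           # discriminations
b = [-0.5, 0.0, 0.5, 1.0, 1.5]          # difficulties
flags = flag_noneffortful(resp_times, thresholds)
print(score_effortful(responses, a, b, flags))
```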