Showing all 13 results
Peer reviewed
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L. – Educational and Psychological Measurement, 2017
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
Descriptors: Comparative Analysis, Statistical Analysis, Robustness (Statistics), Observation
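The 14 tests and the full simulation design examined in the article are not reproduced in this listing. Purely as a hedged illustration, a Type I error study of this kind can be sketched for a single test (the median-centered Levene, i.e. Brown-Forsythe, test) under a skewed distribution, assuming Python with NumPy and SciPy and arbitrary toy settings:

```python
# Illustrative sketch only: empirical Type I error of the Brown-Forsythe
# (median-centered Levene) test when group variances are truly equal but the
# group distributions are strongly skewed (exponential).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_groups, n_per_group, n_reps, alpha = 4, 25, 5000, 0.05  # toy settings

rejections = 0
for _ in range(n_reps):
    # Equal variances by construction; exponential scores violate normality.
    groups = [rng.exponential(scale=1.0, size=n_per_group) for _ in range(n_groups)]
    _, p = stats.levene(*groups, center="median")  # Brown-Forsythe variant
    rejections += p < alpha

print(f"Empirical Type I error at nominal alpha={alpha}: {rejections / n_reps:.3f}")
```

A rejection rate close to the nominal alpha would indicate robustness of this particular test to the skewness introduced here; the article evaluates many more tests and conditions.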
Peer reviewed
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T. – Educational and Psychological Measurement, 2010
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
Descriptors: Monte Carlo Methods, Intervals, Computation, Sample Size
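None of the eight interval methods compared in the article is reproduced here. As a generic, hedged illustration of putting an interval around coefficient alpha, a percentile bootstrap is sketched below with hypothetical item data (NumPy assumed):

```python
# Illustrative sketch only: a percentile bootstrap CI for coefficient alpha.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Hypothetical data: 200 examinees, 10 items sharing a common factor.
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 10))

boot = [cronbach_alpha(items[rng.integers(0, len(items), len(items))])
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {cronbach_alpha(items):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```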
Peer reviewed
Romano, Jeanine L.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 2009
This study was conducted to evaluate alternative analysis strategies for the meta-analysis method of reliability generalization when the reliability estimates are not statistically independent. Five approaches to dealing with the violation of independence were implemented: ignoring the violation and treating each observation as independent,…
Descriptors: Reliability, Generalization, Meta Analysis, Correlation
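As a loose illustration of two of the strategies named in the abstract (ignoring the dependence versus aggregating within studies), with entirely hypothetical reliability estimates:

```python
# Sketch with invented data: (a) treat every reliability estimate as independent;
# (b) average the estimates within each contributing study before pooling.
import numpy as np

estimates = {
    "study_1": [0.81, 0.84, 0.79],   # three dependent estimates from one sample
    "study_2": [0.88],
    "study_3": [0.72, 0.75],
    "study_4": [0.90],
}

ignore_dependence = np.mean([r for rs in estimates.values() for r in rs])
aggregate_within = np.mean([np.mean(rs) for rs in estimates.values()])

print(f"Treat all estimates as independent: {ignore_dependence:.3f}")
print(f"Average within study, then pool:    {aggregate_within:.3f}")
```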
Peer reviewed
Hess, Melinda R.; Hogarty, Kristine Y.; Ferron, John M.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 2007
Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D²), two bias-adjusted estimates of D², and Huberty's…
Descriptors: Population Distribution, Intervals, Monte Carlo Methods, Effect Size
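A hedged sketch of the bootstrap side of this problem, assuming toy two-group data; the bias-adjusted estimators and the interval-inversion approach studied in the article are not reproduced:

```python
# Illustrative sketch only: percentile bootstrap CI around the standard
# two-group Mahalanobis distance D^2.
import numpy as np

def mahalanobis_d2(x1: np.ndarray, x2: np.ndarray) -> float:
    diff = x1.mean(axis=0) - x2.mean(axis=0)
    n1, n2 = len(x1), len(x2)
    s_pooled = ((n1 - 1) * np.cov(x1, rowvar=False) +
                (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    return float(diff @ np.linalg.solve(s_pooled, diff))

rng = np.random.default_rng(7)
g1 = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=60)
g2 = rng.multivariate_normal([0.5, 0.3, 0.0], np.eye(3), size=60)

boot = []
for _ in range(2000):
    b1 = g1[rng.integers(0, len(g1), len(g1))]
    b2 = g2[rng.integers(0, len(g2), len(g2))]
    boot.append(mahalanobis_d2(b1, b2))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"D^2 = {mahalanobis_d2(g1, g2):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```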
Peer reviewed
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
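One of the four detectors, Egger's regression, can be sketched as follows; the study count, standard errors, and effect sizes are invented for illustration, and statsmodels is assumed:

```python
# Illustrative sketch only: Egger's regression test regresses the standardized
# effect on precision; an intercept far from zero signals funnel-plot asymmetry,
# which is often interpreted as possible publication bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
k = 40                                   # number of primary studies (toy value)
se = rng.uniform(0.05, 0.40, size=k)     # study standard errors
effect = rng.normal(loc=0.3, scale=se)   # observed effects around a common mean

z = effect / se                          # standardized effect
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

intercept, p_value = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.3f}, p = {p_value:.3f}")
```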
Peer reviewed
Hogarty, Kristine Y.; Hines, Constance V.; Kromrey, Jeffrey D.; Ferron, John M.; Mumford, Karen R. – Educational and Psychological Measurement, 2005
The purpose of this study was to investigate the relationship between sample size and the quality of factor solutions obtained from exploratory factor analysis. This research expanded upon the range of conditions previously examined, employing a broad selection of criteria for the evaluation of the quality of sample factor solutions. Results…
Descriptors: Sample Size, Factor Analysis, Factor Structure, Evaluation Methods
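A rough sketch of the sample-size question, assuming a known one-factor population model and using Tucker's congruence coefficient as one of many possible quality criteria; this is not the article's design:

```python
# Illustrative sketch only: recover known loadings by exploratory factor analysis
# at several sample sizes and score agreement with the population loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(11)
pop_loadings = np.array([0.8, 0.7, 0.7, 0.6, 0.6, 0.5, 0.5, 0.4])  # one factor

def congruence(a: np.ndarray, b: np.ndarray) -> float:
    # Tucker's congruence; sign of an extracted factor is arbitrary, hence abs().
    return abs(np.dot(a, b)) / np.sqrt(np.dot(a, a) * np.dot(b, b))

for n in (50, 100, 200, 400, 1000):
    factor = rng.normal(size=(n, 1))
    unique = rng.normal(scale=np.sqrt(1 - pop_loadings ** 2), size=(n, 8))
    data = factor * pop_loadings + unique
    est = FactorAnalysis(n_components=1).fit(data).components_[0]
    print(f"N = {n:4d}: congruence with population loadings = "
          f"{congruence(est, pop_loadings):.3f}")
```

Under this toy model, congruence typically rises toward 1.0 as N grows, which is the general pattern the abstract alludes to; the article evaluates a much broader set of conditions and criteria.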
Peer reviewed
Hogarty, Kristine Y.; Lang, Thomas R.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 2003
Developed and provided initial validation of scores from a survey designed to measure teachers' reported use of technology in their classrooms. Interprets results from a sample of 2000 teachers in terms of the ability of the instrument to measure the confluence of factors that are critical for study of technology use in classrooms. (SLD)
Descriptors: Computer Uses in Education, Educational Technology, Elementary Secondary Education, Scores
Peer reviewed
Kromrey, Jeffrey D.; Dickinson, Wendy B. – Educational and Psychological Measurement, 1996
Empirical estimates of the power and Type I error rate of the test of the classrooms-within-treatments effect in the nested analysis of variance approach are provided for a variety of nominal alpha levels and a range of classroom effect sizes and research designs. (SLD)
Descriptors: Analysis of Variance, Correlation, Educational Research, Effect Size
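A minimal sketch of how such an empirical Type I error rate might be obtained for the classrooms-within-treatments F test, assuming a balanced toy design with no true classroom effect (NumPy and SciPy assumed):

```python
# Illustrative sketch only: simulate classrooms nested in treatments with zero
# classroom effect and count rejections of the classrooms-within-treatments test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t, c, n, alpha, n_reps = 2, 4, 25, 0.05, 2000   # treatments, classes/treatment, pupils/class
df_class, df_within = t * (c - 1), t * c * (n - 1)

rejections = 0
for _ in range(n_reps):
    y = rng.normal(size=(t, c, n))                    # no true classroom effect
    class_means = y.mean(axis=2)                      # (t, c)
    treat_means = y.mean(axis=(1, 2)).reshape(t, 1)   # (t, 1)
    ms_class = n * ((class_means - treat_means) ** 2).sum() / df_class
    ms_within = ((y - class_means[:, :, None]) ** 2).sum() / df_within
    f_stat = ms_class / ms_within
    rejections += stats.f.sf(f_stat, df_class, df_within) < alpha

print(f"Empirical Type I error at nominal alpha={alpha}: {rejections / n_reps:.3f}")
```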
Peer reviewed
Kromrey, Jeffrey D.; Foster-Johnson, Lynn – Educational and Psychological Measurement, 1998
Provides a comparison of centered and raw-score analyses in least squares regression. The two methods are demonstrated with constructed data in a Monte Carlo study to be equivalent, yielding identical hypothesis tests associated with the moderation effect and regression equations that are functionally equivalent. (SLD)
Descriptors: Hypothesis Testing, Least Squares Statistics, Monte Carlo Methods, Raw Scores
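The equivalence can be illustrated directly. The sketch below, with constructed toy data and statsmodels assumed, shows the product-term t statistic unchanged by mean-centering:

```python
# Illustrative sketch only: the t statistic for the product (moderation) term is
# the same whether predictors are mean-centered or left in raw-score form.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x1, x2 = rng.normal(5, 2, n), rng.normal(10, 3, n)
y = 1.0 + 0.4 * x1 + 0.3 * x2 + 0.2 * x1 * x2 + rng.normal(size=n)

def interaction_t(a, b, y):
    X = sm.add_constant(np.column_stack([a, b, a * b]))
    return sm.OLS(y, X).fit().tvalues[-1]   # t for the product term

raw_t = interaction_t(x1, x2, y)
centered_t = interaction_t(x1 - x1.mean(), x2 - x2.mean(), y)
print(f"t(raw) = {raw_t:.4f}, t(centered) = {centered_t:.4f}")  # identical
```

The lower-order coefficients change under centering (they are conditional effects at different points), but the highest-order interaction test does not, which is the equivalence the abstract reports.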
Peer reviewed
Kromrey, Jeffrey D.; Foster-Johnson, Lynn – Educational and Psychological Measurement, 1999
Shows that the procedure recommended by D. Lubinski and L. Humphreys (1990) for differentiating between moderated and nonlinear regression models evidences statistical problems characteristic of stepwise procedures. Interprets Monte Carlo results in terms of the researchers' need to differentiate between exploratory and confirmatory aspects of…
Descriptors: Interaction, Models, Monte Carlo Methods, Regression (Statistics)
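A hedged sketch of the modeling comparison at issue, with invented data in which the true effect is quadratic rather than moderated; this is not the Lubinski and Humphreys procedure itself:

```python
# Illustrative sketch only: fit a product (moderated) term versus quadratic
# (nonlinear) terms on top of the same first-order model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)                            # correlated predictors
y = 0.5 * x1 + 0.5 * x2 + 0.3 * x1 ** 2 + rng.normal(size=n)  # nonlinear, not moderated

base = np.column_stack([x1, x2])
moderated = sm.OLS(y, sm.add_constant(np.column_stack([base, x1 * x2]))).fit()
nonlinear = sm.OLS(y, sm.add_constant(np.column_stack([base, x1 ** 2, x2 ** 2]))).fit()

# Because x1*x2 is correlated with x1^2 when the predictors are correlated, the
# product term can look "significant" even though the true model is nonlinear --
# the kind of ambiguity the article examines.
print(f"p(product term) = {moderated.pvalues[-1]:.4f}")
print(f"R^2 moderated = {moderated.rsquared:.3f}, R^2 nonlinear = {nonlinear.rsquared:.3f}")
```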
Peer reviewed
Kromrey, Jeffrey D.; Hines, Constance V. – Educational and Psychological Measurement, 1994
Results from bootstrap samples of 50, 100, and 200 indicate that three imputation procedures for missing data produce biased estimates of R² and of both standardized regression weights. Two deletion procedures (listwise and pairwise) provided accurate parameter estimates with up to 30% of data missing. (SLD)
Descriptors: Comparative Analysis, Estimation (Mathematics), Field Studies, Prediction
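As a simplified illustration (one imputation method, one deletion method, MCAR missingness, toy data; statsmodels assumed):

```python
# Illustrative sketch only: compare R^2 from complete data, from mean imputation,
# and from listwise deletion at 30% missingness on one predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
X = rng.normal(size=(n, 2))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

def r_squared(X, y):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

missing = rng.random(n) < 0.30            # MCAR holes in the first predictor
X_mean_imputed = X.copy()
X_mean_imputed[missing, 0] = X[~missing, 0].mean()

print(f"Complete data R^2:     {r_squared(X, y):.3f}")
print(f"Mean imputation R^2:   {r_squared(X_mean_imputed, y):.3f}")
print(f"Listwise deletion R^2: {r_squared(X[~missing], y[~missing]):.3f}")
```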
Peer reviewed
Kromrey, Jeffrey D.; Hines, Constance V. – Educational and Psychological Measurement, 1995
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
Descriptors: Estimation (Mathematics), Monte Carlo Methods, Regression (Statistics), Sample Size
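A sketch of one generic shrinkage-style correction (an optimism bootstrap of the sample R²); the jackknife and the specific formula-based estimators compared in the article are not reproduced:

```python
# Illustrative sketch only: estimate the optimism of the sample R^2 by refitting
# on bootstrap samples and evaluating on the original data, then subtract it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, p = 60, 5
X = rng.normal(size=(n, p))
y = X @ np.array([0.5, 0.3, 0.0, 0.0, 0.0]) + rng.normal(size=n)

def fit_r2(X_fit, y_fit, X_eval, y_eval):
    """R^2 of a model fit on one sample and evaluated on another."""
    b = sm.OLS(y_fit, sm.add_constant(X_fit)).fit().params
    pred = sm.add_constant(X_eval) @ b
    return 1 - ((y_eval - pred) ** 2).sum() / ((y_eval - y_eval.mean()) ** 2).sum()

apparent = fit_r2(X, y, X, y)
optimism = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    optimism.append(fit_r2(X[idx], y[idx], X[idx], y[idx])
                    - fit_r2(X[idx], y[idx], X, y))

print(f"Apparent R^2 = {apparent:.3f}, "
      f"bootstrap-corrected R^2 = {apparent - np.mean(optimism):.3f}")
```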
Peer reviewed
Parshall, Cynthia G.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 1996
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
Descriptors: Chi Square, Comparative Analysis, Effect Size, Estimation (Mathematics)
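A hedged sketch of such a small-sample Type I error comparison using SciPy's implementations of the four tests, with an invented 2x2 null setup:

```python
# Illustrative sketch only: empirical Type I error of four 2x2 contingency-table
# tests under a true null (equal proportions) with small group sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_per_group, n_reps, alpha, p_null = 15, 2000, 0.05, 0.4
rejections = {"pearson": 0, "yates": 0, "likelihood_ratio": 0, "fisher": 0}

for _ in range(n_reps):
    a = rng.binomial(n_per_group, p_null)
    b = rng.binomial(n_per_group, p_null)
    table = np.array([[a, n_per_group - a], [b, n_per_group - b]])
    if table.sum(axis=0).min() == 0:      # degenerate margin; skip
        continue
    rejections["pearson"] += stats.chi2_contingency(table, correction=False)[1] < alpha
    rejections["yates"] += stats.chi2_contingency(table, correction=True)[1] < alpha
    rejections["likelihood_ratio"] += stats.chi2_contingency(
        table, correction=False, lambda_="log-likelihood")[1] < alpha
    rejections["fisher"] += stats.fisher_exact(table)[1] < alpha

for name, count in rejections.items():
    print(f"{name:>16}: empirical Type I error = {count / n_reps:.3f}")
```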