50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of innovation and enhancement.

Learn more about the history of ERIC (PDF).

Showing all 5 results
Peer reviewed
Ferron, John; Jones, Peggy K. – Journal of Experimental Education, 2006
The authors present a method that ensures control over the Type I error rate for those who visually analyze the data from response-guided multiple-baseline designs. The method can be seen as a modification of visual analysis methods to incorporate a mechanism to control Type I errors or as a modification of randomization test methods to allow…
Descriptors: Multivariate Analysis, Data Analysis, Inferences, Monte Carlo Methods
Peer reviewed
Ferron, John; Foster-Johnson, Lynn; Kromrey, Jeffrey D. – Journal of Experimental Education, 2003
Used Monte Carlo methods to examine the Type I error rates for randomization tests applied to single-case data arising from ABAB designs involving random, systematic, or response-guided assignment of interventions. Discusses conditions under which Type I error rate is controlled or is not. (SLD)
Descriptors: Error of Measurement, Monte Carlo Methods, Research Design
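Several of the entries above concern randomization tests applied to single-case phase data. As a rough illustration only (not any specific paper's method), here is a minimal sketch for a simple AB design in which the intervention point was randomly chosen in advance from a set of admissible start points; the p-value is the share of admissible assignments whose test statistic is at least as extreme as the observed one:

```python
import numpy as np

def ab_randomization_test(y, start, admissible_starts):
    """Randomization test for a single-case AB design (illustrative sketch).

    y: observed series; start: index where phase B actually began
    (randomly selected from admissible_starts before the experiment).
    Statistic: |mean(phase B) - mean(phase A)|.
    """
    y = np.asarray(y, dtype=float)

    def stat(s):
        return abs(y[s:].mean() - y[:s].mean())

    obs = stat(start)
    dist = [stat(s) for s in admissible_starts]
    # p-value: proportion of admissible assignments at least as extreme
    return sum(d >= obs for d in dist) / len(dist)
```

With 30 observations and admissible start points 5 through 25, there are 21 possible assignments, so the smallest attainable p-value is 1/21 (about .048), which is why these tests control the Type I error rate only when enough assignments are possible.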
Peer reviewed
Ferron, John; Sentovich, Chris – Journal of Experimental Education, 2002
Estimated statistical power for three randomization tests used with multiple-baseline designs using Monte Carlo methods. For an effect size of 0.5, none of the tests provided an adequate level of power, and for an effect size of 1.0, power was adequate for the Koehler-Levin test and the Marascuilo-Busk test only when the series length was long and…
Descriptors: Effect Size, Monte Carlo Methods, Power (Statistics), Research Design
Peer reviewed
Ferron, John; Onghena, Patrick – Journal of Experimental Education, 1996
Monte Carlo methods were used to estimate the power of randomization tests used with single-case designs involving random assignment of treatments to phases. Simulations of two treatments and six phases showed an adequate level of power when effect sizes were large, phase lengths exceeded five, and autocorrelation was not negative. (SLD)
Descriptors: Case Studies, Correlation, Educational Research, Effect Size
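The Type I error and power figures reported in these abstracts come from Monte Carlo simulation: generate many data sets under a known condition, apply the test to each, and record the rejection rate. A minimal sketch of that approach under assumed details (an AB level shift, AR(1) errors for the autocorrelation condition; all parameter names are illustrative, not drawn from the papers):

```python
import numpy as np

def ab_test_pvalue(y, start, starts):
    # Randomization-test p-value for an AB design: statistic is
    # |mean(B) - mean(A)| over all admissible intervention points.
    obs = abs(y[start:].mean() - y[:start].mean())
    dist = [abs(y[s:].mean() - y[:s].mean()) for s in starts]
    return sum(d >= obs for d in dist) / len(dist)

def rejection_rate(effect, phi=0.0, n=30, reps=1000, alpha=0.05, seed=1):
    """Monte Carlo rejection rate for the AB randomization test.

    effect=0 estimates the Type I error rate; effect>0 estimates power.
    Errors follow an AR(1) process with lag-1 autocorrelation phi.
    """
    rng = np.random.default_rng(seed)
    starts = list(range(5, n - 4))  # require at least 5 points per phase
    hits = 0
    for _ in range(reps):
        # AR(1) error series
        e = np.empty(n)
        e[0] = rng.standard_normal()
        for t in range(1, n):
            e[t] = phi * e[t - 1] + rng.standard_normal()
        # random intervention point, then add the level shift
        s = int(rng.choice(starts))
        y = e.copy()
        y[s:] += effect
        hits += ab_test_pvalue(y, s, starts) <= alpha
    return hits / reps
```

Running this with effect=0 should give a rejection rate near the nominal level when autocorrelation is absent, while varying `effect`, `phi`, and `n` reproduces the kind of conditions the studies above examine (effect size, series length, autocorrelation).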
Peer reviewed
Ferron, John; Ware, William – Journal of Experimental Education, 1995
The power of randomization tests was systematically examined through simulation for typical designs that rely on the random assignment of interventions within the observation sequence. A 30-observation AB design, 32-observation AB design, and multiple baseline AB (15 observations on 4 individuals) were studied. Power estimates were generally found…
Descriptors: Data Analysis, Effect Size, Estimation (Mathematics), Power (Statistics)