Publication Date
| Date range | Count |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 5 |
| Since 2011 (last 5 years) | 6 |
| Since 2006 (last 10 years) | 12 |
| Since 1996 (last 20 years) | 22 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Research Design | 59 |
| Educational Research | 17 |
| Statistical Analysis | 16 |
| Monte Carlo Methods | 13 |
| Simulation | 13 |
| Research Methodology | 12 |
| Effect Size | 9 |
| Data Analysis | 8 |
| Research Problems | 8 |
| Tables (Data) | 8 |
Source
| Source | Count |
| --- | --- |
| Journal of Experimental… | 59 |
Author
| Author | Count |
| --- | --- |
| Beretvas, S. Natasha | 4 |
| Ferron, John | 3 |
| Ferron, John M. | 2 |
| Guo, Jiin-Huarng | 2 |
| Klockars, Alan J. | 2 |
| Levy, Kenneth J. | 2 |
| Luh, Wei-Ming | 2 |
| Moeyaert, Mariola | 2 |
| Onghena, Patrick | 2 |
| Pohl, Norval F. | 2 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 2 |
| Early Childhood Education | 1 |
| Elementary Secondary Education | 1 |
| Intermediate Grades | 1 |
| Postsecondary Education | 1 |
| Preschool Education | 1 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 1 |
Showing 1 to 15 of 59 results
Spybrook, Jessaca – Journal of Experimental Education, 2014
The Institute of Education Sciences has funded more than 100 experiments to evaluate educational interventions in an effort to generate scientific evidence of program effectiveness on which to base education policy and practice. In general, these studies are designed with the goal of having adequate statistical power to detect the average…
Descriptors: Intervention, Educational Research, Research Methodology, Statistical Analysis
Fenesi, Barbara; Heisz, Jennifer J.; Savage, Philip I.; Shore, David I.; Kim, Joseph A. – Journal of Experimental Education, 2014
This experiment combined controlled experimental design with a best-practice approach (i.e., real course content, subjective evaluations) to clarify the role of verbal redundancy, confirm the multimodal impact of images and narration, and highlight discrepancies between actual and perceived understanding. The authors presented 1 of 3…
Descriptors: Multimedia Instruction, Best Practices, Research Design, Computer Uses in Education
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L. – Journal of Experimental Education, 2014
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
Descriptors: Longitudinal Studies, Hierarchical Linear Modeling, Prediction, Regression (Statistics)
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2014
One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the…
Descriptors: Hierarchical Linear Modeling, Monte Carlo Methods, Case Studies, Research Design
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
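The small-sample bias this abstract describes for standardized effect sizes is the same phenomenon addressed by Hedges' classic correction for standardized mean differences. As a hedged illustration, here is that standard correction; the article's four approaches are not spelled out in the truncated abstract, so this is a correction of the same flavor, not necessarily one of them:

```python
def hedges_correction(d, df):
    """Apply Hedges' approximate small-sample bias correction
    J = 1 - 3/(4*df - 1) to a standardized mean difference d."""
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * d

# With few measurement occasions (small df) the correction shrinks the
# estimate noticeably; with many, J is close to 1 and d barely changes.
print(round(hedges_correction(0.8, 10), 3))   # small df: visible shrinkage
print(round(hedges_correction(0.8, 200), 3))  # large df: near the raw 0.8
```

The correction is multiplicative, so it can be applied after the fact to any reported standardized difference whose degrees of freedom are known.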
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2011
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied in heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Descriptors: Sample Size, Monte Carlo Methods, Statistical Analysis, Heterogeneous Grouping
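The abstract's point, that the conventional equal-variance formula breaks down under heterogeneity, can be illustrated with a normal-approximation sample size formula that uses both group variances. This is a textbook two-group starting point, not the article's exact Welch-based ANOVA procedure:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_welch(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-group
    unequal-variance (Welch-type) comparison with equal group sizes:
    n = (z_{1-a/2} + z_{power})^2 * (sd1^2 + sd2^2) / delta^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    n = ((z_alpha + z_beta) ** 2) * (sd1 ** 2 + sd2 ** 2) / delta ** 2
    return ceil(n)

# Detect a mean difference of 1.0 when the group SDs are 1 and 2:
print(n_per_group_welch(1.0, 1.0, 2.0))  # -> 40 per group
```

Note how the heterogeneous case (SDs 1 and 2) demands far more participants than the homogeneous case with both SDs at 1, which is exactly why a formula that ignores unequal variances misleads.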
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
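The restricted randomization these authors study can be sketched exactly: with minimum phase lengths fixed, enumerate every admissible placement of the three phase-change points in an ABAB series and compare the observed statistic against that reference distribution. A minimal sketch, in which the test statistic (mean of B phases minus mean of A phases) and the minimum phase length are illustrative choices, not the article's exact settings:

```python
from statistics import mean

def abab_stat(data, i, j, k):
    """Mean(B phases) - mean(A phases) for change points i, j, k,
    i.e. phases A1=data[0:i], B1=data[i:j], A2=data[j:k], B2=data[k:]."""
    a = data[0:i] + data[j:k]
    b = data[i:j] + data[k:]
    return mean(b) - mean(a)

def randomization_test(data, i_obs, j_obs, k_obs, min_len=3):
    """Exact randomization test for an ABAB design: compute the statistic
    under every admissible placement of the three change points (each
    phase >= min_len observations) and return a one-sided p value."""
    n = len(data)
    observed = abab_stat(data, i_obs, j_obs, k_obs)
    stats = []
    for i in range(min_len, n):
        for j in range(i + min_len, n):
            for k in range(j + min_len, n - min_len + 1):
                stats.append(abab_stat(data, i, j, k))
    # Proportion of placements at least as extreme as the observed one
    # (the observed placement is included, so p is never exactly 0).
    p = sum(s >= observed for s in stats) / len(stats)
    return observed, p

# Hypothetical 16-point series: low scores in A phases, high in B phases.
data = [2, 3, 2, 3, 7, 8, 7, 8, 3, 2, 3, 2, 8, 7, 8, 7]
obs, p = randomization_test(data, 4, 8, 12)
print(obs, round(p, 3))  # observed difference 5.0; p = 1/35, about 0.029
```

With 16 observations and a minimum phase length of 3 there are only 35 admissible placements, so the smallest attainable p value is 1/35; this scarcity of randomizations is precisely why such tests can lack power in short series.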
Konstantopoulos, Spyros – Journal of Experimental Education, 2010
Previous work on statistical power has discussed mainly single-level designs or 2-level balanced designs with random effects. Although balanced experiments are common, in practice balance cannot always be achieved. Work on class size is one example of unbalanced designs. This study provides methods for power analysis in 2-level unbalanced designs…
Descriptors: Class Size, Computers, Statistical Analysis, Experiments
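Konstantopoulos derives analytic power methods; a brute-force alternative that handles arbitrary unbalanced cluster sizes is straight Monte Carlo simulation. A sketch under stated assumptions only: total variance standardized to 1, analysis by a z test on cluster means, and illustrative cluster sizes; none of this is the article's exact method:

```python
import random
from statistics import NormalDist, mean, stdev

def simulate_power(sizes_t, sizes_c, delta, icc,
                   reps=2000, alpha=0.05, seed=1):
    """Monte Carlo power for an unbalanced two-level design: generate
    cluster means with intraclass correlation `icc` (total variance 1),
    then test treatment vs. control cluster means with a z test.
    The normal critical value is a rough stand-in for a t reference."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    tau = icc ** 0.5            # between-cluster SD
    sigma = (1 - icc) ** 0.5    # within-cluster SD

    def cluster_mean(n, effect):
        u = rng.gauss(0, tau)   # cluster random effect
        return mean(rng.gauss(effect + u, sigma) for _ in range(n))

    hits = 0
    for _ in range(reps):
        t_means = [cluster_mean(n, delta) for n in sizes_t]
        c_means = [cluster_mean(n, 0.0) for n in sizes_c]
        se = (stdev(t_means) ** 2 / len(t_means)
              + stdev(c_means) ** 2 / len(c_means)) ** 0.5
        hits += abs(mean(t_means) - mean(c_means)) / se > z_crit
    return hits / reps

# Hypothetical unbalanced design: 20 clusters per arm, sizes 4 to 20.
sizes = [4, 8, 12, 16, 20] * 4
print(simulate_power(sizes, sizes, delta=0.5, icc=0.2))
```

The simulation approach trades closed-form insight for flexibility: any pattern of imbalance can be plugged in as a list of cluster sizes.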
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2009
Sample size determination is an important issue in planning research. However, limitations on group size have seldom been discussed in the literature. Thus, how to allocate participants to different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group size is fixed. The authors focused…
Descriptors: Sample Size, Research Methodology, Evaluation Methods, Simulation
Pituch, Keenan A.; Murphy, Daniel L.; Tate, Richard L. – Journal of Experimental Education, 2009
Due to the clustered nature of field data, multilevel modeling has become commonly used to analyze data arising from educational field experiments. While recent methodological literature has focused on multilevel mediation analysis, relatively little attention has been devoted to mediation analysis when three levels (e.g., student, class,…
Descriptors: Research Design, Educational Experiments, Models, Mediation Theory
Wang, Zhongmiao; Thompson, Bruce – Journal of Experimental Education, 2007
In this study the authors investigated the use of 5 R² correction formulas (Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) with the Pearson r². The authors estimated adjustment bias and precision under 6 × 3 × 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
Descriptors: Effect Size, Correlation, Mathematical Formulas, Monte Carlo Methods
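Of the five correction formulas named, Ezekiel's is the most widely used and the easiest to state: it shrinks the unexplained variance 1 - R² by the factor (n - 1)/(n - p - 1). A sketch of that one formula only; the other four take related but distinct shrinkage forms, and are not reproduced here:

```python
def ezekiel_adjusted(r2, n, p):
    """Ezekiel (adjusted) R-squared for a sample R² of `r2`, sample
    size `n`, and `p` predictors: 1 - (1 - R²)(n - 1)/(n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# For a bivariate Pearson correlation, p = 1. With a small sample,
# the adjustment pulls r² noticeably toward zero:
print(round(ezekiel_adjusted(0.50, 30, 1), 3))  # about 0.482
print(round(ezekiel_adjusted(0.50, 300, 1), 3))  # barely changed
```

This is the same adjustment most regression software reports as "adjusted R²", which is why the abstract's question of its behavior for the simple Pearson r² is practically relevant.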
Unstructured Collaboration versus Individual Practice for Complex Problem Solving: A Cautionary Tale
Yetter, Georgette; Gutkin, Terry B.; Saunders, Anita; Galloway, Ann M.; Sobansky, Robin R.; Song, Samuel Y. – Journal of Experimental Education, 2006
The authors used an experimental design to compare the effectiveness of unstructured collaborative practice with individual practice on achievement on a complex well-structured problem-solving task. Participants included postsecondary students (N = 257) from a liberal arts college serving primarily nontraditional students and from 2 state…
Descriptors: State Universities, Statistical Analysis, Research Design, Heuristics
Peer reviewed: Ang, Rebecca P. – Journal of Experimental Education, 2005
Development and validation of the 14-item Teacher-Student Relationship Inventory (TSRI) is described. The TSRI is a self-report measure assessing teacher perceptions of the quality of their relationship with students from Grade 4 through junior high school. In Study 1, findings from exploratory factor analysis provided evidence for a 3-factor…
Descriptors: Research Design, Educational Environment, Predictive Validity, Factor Analysis
Peer reviewed: Ferron, John; Foster-Johnson, Lynn; Kromrey, Jeffrey D. – Journal of Experimental Education, 2003
Used Monte Carlo methods to examine the Type I error rates for randomization tests applied to single-case data arising from ABAB designs involving random, systematic, or response-guided assignment of interventions. Discusses conditions under which Type I error rate is controlled or is not. (SLD)
Descriptors: Error of Measurement, Monte Carlo Methods, Research Design
Peer reviewed: Ferron, John; Sentovich, Chris – Journal of Experimental Education, 2002
Estimated statistical power for three randomization tests used with multiple-baseline designs using Monte Carlo methods. For an effect size of 0.5, none of the tests provided an adequate level of power, and for an effect size of 1.0, power was adequate for the Koehler-Levin test and the Marascuilo-Busk test only when the series length was long and…
Descriptors: Effect Size, Monte Carlo Methods, Power (Statistics), Research Design