ERIC Number: ED435702
Record Type: RIE
Publication Date: 1999-Nov
Statistical Significance and Effect Size: Two Sides of a Coin.
This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other but do not substitute for one another. Good research practice requires that both be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, implementing a three-factor crossed design with 500 replications within each cell. The sampling variability of two popular effect size measures ("d" and "R squared") was empirically obtained under different data conditions. It is shown empirically that there is considerable variability in sample effect size measures, and that the extent of this sampling variability is strongly influenced by sample size. Although that which is statistically significant may not be practically meaningful, that which appears to be a practically meaningful effect size could occur by chance (i.e., sampling error) and thus may not be trustworthy. It is pointed out that statistical significance testing and effect size measurement serve different purposes, and that sole reliance on either may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures when making decisions in quantitative analysis. (Contains 2 tables, 3 figures, and 20 references.) (Author/SLD)
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Note: Paper presented at the Annual Meeting of the American Evaluation Association (Orlando, FL, November 3-6, 1999).
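The abstract's central point, that sample effect sizes vary considerably and that this variability shrinks with sample size, can be illustrated with a small Monte Carlo sketch. This is not the paper's three-factor crossed design; it is a simplified, hypothetical simulation of Cohen's d alone, with a single assumed population effect (`true_d = 0.5`) and 500 replications per condition, mirroring the replication count reported in the abstract.

```python
import random
import statistics

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

def simulate_d(n_per_group, true_d=0.5, reps=500, seed=1):
    """Draw `reps` pairs of samples from two normal populations whose means
    differ by `true_d` standard deviations, and return the mean and the
    empirical standard deviation of the sample d values."""
    rng = random.Random(seed)
    ds = []
    for _ in range(reps):
        x = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        ds.append(cohens_d(x, y))
    return statistics.mean(ds), statistics.stdev(ds)

# Sampling variability of d at several sample sizes: the spread of the
# sample d values narrows as n grows, even though the population effect
# is fixed at 0.5.
for n in (10, 50, 200):
    m, sd = simulate_d(n)
    print(f"n per group = {n:4d}: mean d = {m:.3f}, SD of d = {sd:.3f}")
```

With small groups, individual sample d values can land far from 0.5 by sampling error alone, which is the abstract's caution against trusting an apparently meaningful effect size without also considering significance testing.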