ERIC Number: ED346123
Record Type: RIE
Publication Date: 1992-Apr
Reference Count: N/A
Evaluating Statistical Significance Using Corrected and Uncorrected Magnitude of Effect Size Estimates.
Snyder, Patricia; Lawson, Stephen
Magnitude of effect measures (MEMs), when adequately understood and correctly used, are important aids for researchers who do not want to rely solely on tests of statistical significance when interpreting substantive results. A MEM indicates how much of the variance in the dependent variable can be controlled, predicted, or explained by the independent variables. The paper describes why methodologists encourage the use of MEMs as interpretive aids and discusses different types of measures. Correction formulas are presented that attenuate statistical bias in MEMs. MEMs are broadly grouped into measures of effect size and measures of strength of association. Several dozen computational indices are discussed in the literature on MEMs; the paper reviews the following categories: (1) biased versus unbiased magnitude-of-association measures; (2) population versus sample indices; (3) indices for fixed-effects versus random-effects design models; (4) univariate versus multivariate magnitude of effect measures; and (5) equivalent measures derived from varying perspectives on the general linear model. Eight tables illustrate the effects of the formulas for differing sample and effect sizes. Several cautions against the indiscriminate use of these measures are offered. There is a 39-item list of references. (SLD)
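The correction of sample-based MEMs for positive bias can be illustrated with a standard pair of indices not reproduced in this record: eta squared (uncorrected) and Hays' omega squared (corrected) for a one-way, fixed-effects ANOVA. The sketch below assumes that textbook formulation; it is not drawn from the paper itself.

```python
# Illustrative sketch (assumed textbook formulas, not taken from the paper):
# eta² is the uncorrected, positively biased sample estimate of variance
# explained; omega² (Hays) applies a correction for that bias.

def eta_squared(ss_between, ss_total):
    """Uncorrected proportion of variance explained: SS_between / SS_total."""
    return ss_between / ss_total

def omega_squared(ss_between, ss_total, k, n):
    """Hays' corrected estimate for a fixed-effects one-way ANOVA.

    k = number of groups, n = total sample size.
    omega² = (SS_between - (k - 1) * MS_within) / (SS_total + MS_within)
    """
    df_between = k - 1
    df_within = n - k
    ms_within = (ss_total - ss_between) / df_within
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

# Hypothetical example: SS_between = 30, SS_total = 100, k = 3 groups, n = 30.
eta2 = eta_squared(30.0, 100.0)             # uncorrected: 0.30
omega2 = omega_squared(30.0, 100.0, 3, 30)  # corrected: smaller, ~0.24
```

As the record notes for the paper's eight tables, the gap between the corrected and uncorrected estimates narrows as sample size grows and widens for small samples, where the uncorrected index overstates the effect.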
Publication Type: Speeches/Meeting Papers
Education Level: N/A
Authoring Institution: N/A
Identifiers: Association Strength; Dependent Variables; Linear Models; Preservice Teachers
Note: Paper presented at the Annual Meeting of the American Educational Research Association (San Francisco, CA, April 20-24, 1992).