Showing 1 to 15 of 44 results
Peer reviewed
Olejnik, Stephen F.; Algina, James – Educational and Psychological Measurement, 1988
Type I error rates and power were estimated for 10 tests of variance equality under various combinations of the following factors: similar and dissimilar distributional forms, equal and unequal means, and equal and unequal sample sizes. (TJH)
Descriptors: Analysis of Variance, Equated Scores, Error of Measurement, Power (Statistics)
Peer reviewed
Haase, Richard F. – Educational and Psychological Measurement, 1986
This paper describes a BASIC computer program that computes power for any combination of effect size, degrees of freedom for hypothesis, degrees of freedom for error, and alpha level. As a consequence of the algorithm, an approximation to the critical value of the Bonferroni F-test is also computed. (Author/JAZ)
Descriptors: Analysis of Variance, Effect Size, Error of Measurement, Input Output
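The kind of calculation Haase's BASIC program performs can be sketched in modern terms with the noncentral F distribution. The sketch below is not Haase's algorithm (the abstract does not give it); it assumes Cohen's convention for the noncentrality parameter, lambda = f² · (df_h + df_e + 1), and computes power for a fixed-effects F test from effect size, hypothesis and error degrees of freedom, and alpha.

```python
from scipy.stats import f, ncf

def anova_power(f2, df_h, df_e, alpha=0.05):
    """Approximate power of a fixed-effects F test.

    f2 is Cohen's effect size f^2. The noncentrality parameter
    lambda = f2 * (df_h + df_e + 1) follows Cohen's convention --
    an assumption, since the abstract does not reproduce Haase's formula.
    """
    nc = f2 * (df_h + df_e + 1)
    f_crit = f.ppf(1 - alpha, df_h, df_e)   # critical value of the central F
    return ncf.sf(f_crit, df_h, df_e, nc)   # P(F' > f_crit) under the alternative

power = anova_power(0.15, 3, 60)
```

For a Bonferroni-corrected F test, as mentioned in the abstract, one would pass alpha divided by the number of tests.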
Peer reviewed
McFatter, Robert M.; Gollob, Harry F. – Educational and Psychological Measurement, 1986
Correct simple formulas are provided for the value of phi needed to use the commonly available Pearson and Hartley power charts in determining the power of hypothesis tests involving simple degree-of-freedom comparisons in the fixed effects analysis of variance. (LMO)
Descriptors: Analysis of Variance, Hypothesis Testing, Mathematical Models, Power (Statistics)
Peer reviewed
Garg, Rashmi – Educational and Psychological Measurement, 1983
This Monte Carlo study was designed to investigate empirically the efficacy of three statistical strategies for estimating the relationships in extreme group designs in terms of estimation of correlation, power and mean square error for the correlation. Alf and Abrahams's "covariance information statistic" proved to be the best strategy.…
Descriptors: Correlation, Evaluation Methods, Mathematical Formulas, Power (Statistics)
Peer reviewed
Tarling, Roger – Educational and Psychological Measurement, 1982
The Mean Cost Rating, P(A) from Signal Detection Theory, Kendall's rank correlation coefficient tau, and Goodman and Kruskal's gamma measures of predictive power are compared and shown to be different transformations of the statistic S. Gamma is generally preferred for hypothesis testing. Measures of association for ordered contingency tables are…
Descriptors: Comparative Analysis, Hypothesis Testing, Power (Statistics), Predictive Measurement
Peer reviewed
Friedman, Herbert – Educational and Psychological Measurement, 1982
A concise table is presented based on a general measure of magnitude of effect which allows direct determinations of statistical power over a practical range of values and alpha levels. The table also facilitates the setting of the research sample size needed to provide a given degree of power. (Author/CM)
Descriptors: Hypothesis Testing, Power (Statistics), Research Design, Sampling
Peer reviewed
Bonett, Douglas G. – Educational and Psychological Measurement, 1982
Post-hoc blocking and analysis of covariance (ANCOVA) both employ a concomitant variable to increase statistical power relative to the completely randomized design. It is argued that the advantages attributed to the block design are not always valid and that there are circumstances when the ANCOVA would be preferred to post-hoc blocking.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Power (Statistics)
Peer reviewed
Katz, Barry M.; McSweeney, Maryellen – Educational and Psychological Measurement, 1980
Errors of misclassification associated with two concept acquisition criteria and their effects on the actual significance level and power of a statistical test for sequential development of these concepts are presented. Explicit illustrations of actual significance levels and power values are provided for different misclassification models.…
Descriptors: Concept Formation, Hypothesis Testing, Mathematical Models, Power (Statistics)
Peer reviewed
Dyer, Frank J. – Educational and Psychological Measurement, 1980
Power analysis is in essence a technique for estimating the probability of obtaining a specific minimum observed effect size. Power analysis techniques are applied to research planning problems in test reliability studies. A table for use in research planning and hypothesis testing is presented. (Author)
Descriptors: Hypothesis Testing, Mathematical Formulas, Power (Statistics), Probability
Peer reviewed
Hsu, Louis M. – Educational and Psychological Measurement, 1980
The problem addressed is that of assessing the loss of power which results from keeping the probability that at least one Type I error will occur in a family of N statistical tests at a tolerably low level. (Author/BW)
Descriptors: Hypothesis Testing, Orthogonal Rotation, Power (Statistics), Research Problems
Peer reviewed
Hollingsworth, Holly H. – Educational and Psychological Measurement, 1980
If heterogeneous regression slopes are present in analysis of covariance (ANCOVA), the likelihood of committing a Type I error is greater than the prespecified level, and the power of the ANCOVA test of hypothesis for all possible differences of treatment effects is not maximized. (Author/RL)
Descriptors: Analysis of Covariance, Hypothesis Testing, Mathematical Models, Power (Statistics)
Peer reviewed
Milligan, Glenn W. – Educational and Psychological Measurement, 1979
A FORTRAN program is provided for calculating the power of statistical tests based on the chi-square distribution. The program produces approximations to the exact probabilities obtained from the noncentral chi-square distribution. The calculation of the noncentrality parameter is discussed for tests of independence and goodness of fit.…
Descriptors: Computer Programs, Goodness of Fit, Hypothesis Testing, Nonparametric Statistics
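The calculation Milligan's FORTRAN program performs can be sketched with the noncentral chi-square distribution available in modern libraries. This is not Milligan's code; it assumes the standard convention lambda = n · w² for Cohen's effect size w, which the abstract does not itself state.

```python
from scipy.stats import chi2, ncx2

def chi_square_power(w, n, df, alpha=0.05):
    """Approximate power of a chi-square test of independence or
    goodness of fit.

    w is Cohen's effect size; lambda = n * w**2 is the standard
    noncentrality convention (assumed, not taken from Milligan).
    """
    crit = chi2.ppf(1 - alpha, df)       # critical value of the central chi-square
    return ncx2.sf(crit, df, n * w ** 2) # tail probability under the alternative

power = chi_square_power(0.3, 100, 4)
```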
Peer reviewed
Conard, Elizabeth H.; Lutz, J. Gary – Educational and Psychological Measurement, 1979
A program is described which selects the most powerful among four methods for conducting a priori comparisons in an analysis of variance: orthogonal contrasts, Scheffe's method, Dunn's method, and Dunnett's test. The program supplies the critical t ratio and the per-comparison Type I error risk for each of the relevant methods. (Author/JKS)
Descriptors: Analysis of Variance, Computer Programs, Hypothesis Testing, Power (Statistics)
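The critical t ratios the Conard and Lutz program supplies have closed forms for three of the four methods; Dunnett's test is omitted below because its critical values come from a special multivariate-t table rather than a simple formula. This is a sketch of the standard textbook formulas, not the authors' program.

```python
from scipy.stats import t, f

def critical_t(method, n_comparisons, df_error, k_groups, alpha=0.05):
    """Critical |t| for an a priori contrast under three standard methods."""
    if method == "orthogonal":   # per-comparison t, no adjustment
        return t.ppf(1 - alpha / 2, df_error)
    if method == "dunn":         # Bonferroni-adjusted t
        return t.ppf(1 - alpha / (2 * n_comparisons), df_error)
    if method == "scheffe":      # sqrt((k - 1) * F_crit)
        return ((k_groups - 1) * f.ppf(1 - alpha, k_groups - 1, df_error)) ** 0.5
    raise ValueError(method)
```

The smallest critical ratio identifies the most powerful admissible method, which is the comparison the program automates.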
Peer reviewed
Caruso, John C.; Cliff, Norman – Educational and Psychological Measurement, 1997
Several methods of constructing confidence intervals for Spearman's rho (rank correlation coefficient) (C. Spearman, 1904) were tested in a Monte Carlo study using 2,000 samples of 3 different sizes. Results support the continued use of Spearman's rho in behavioral research. (SLD)
Descriptors: Behavioral Science Research, Correlation, Monte Carlo Methods, Power (Statistics)
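One of the interval-construction methods of the kind Caruso and Cliff compare is the Fisher-z transformation with the simple 1/sqrt(n − 3) standard error. The sketch below applies it to a single simulated sample; it is one assumed variant, not the authors' full Monte Carlo design.

```python
import numpy as np
from scipy.stats import spearmanr, norm

rng = np.random.default_rng(0)

def fisher_z_ci(r, n, conf=0.95):
    """Fisher-z confidence interval applied to Spearman's rho.

    Uses the simple 1/sqrt(n - 3) standard error -- one of several
    variants in the literature (assumed here, not taken from the study).
    """
    z = np.arctanh(r)
    se = 1 / np.sqrt(n - 3)
    h = norm.ppf((1 + conf) / 2) * se
    return np.tanh(z - h), np.tanh(z + h)

# One simulated sample of size 50 from a bivariate normal with correlation .5
x = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=50)
rho, _ = spearmanr(x[:, 0], x[:, 1])
lo, hi = fisher_z_ci(rho, 50)
```

A coverage study of the kind the abstract describes would repeat this over many samples and count how often the interval contains the population value.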
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1997
Some results on how the Alexander-Govern heteroscedastic analysis of variance (ANOVA) procedure (R. Alexander and D. Govern, 1994) performs under nonnormality are presented. This method can provide poor control of Type I errors in some cases, and in some situations power decreases as differences among the means get large. (SLD)
Descriptors: Analysis of Variance, Error of Measurement, Power (Statistics), Statistical Distributions