Showing all 14 results
Peer reviewed
Algina, James; Keselman, H. J.; Penfield, Randall D. – Educational and Psychological Measurement, 2010
The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…
Descriptors: Computation, Statistical Analysis, Correlation, Statistical Inference
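As a rough illustration of the quantity studied here, the sketch below computes ΔR² for one added predictor and a percentile bootstrap confidence interval around it; the simulated data, sample size, and number of bootstrap replications are illustrative assumptions, not values from the article.

```python
# Minimal sketch: Delta R^2 for one added predictor plus a percentile
# bootstrap CI. The simulated data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

def r_squared(y, X):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def delta_r2(y, x_full, x_reduced):
    """Increase in R^2 when the predictors absent from x_reduced are added."""
    return r_squared(y, x_full) - r_squared(y, x_reduced)

obs = delta_r2(y, np.column_stack([x1, x2]), x1[:, None])

# Percentile bootstrap: resample cases, recompute Delta R^2 each time.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(delta_r2(y[idx], np.column_stack([x1[idx], x2[idx]]),
                         x1[idx][:, None]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Delta R^2 = {obs:.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```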
Peer reviewed
Algina, James; Keselman, H. J. – Educational and Psychological Measurement, 2008
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Descriptors: Intervals, Sample Size, Validity, Hypothesis Testing
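One of the applications the abstract mentions, hypothesis testing for the squared multiple correlation, can be illustrated with the standard overall F test of H0: ρ² = 0; the R², n, and p values below are assumed for illustration and are unrelated to the article's programs.

```python
# Minimal sketch: overall F test of H0: rho^2 = 0 given a sample R^2 from a
# regression with p predictors and n cases. Numbers are illustrative.
from scipy import stats

R2, n, p = 0.20, 120, 4
F = (R2 / p) / ((1 - R2) / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)   # upper-tail probability
print(f"F({p}, {n - p - 1}) = {F:.2f}, p = {p_value:.4f}")
```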
Peer reviewed
Algina, James; Keselman, H. J.; Penfield, Randall D. – Educational and Psychological Measurement, 2007
The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that asymptotic and percentile bootstrap confidence intervals include Δρ² was investigated. As expected,…
Descriptors: Probability, Intervals, Multiple Regression Analysis, Correlation
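The coverage probability evaluated here is typically estimated by Monte Carlo simulation. The sketch below shows the general logic with the textbook interval for a normal mean as a stand-in, since reproducing the article's Δρ² intervals would require its full design; all settings are illustrative.

```python
# Minimal sketch of estimating coverage probability by Monte Carlo, using the
# textbook t interval for a normal mean purely as a stand-in for the Delta
# rho^2 intervals studied in the article. All settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mean, n, reps, hits = 0.0, 30, 5000, 0

for _ in range(reps):
    x = rng.normal(loc=true_mean, scale=1.0, size=n)
    half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    hits += (x.mean() - half) <= true_mean <= (x.mean() + half)

print(f"Estimated coverage: {hits / reps:.3f} (nominal 0.95)")
```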
Peer reviewed
Algina, James; Keselman, H. J.; Penfield, Randall D. – Educational and Psychological Measurement, 2006
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-t-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
Descriptors: Effect Size, Comparative Analysis, Sample Size, Investigations
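Of the three interval methods compared, the noncentral-t approach can be sketched with SciPy as below; the simulated two-group data and the bracketing interval used for the root search are illustrative assumptions, not details from Kelley's study or this article.

```python
# Minimal sketch: a noncentral-t based CI for Cohen's d in a two-group design,
# one of the three methods named above. The simulated data and the bracketing
# interval for the root search are illustrative assumptions.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
g1 = rng.normal(0.5, 1.0, 40)
g2 = rng.normal(0.0, 1.0, 40)
n1, n2 = len(g1), len(g2)

sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
             / (n1 + n2 - 2))
d = (g1.mean() - g2.mean()) / sp              # Cohen's d with pooled SD
t_obs = d / np.sqrt(1 / n1 + 1 / n2)          # the observed t statistic
df = n1 + n2 - 2

def ncp_for(prob):
    """Noncentrality parameter at which the nct CDF of t_obs equals prob."""
    return optimize.brentq(lambda nc: stats.nct.cdf(t_obs, df, nc) - prob,
                           -50, 50)

lo = ncp_for(0.975) * np.sqrt(1 / n1 + 1 / n2)   # lower limit for d
hi = ncp_for(0.025) * np.sqrt(1 / n1 + 1 / n2)   # upper limit for d
print(f"d = {d:.3f}, 95% noncentral-t CI = [{lo:.3f}, {hi:.3f}]")
```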
Peer reviewed
Keselman, H. J.; Toothaker, Larry E. – Educational and Psychological Measurement, 1974
Descriptors: Analysis of Variance, Comparative Analysis, Hypothesis Testing, Research Methodology
Peer reviewed
Keselman, H. J. – Educational and Psychological Measurement, 1976
Investigates the empirical probability of a Type II error for the Tukey statistic under numerous parametric specifications defined by Cohen (1969) as representative of behavioral research data. For unequal numbers of observations per treatment group and for unequal population variances, the Tukey test was simulated when sampling from a…
Descriptors: Analysis of Variance, Hypothesis Testing, Power (Statistics), Probability
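A minimal sketch of the kind of simulation described, estimating the Tukey test's empirical Type II error rate under unequal group sizes and unequal variances, is given below. It assumes a recent SciPy that provides scipy.stats.tukey_hsd, and the means, sample sizes, and standard deviations are illustrative rather than Cohen's (1969) specifications.

```python
# Minimal sketch: empirical Type II error rate (1 - power) of the Tukey test
# under unequal n and unequal variances. Assumes scipy.stats.tukey_hsd is
# available (recent SciPy); all parameter values are illustrative.
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(3)
means = [0.0, 0.0, 0.5]        # one shifted mean, so H0 is false
sizes = [10, 20, 30]           # unequal group sizes
sds   = [1.0, 1.5, 2.0]        # unequal population standard deviations
reps, rejections = 2000, 0

for _ in range(reps):
    groups = [rng.normal(m, s, n) for m, s, n in zip(means, sds, sizes)]
    rejections += bool((tukey_hsd(*groups).pvalue < 0.05).any())

power = rejections / reps
print(f"Empirical power = {power:.3f}, Type II error rate = {1 - power:.3f}")
```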
Peer reviewed
Keselman, H. J.; And Others – Educational and Psychological Measurement, 1976
Compares the harmonic mean and Kramer unequal-group forms of the Tukey test for varying (a) degrees of disparity in group sizes, (b) numbers of groups, and (c) nominal significance levels. (RC)
Descriptors: Comparative Analysis, Probability, Sampling, Statistical Significance
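The two unequal-n forms being compared differ only in the critical difference they use. The sketch below computes both, assuming a recent SciPy for scipy.stats.studentized_range; the group sizes, mean square error, and alpha are illustrative.

```python
# Minimal sketch: harmonic-mean vs. Kramer critical differences for the Tukey
# test with unequal n. Assumes scipy.stats.studentized_range (recent SciPy);
# the group sizes, MSE, and alpha are illustrative.
import numpy as np
from scipy.stats import studentized_range

ns, mse, alpha = np.array([8, 15, 27]), 2.0, 0.05
k, df_error = len(ns), int(ns.sum() - len(ns))
q_crit = studentized_range.ppf(1 - alpha, k, df_error)

# Harmonic-mean form: one critical difference applied to every pair.
n_h = k / np.sum(1.0 / ns)
cd_harmonic = q_crit * np.sqrt(mse / n_h)
print(f"harmonic-mean CD (all pairs) = {cd_harmonic:.3f}")

# Kramer (Tukey-Kramer) form: a pair-specific critical difference.
for i in range(k):
    for j in range(i + 1, k):
        cd_kramer = q_crit * np.sqrt(mse * 0.5 * (1 / ns[i] + 1 / ns[j]))
        print(f"pair ({i}, {j}): Kramer CD = {cd_kramer:.3f}")
```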
Peer reviewed
Keselman, Joanne C.; Keselman, H. J. – Educational and Psychological Measurement, 1987
The power to detect main and interaction effects in a factorial design was determined when the Bonferroni method was used to control the overall rate of Type I error. For sample sizes typical of educational research, the power of this procedure fell considerably below recommended standards. (TJH)
Descriptors: Educational Research, Sample Size, Statistical Analysis
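The power cost of a Bonferroni adjustment can be illustrated for a single contrast with the noncentral t distribution, as in the sketch below; the effect size, group size, and family size are assumed for illustration and are not the conditions studied in the article.

```python
# Minimal sketch: power of a single two-sided independent-samples t test at
# the unadjusted and Bonferroni-adjusted alpha. Effect size, group size, and
# family size are illustrative, not the article's factorial conditions.
import numpy as np
from scipy import stats

d, n_per_group, m_tests = 0.5, 25, 3
df = 2 * n_per_group - 2
ncp = d * np.sqrt(n_per_group / 2)        # noncentrality of the t statistic

def power(alpha):
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(f"power at alpha = .05:            {power(0.05):.3f}")
print(f"power at alpha = .05/{m_tests} (Bonferroni): {power(0.05 / m_tests):.3f}")
```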
Peer reviewed
Keselman, H. J.; And Others – Educational and Psychological Measurement, 1981
This paper demonstrates that multiple comparison tests using a pooled error term are dependent on the circularity assumption and shows how to compute tests which are insensitive (robust) to this assumption. (Author/GK)
Descriptors: Hypothesis Testing, Mathematical Models, Research Design, Statistical Significance
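A common way to obtain comparisons that do not depend on circularity is to give each pairwise contrast its own error term, i.e., a paired t test on that pair's difference scores, rather than a shared pooled error term. The sketch below illustrates this idea on simulated data; it is not claimed to reproduce the article's procedures.

```python
# Minimal sketch: pairwise comparisons among k repeated measures using a
# separate error term per pair (a paired t test on each pair's difference
# scores) with Bonferroni control. The simulated data are illustrative.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(4)
n_subjects, k = 20, 3
data = rng.normal(size=(n_subjects, k)) + np.array([0.0, 0.3, 0.6])

alpha_per_test = 0.05 / (k * (k - 1) / 2)           # Bonferroni over all pairs
for i, j in combinations(range(k), 2):
    res = stats.ttest_rel(data[:, i], data[:, j])   # pair-specific error term
    print(f"conditions {i} vs {j}: t = {res.statistic:.2f}, "
          f"p = {res.pvalue:.4f}, reject = {res.pvalue < alpha_per_test}")
```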
Peer reviewed
Algina, James; Keselman, H. J. – Educational and Psychological Measurement, 2003
Investigated the approximate confidence intervals for effect sizes developed by K. Bird (2002) and proposed a more accurate method derived from simulation studies. The average coverage probability for the new method was 0.959. (SLD)
Descriptors: Effect Size, Research Methodology, Simulation
Peer reviewed
Cribbie, Robert A.; Keselman, H. J. – Educational and Psychological Measurement, 2003
Compared strategies for performing multiple comparisons with nonnormal data under various data conditions, including simultaneous violations of the assumptions of normality and variance homogeneity. Monte Carlo study results show the conditions under which different strategies are most appropriate. (SLD)
Descriptors: Comparative Analysis, Monte Carlo Methods, Nonparametric Statistics
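One simple strategy of the kind compared here, Welch t tests (which drop the equal-variance assumption) with a Bonferroni adjustment, is sketched below on deliberately skewed, heteroscedastic simulated data; it is offered as an illustration of the general approach, not as one of the article's specific procedures.

```python
# Minimal sketch: Welch t tests (no equal-variance assumption) for all pairs
# with a Bonferroni adjustment, applied to skewed, heteroscedastic simulated
# data. An illustration of the general approach, not an article procedure.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(5)
groups = [rng.exponential(scale=s, size=n)
          for s, n in [(1.0, 15), (1.5, 25), (2.0, 35)]]

pairs = list(combinations(range(len(groups)), 2))
for i, j in pairs:
    res = stats.ttest_ind(groups[i], groups[j], equal_var=False)  # Welch test
    p_adj = min(1.0, res.pvalue * len(pairs))                     # Bonferroni
    print(f"groups {i} vs {j}: adjusted p = {p_adj:.4f}")
```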
Peer reviewed
Lix, Lisa M.; Keselman, H. J. – Educational and Psychological Measurement, 1998
Comparison of six procedures to test for location equality among two or more groups when population variances are heterogeneous suggests that, when the variance homogeneity and normality assumptions are not satisfied, and the design is unbalanced, the use of any of these test statistics with the usual least squares estimators is not recommended.…
Descriptors: Comparative Analysis, Estimation (Mathematics), Least Squares Statistics, Research Design
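Welch's heteroscedastic ANOVA is a well-known member of this family of location tests and can be coded directly, as in the sketch below; the simulated unbalanced, heteroscedastic data are illustrative, and the function is not claimed to be one of the six procedures the article evaluates.

```python
# Minimal sketch: Welch's (1951) heteroscedastic ANOVA, one well-known test
# of location equality that does not assume equal variances. The simulated
# unbalanced, heteroscedastic data are illustrative.
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's F* test of equal means without assuming equal variances."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    w = n / np.array([np.var(g, ddof=1) for g in groups])   # precision weights
    grand = np.sum(w * means) / np.sum(w)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    f_stat = (np.sum(w * (means - grand) ** 2) / (k - 1)) / \
             (1 + 2 * (k - 2) * lam / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * lam)
    return f_stat, k - 1, df2, stats.f.sf(f_stat, k - 1, df2)

rng = np.random.default_rng(6)
g1, g2, g3 = (rng.normal(m, s, n)
              for m, s, n in [(0.0, 1, 10), (0.4, 2, 20), (0.8, 3, 40)])
F, df1, df2, p = welch_anova(g1, g2, g3)
print(f"Welch F*({df1}, {df2:.1f}) = {F:.2f}, p = {p:.4f}")
```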
Peer reviewed
Kowalchuk, Rhonda K.; Keselman, H. J.; Algina, James; Wolfinger, Russell D. – Educational and Psychological Measurement, 2004
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of their data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach, available through SAS PROC MIXED, was compared to a Welch-James type statistic.…
Descriptors: Interaction, Sample Size, Statistical Analysis, Evaluation Methods
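A rough Python analogue of the mixed-model approach is sketched below with statsmodels: a subject-level random intercept implies a compound-symmetric covariance structure, whereas SAS PROC MIXED, as described in the abstract, lets the analyst choose among richer structures (e.g., unstructured or AR(1)). The column names and simulated data are assumptions for illustration.

```python
# Minimal sketch: a mixed-model analysis of repeated measures with statsmodels.
# A subject-level random intercept implies compound symmetry; PROC MIXED, as
# described above, additionally allows richer covariance structures. Column
# names and simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subjects, n_times = 30, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_times),
    "time": np.tile(np.arange(n_times), n_subjects),
})
subject_effect = rng.normal(0, 1, n_subjects)[df["subject"].to_numpy()]
df["score"] = 0.5 * df["time"] + subject_effect + rng.normal(0, 1, len(df))

model = smf.mixedlm("score ~ time", df, groups=df["subject"])
print(model.fit().summary())
```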
Peer reviewed
Algina, James; Keselman, H. J.; Penfield, Randall D. – Educational and Psychological Measurement, 2005
Probability coverage for eight different confidence intervals (CIs) of measures of effect size (ES) in a two-level repeated measures design was investigated. The CIs and measures of ES differed with regard to whether they used least squares or robust estimates of central tendency and variability, whether the end critical points of the interval…
Descriptors: Probability, Intervals, Least Squares Statistics, Effect Size
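The robust side of the contrast described here typically replaces means with 20% trimmed means and the standard deviation with a Winsorized standard deviation. The sketch below combines these into a standardized difference for two repeated measurements, using the .642 rescaling constant associated with 20% trimming; it illustrates the ingredients rather than reproducing any of the eight intervals in the article.

```python
# Minimal sketch: a robust standardized difference for two repeated measures
# built from 20% trimmed means and a Winsorized SD, with the .642 rescaling
# constant associated with 20% trimming. The paired data are simulated, and
# this illustrates the ingredients, not any of the article's eight intervals.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(8)
n = 40
pre = rng.normal(0.0, 1.0, n)
post = 0.6 * pre + rng.normal(0.5, 1.0, n)      # correlated second occasion

trim = 0.2
tm_diff = stats.trim_mean(post, trim) - stats.trim_mean(pre, trim)

# Winsorized variances of the two occasions, averaged, then square-rooted.
s_w = np.sqrt(np.mean([np.var(np.asarray(winsorize(x, limits=(trim, trim))),
                              ddof=1)
                       for x in (pre, post)]))

d_robust = 0.642 * tm_diff / s_w
print(f"robust standardized change = {d_robust:.3f}")
```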