Showing all 7 results
Peer reviewed
Kim, Eun Sook; Willson, Victor L. – Educational and Psychological Measurement, 2010
This article presents a method for evaluating pretest effects on posttest scores in the absence of an un-pretested control group, using published pretesting-effect results from Willson and Putnam. Confidence intervals around the expected theoretical gain due to pretesting are computed, and observed gains or differential gains are compared with…
Descriptors: Control Groups, Intervals, Educational Research, Educational Psychology
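The comparison logic this abstract describes can be sketched briefly. The snippet below is a minimal illustration, not the authors' actual procedure: it assumes a published mean pretesting effect and its standard error (the function names and all numeric values are hypothetical), builds a confidence interval around the expected gain due to pretesting, and checks whether an observed gain exceeds it.

```python
import math


def pretest_effect_ci(mean_effect, se_effect, z=1.96):
    """Approximate 95% CI around the expected gain attributable to pretesting alone."""
    return (mean_effect - z * se_effect, mean_effect + z * se_effect)


def gain_exceeds_pretest_effect(observed_gain, mean_effect, se_effect):
    """True if the observed gain lies above the CI upper bound,
    i.e., cannot be explained by the pretesting effect alone."""
    _, hi = pretest_effect_ci(mean_effect, se_effect)
    return observed_gain > hi
```

For example, with a hypothetical published pretesting effect of 0.3 (SE 0.1), an observed gain of 0.6 falls above the interval's upper bound of about 0.50, while a gain of 0.4 does not.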
Peer reviewed
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis
Peer reviewed
Hubbard, Raymond; Ryan, Patricia A. – Educational and Psychological Measurement, 2000
Examined the historical growth in the popularity of statistical significance testing using a random sample of data from 12 American Psychological Association journals. Results replicate and extend findings from a study that used only one such journal. Discusses the role of statistical significance testing and the use of replication and…
Descriptors: Meta Analysis, Psychological Testing, Scholarly Journals, Statistical Significance
Peer reviewed
Mick, David Glen – Educational and Psychological Measurement, 2000
Suggests that the call for more pointed graduate education and more affirmative journal policies on replication-extension made by R. Hubbard and P. Ryan is useful, although inadequate and probably pointless. Statistical significance testing appears to be here to stay despite the charge that it is of "marginal scientific value." (SLD)
Descriptors: Graduate Study, Higher Education, Meta Analysis, Psychological Testing
Peer reviewed
Stewart, David W. – Educational and Psychological Measurement, 2000
Suggests that replication research and meta-analysis are not substitutes for statistical significance testing, but rather, like measures of effect size, they are complements to statistical significance testing. Significance testing does provide a means for determining what might be usefully replicated. (SLD)
Descriptors: Effect Size, Meta Analysis, Psychological Testing, Scholarly Journals
Peer reviewed
Kover, Arthur J. – Educational and Psychological Measurement, 2000
The Hubbard and Ryan article is a little ingenuous in its implications for action. Both meta-analysis and replication have problems of their own, and each requires careful attention. Good measurement emphasizes proper sampling techniques and using whatever means are available to analyze data. (SLD)
Descriptors: Meta Analysis, Psychological Testing, Scholarly Journals, Statistical Significance
Peer reviewed
Winer, Russell S. – Educational and Psychological Measurement, 2000
Agrees with R. Hubbard and P. Ryan that statistical significance testing has had a negative impact in that some users have closed their minds to alternative approaches to conducting research. In marketing, the alternatives are not completely satisfactory, however, and researchers are likely to continue to rely on statistical significance testing.…
Descriptors: Meta Analysis, Psychological Testing, Scholarly Journals, Statistical Significance