ERIC Number: ED407423
Record Type: Non-Journal
Publication Date: 1997-Jan-23
Pages: 17
Abstractor: N/A
Reference Count: N/A
ISBN: N/A
ISSN: N/A
Ways To Explore the Replicability of Multivariate Results (Since Statistical Significance Testing Does Not).
Kier, Frederick J.
It is a common but false belief that statistical significance testing evaluates the replicability of results; in truth, statistical significance testing reveals nothing about result replicability. Since science is built on the replication of results, methods that assess replicability are important. This is particularly true when multivariate methods, which capitalize on sampling error, are used. This paper explores three methods that can give some indication of the replicability of results in multivariate analysis without repeating the study. The first method is cross-validation, a replication technique in which the entire sample is first run through the planned analysis and then randomly split into two (possibly unequal) parts so that separate analyses can be done on each part and the results compared. The jackknife is a second method of assessing replicability; it relies on partitioning out the impact of a particular subset of the data on an estimate derived from the total sample. The bootstrap, a third method of studying replicability, involves conceptually copying the data set into an infinitely large "mega" data set; many different samples are then drawn with replacement from this mega file, results are computed separately for each sample, and the estimates are averaged. The main drawback of all these internal replicability procedures is that their results are based on data from the single sample being analyzed. However, internal replication techniques are better than not addressing the issue at all. (Contains 18 references.) (SLD)
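
To make the three procedures described in the abstract concrete, the following is a minimal, hypothetical Python sketch (not taken from Kier's paper) that applies split-sample cross-validation, the jackknife, and the bootstrap to a simple ordinary least-squares slope. The simulated data, the slope helper, the split proportions, and the number of bootstrap resamples are all illustrative assumptions.

# Minimal sketch of three internal replicability checks, assuming a
# bivariate OLS slope as the statistic of interest. Only NumPy is used.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sample: one predictor, one outcome.
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)


def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)


# 1. Cross-validation: analyze the full sample, then randomly split it
#    and repeat the analysis on each part to see whether the estimate
#    holds up across subsets.
full_estimate = slope(x, y)
idx = rng.permutation(n)
part_a, part_b = idx[: n // 2], idx[n // 2 :]
cv_estimates = (slope(x[part_a], y[part_a]), slope(x[part_b], y[part_b]))

# 2. Jackknife: drop one observation at a time and recompute the
#    estimate, showing how much any single case drives the full-sample
#    result.
jackknife_estimates = np.array(
    [slope(np.delete(x, i), np.delete(y, i)) for i in range(n)]
)

# 3. Bootstrap: resample the data with replacement many times, compute
#    the statistic in each resample, and summarize the distribution.
n_boot = 2000
boot_estimates = np.empty(n_boot)
for b in range(n_boot):
    take = rng.integers(0, n, size=n)  # draw row indices with replacement
    boot_estimates[b] = slope(x[take], y[take])

print("full sample:", round(full_estimate, 3))
print("cross-validation parts:", [round(e, 3) for e in cv_estimates])
print("jackknife mean:", round(jackknife_estimates.mean(), 3))
print("bootstrap mean:", round(boot_estimates.mean(), 3))

As the abstract notes, all three checks reuse the same observed sample, so agreement among them suggests, but cannot guarantee, that the result would replicate in a new sample.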
Publication Type: Reports - Evaluative; Speeches/Meeting Papers
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A