50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 2,671 to 2,685 of 3,486 results
Peer reviewed
Bonett, Douglas G. – Educational and Psychological Measurement, 1982
Post-hoc blocking and analysis of covariance (ANCOVA) both employ a concomitant variable to increase statistical power relative to the completely randomized design. It is argued that the advantages attributed to the block design are not always valid and that there are circumstances when the ANCOVA would be preferred to post-hoc blocking.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Power (Statistics)
Peer reviewed
Redfering, David L.; Collins, Jackie – Educational and Psychological Measurement, 1982
Forty elementary students were administered the Bender-Gestalt Test using two techniques: Koppitz routine instructions and the Hutt testing-the-limits method. The mean number of Koppitz errors was approximately two greater than the number obtained using the Hutt technique. (Author/BW)
Descriptors: Comparative Analysis, Correlation, Elementary Education, Intelligence Tests
Peer reviewed
Pearlman, Charles – Educational and Psychological Measurement, 1982
Three indicators of effectance motivation were devised: two teacher ratings and a direct observation of a student's choice of either a hard or an easy problem. Data from 600 sixth graders indicate that these indicators are related to each other when IQ is controlled. (Author/BW)
Descriptors: Intelligence, Intermediate Grades, Measurement Techniques, Observation
Peer reviewed
Daniel, Wayne W.; And Others – Educational and Psychological Measurement, 1982
To test the use of Bayes's theorem to adjust for nonresponse bias, 600 hospitals were used in a simulated sample survey. On the basis of known information on five variables, Bayes's formula correctly predicted the status of 92 of the 100 "nonrespondents" relative to a sixth variable. (Author/BW)
Descriptors: Bayesian Statistics, Data Analysis, Data Collection, Hospitals
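The core calculation behind such a nonresponse adjustment is Bayes's theorem itself. A minimal sketch for a binary status follows; the probabilities used are purely illustrative and do not come from the study's hospital data.

```python
def posterior(prior, likelihood_if_status, likelihood_if_not):
    """P(status | evidence) via Bayes's theorem for a binary status.

    prior               : P(status) before seeing the evidence
    likelihood_if_status: P(evidence | status)
    likelihood_if_not   : P(evidence | not status)
    """
    numerator = prior * likelihood_if_status
    denominator = numerator + (1 - prior) * likelihood_if_not
    return numerator / denominator

# Illustrative numbers: prior 0.3, evidence twice as likely under the status.
p = posterior(0.3, 0.8, 0.4)
print(round(p, 4))  # 0.4615
```

In the survey setting, the "evidence" would be a nonrespondent's known values on auxiliary variables, and the posterior predicts the missing sixth variable.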
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
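The Serlin-Kaiser weighting procedure is not reproduced here, but the quantity it maximizes, coefficient alpha, can be computed directly from an item-score matrix. A minimal sketch of the standard (unweighted) Cronbach's alpha:

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a (subjects x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly parallel items yield alpha of 1.
x = np.array([[1, 1, 1], [2, 2, 2], [4, 4, 4]])
print(cronbach_alpha(x))
```

Option-scoring methods like the one in the abstract search for weights on each response option that make this quantity as large as possible.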
Peer reviewed
Faden, Vivian; Bobko, Philip – Educational and Psychological Measurement, 1982
Ridge regression offers advantages over ordinary least squares estimation when a validity shrinkage criterion is considered. Comparisons of cross-validated multiple correlations indicate that ridge estimation is superior when the predictors are multicollinear, the number of predictors is large relative to sample size, and the population multiple…
Descriptors: Correlation, Least Squares Statistics, Predictor Variables, Regression (Statistics)
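A minimal sketch of the two estimators being compared, in plain NumPy; the ridge constant `k`, the simulated data, and the no-intercept model are illustrative assumptions, and the study's cross-validation design is not reproduced. With nearly collinear predictors, ridge shrinks the coefficient vector relative to OLS.

```python
import numpy as np

def ols_coefs(X, y):
    """Ordinary least squares estimates: solve (X'X) b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge_coefs(X, y, k):
    """Ridge estimates: add k to the diagonal of X'X before solving."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Two nearly collinear predictors, illustrative data.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=50)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=50)

shrunk = np.linalg.norm(ridge_coefs(X, y, 1.0)) < np.linalg.norm(ols_coefs(X, y))
print(shrunk)  # True
```

The shrinkage is guaranteed for any positive `k`; the open question the paper addresses is whether the bias so introduced is repaid in smaller cross-validated prediction error.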
Peer reviewed
Smith, Malbert, III; And Others – Educational and Psychological Measurement, 1982
The degree to which it is possible to identify, from data in their cumulative folders, students likely to fail a high school competency test is investigated. Results indicated that some predictor variables could be used to identify those likely to fail Senior High Assessment of Reading Progress and the Test of Proficiency in Computational Skills.…
Descriptors: Academic Achievement, Academic Failure, Educational Diagnosis, Mathematics Achievement
Peer reviewed
Vegelius, Jan – Educational and Psychological Measurement, 1982
The possibility of using Q-analysis with nominal data is discussed, using the J-index as a measure of similarity between persons. An example is given in which ten persons sorted 16 playing cards into as many groups as they wished; a Q-analysis of these data yielded a natural two-dimensional structure. (Author/BW)
Descriptors: Correlation, Factor Analysis, Mathematical Models, Statistical Analysis
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1982
Rajaratnam, Cronbach and Gleser's generalizability formula for stratified-parallel tests and Raju's coefficient beta are generalized to estimate the reliability of a composite of criterion-referenced tests, where the parts have different cutting scores. (Author/GK)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mathematical Formulas, Scoring Formulas
Peer reviewed
Dawson-Saunders, Beth K. – Educational and Psychological Measurement, 1982
The canonical redundancy statistic, an estimate of the amount of shared variance between two sets of variables, exhibits an amount of bias similar to that of the first squared canonical correlation coefficient. Two formulae, Wherry and Olkin-Pratt, adequately correct the bias of the redundancy statistic. (Author/BW)
Descriptors: Correlation, Mathematical Formulas, Multivariate Analysis, Statistical Bias
Peer reviewed
Raju, Nambury S. – Educational and Psychological Measurement, 1982
A necessary and sufficient condition for a perfectly homogeneous test in the sense of Loevinger is stated and proved. Using this result, a formula for computing the maximum possible KR-20 when the test variance is assumed fixed is presented. A new index of test homogeneity is also presented and discussed. (Author/BW)
Descriptors: Mathematical Formulas, Mathematical Models, Multiple Choice Tests, Test Reliability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
Results in the engineering literature on "k out of n system reliability" can be used to characterize tests based on estimates of the probability of correctly determining whether the examinee knows the correct response. In particular, the minimum number of distractors required for multiple-choice tests can be empirically determined. (Author/BW)
Descriptors: Achievement Tests, Mathematical Models, Multiple Choice Tests, Test Format
Peer reviewed
Wang, Marilyn D. – Educational and Psychological Measurement, 1982
Formulas for estimating the population measure of effect strength are based on the assumption that sample sizes are proportional to the sizes of their respective treatment populations. Because this assumption is frequently violated, a general method of estimating effect strength for the one-factor, fixed-effects design is presented. (Author/BW)
Descriptors: Analysis of Variance, Estimation (Mathematics), Hypothesis Testing, Mathematical Models
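Wang's general estimator is not reproduced here; as a baseline, the usual sample effect-strength measure for a one-factor, fixed-effects design (eta squared, the ratio of between-groups to total sum of squares) can be sketched as:

```python
import numpy as np

def eta_squared(groups):
    """Eta squared for a one-factor design.

    groups: list of 1-D arrays of scores, one array per treatment group.
    """
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = np.sum((all_scores - grand_mean) ** 2)
    return ss_between / ss_total

# All variance lies between groups, so effect strength is 1.
print(eta_squared([np.array([1.0, 1.0]), np.array([3.0, 3.0])]))  # 1.0
```

Measures of this form implicitly weight groups by their sample sizes, which is exactly where the proportional-sampling assumption discussed in the abstract enters.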
Peer reviewed
Uebersax, John S. – Educational and Psychological Measurement, 1982
A more general method for calculating the Kappa measure of nominal rating agreement among multiple raters is presented. It can be used across a broad range of rating designs, including those in which raters vary with respect to their base rates and how many subjects they rate in common. (Author/BW)
Descriptors: Mathematical Formulas, Statistical Significance, Test Reliability
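Uebersax's generalization to raters with differing base rates and partially overlapping subject sets is not reproduced here. For reference, the standard multi-rater case it extends, with a fixed number of raters per subject (Fleiss' kappa), can be sketched as:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for nominal agreement among multiple raters.

    counts: (subjects x categories) matrix; entry [i, j] is the number of
    raters assigning subject i to category j. Every row must sum to the
    same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    # Per-subject observed agreement, then its mean.
    p_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from overall category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j**2)
    return (p_bar - p_e) / (1 - p_e)

# Three raters in perfect agreement on three subjects.
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # 1.0
```

This baseline assumes every subject is rated by the same number of raters drawn with common base rates, precisely the restrictions the abstract's method relaxes.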
Peer reviewed
Green, Kathy; And Others – Educational and Psychological Measurement, 1982
Achievement test reliability and validity as a function of ability were determined for multiple sections of a large undergraduate French class. Results did not support previous arguments that decreasing the number of options results in a more efficient test for high-level examinees, but less efficient for low-level examinees. (Author/GK)
Descriptors: Academic Ability, Comparative Analysis, Higher Education, Multiple Choice Tests