50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15th, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 17 results
Peer reviewed
Hardin, Andrew; Marcoulides, George A. – Educational and Psychological Measurement, 2011
The recent flurry of articles on formative measurement, particularly in the information systems literature, appears to be symptomatic of a much larger problem. Despite significant objections by methodological experts, these articles continue to deliver a predominately pro formative measurement message to researchers who rapidly incorporate these…
Descriptors: Measurement, Theories, Statistical Analysis, Psychometrics
Peer reviewed
MacCann, Robert G. – Educational and Psychological Measurement, 2008
It is shown that the Angoff and bookmarking cut scores are examples of true score equating that in the real world must be applied to observed scores. In the context of defining minimal competency, the percentage "failed" by such methods is a function of the length of the measuring instrument. It is argued that this length is largely arbitrary,…
Descriptors: True Scores, Cutting Scores, Minimum Competencies, Scores
Peer reviewed
Kirk, Roger E. – Educational and Psychological Measurement, 2001
Makes the case that science is best served when researchers focus on the size of effects and their practical significance. Advocates the use of confidence intervals for deciding whether chance or sampling variability is an unlikely explanation for an observed effect. Calls for more emphasis on effect sizes in the next edition of the American…
Descriptors: Effect Size, Hypothesis Testing, Psychology, Research Reports
Peer reviewed
Schmeidler, James – Educational and Psychological Measurement, 1978
The basic assumption of Cooper's nonparametric test for trend (EJ 125 069) is questioned. It is contended that the proper assumption alters the distribution of the statistic and reduces its usefulness. (JKS)
Descriptors: Analysis of Variance, Hypothesis Testing, Nonparametric Statistics, Research Design
Peer reviewed
Branthwaite, Alan; Trueman, Mark – Educational and Psychological Measurement, 1985
This paper presents two major criticisms of the construct validity investigation of the McCarthy Scales of Children's Abilities conducted by Watkins and Wiebe (1980). The criticisms pertain to the nature of the data presented and the accuracy and appropriateness of the statistical procedures employed. (BS)
Descriptors: Aptitude Tests, Early Childhood Education, Multiple Regression Analysis, Predictor Variables
Peer reviewed
Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The presently viewed role and utility of measures of instructional sensitivity are summarized. A case is made that the rationale for the assessment of instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Peer reviewed
Calberg, Magda – Educational and Psychological Measurement, 1984
A discussion of the problems inherent in verbal test taxonomies not based on the schematic domain of logic is presented. This discussion is made in the context of syntactically complex tests. A suggested structure for a logically constructed taxonomy developed for use in a federal testing program is delineated. (Author/DWH)
Descriptors: Adults, Classification, Federal Programs, Logic
Peer reviewed
Gillmore, Gerald M.; And Others – Educational and Psychological Measurement, 1983
This article argues that the 1981 work of Carbno presented unwarranted conclusions because its design included an improper operationalization of the object of measurement, given the problems addressed, and because the sample sizes employed were too small. (Author/PN)
Descriptors: Generalizability Theory, Higher Education, Research Design, Research Problems
Peer reviewed
Benson, Jeri – Educational and Psychological Measurement, 1981
A review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data. The effect of failing to consider these areas on the content validity of achievement test scores is discussed. (Author/GK)
Descriptors: Achievement Tests, Elementary Secondary Education, Literature Reviews, Scores
Peer reviewed
Kuder, Frederic – Educational and Psychological Measurement, 1980
Traditional vocational aptitude tests attempt to match the counselee's responses to those of a large group of people in a certain occupation. Instead, person matching attempts to match the counselee's responses to those of individuals who are satisfied with their occupations. (BW)
Descriptors: Career Choice, Individual Characteristics, Interest Inventories, Job Satisfaction
Peer reviewed
Fleming, James S. – Educational and Psychological Measurement, 1981
The perfunctory use of factor scores in conjunction with regression analysis is inappropriate for many purposes. It is suggested that factoring methods are most suitable for independent variable sets when some consideration has been given to the nature of the domain, which is implied by the predictors. (Author/BW)
Descriptors: Factor Analysis, Multiple Regression Analysis, Predictor Variables, Research Problems
Peer reviewed
Rae, Gordon – Educational and Psychological Measurement, 1982
Analyses of artificial data involving repeated, related binary measures to different samples suggest that Tideman's generalized chi-square statistic and conventional repeated-measures analysis of variance do not produce conflicting outcomes. Provided the appropriate assumptions are met, analysis of variance may provide a more versatile approach.…
Descriptors: Analysis of Variance, Hypothesis Testing, Research Design, Statistical Analysis
Peer reviewed
Howell, David C.; McConaughy, Stephanie H. – Educational and Psychological Measurement, 1982
It is argued here that the choice of the appropriate method for calculating least squares analysis of variance with unequal sample sizes depends upon the question the experimenter wants to answer about the data. The different questions reflect different null hypotheses. An example is presented using two alternative methods. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Least Squares Statistics, Mathematical Models
Peer reviewed
Bonett, Douglas G. – Educational and Psychological Measurement, 1982
Post-hoc blocking and analysis of covariance (ANCOVA) both employ a concomitant variable to increase statistical power relative to the completely randomized design. It is argued that the advantages attributed to the block design are not always valid and that there are circumstances when the ANCOVA would be preferred to post-hoc blocking.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Power (Statistics)
Peer reviewed
Kuder, Frederic – Educational and Psychological Measurement, 1991
Recommendations are made for the appropriate use and identification of traditional Kuder-Richardson formulas for the estimation of reliability. "Alpha" should be used for reliabilities estimated for tests or scales composed of items yielding scores distributed on more than two points. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Mathematical Formulas, Scores