Showing 1 to 15 of 18 results
Peer reviewed
Direct link
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
Peer reviewed
Direct link
Hardin, Andrew; Marcoulides, George A. – Educational and Psychological Measurement, 2011
The recent flurry of articles on formative measurement, particularly in the information systems literature, appears to be symptomatic of a much larger problem. Despite significant objections by methodological experts, these articles continue to deliver a predominately pro formative measurement message to researchers who rapidly incorporate these…
Descriptors: Measurement, Theories, Statistical Analysis, Psychometrics
Peer reviewed
Direct link
MacCann, Robert G. – Educational and Psychological Measurement, 2008
It is shown that the Angoff and bookmarking cut scores are examples of true score equating that in the real world must be applied to observed scores. In the context of defining minimal competency, the percentage "failed" by such methods is a function of the length of the measuring instrument. It is argued that this length is largely…
Descriptors: True Scores, Cutting Scores, Minimum Competencies, Scores
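The length dependence MacCann describes can be illustrated with a toy binomial model (this is an illustrative sketch, not the article's own analysis): an examinee whose true proportion correct lies above a 60% cut can still fall below it on an observed score, with a probability that depends on how many items the test contains. The true score (0.65) and cut (0.60) below are hypothetical.

```python
from math import comb

def fail_probability(p_true, cut_prop, n_items):
    """P(observed proportion correct < cut) under a binomial model
    for an examinee whose true proportion correct is p_true."""
    cut_score = cut_prop * n_items
    return sum(comb(n_items, k) * p_true**k * (1 - p_true)**(n_items - k)
               for k in range(n_items + 1) if k < cut_score)

# The same above-the-cut examinee "fails" at different rates
# depending purely on test length.
for n in (10, 20, 40, 80):
    print(n, round(fail_probability(0.65, 0.60, n), 3))
```

Under this model the failure probability shrinks as the test lengthens, which is one way to read the abstract's point that the percentage failed is partly an artifact of instrument length.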
Peer reviewed
Branthwaite, Alan; Trueman, Mark – Educational and Psychological Measurement, 1985
This paper presents two major criticisms of the construct validity investigation of the McCarthy Scales of Children's Abilities conducted by Watkins and Wiebe (1980). The criticisms pertain to the nature of the data presented and the accuracy and appropriateness of the statistical procedures employed. (BS)
Descriptors: Aptitude Tests, Early Childhood Education, Multiple Regression Analysis, Predictor Variables
Peer reviewed
Kuder, Frederic – Educational and Psychological Measurement, 1980
Traditional vocational aptitude tests attempt to match the counselee's responses to those of a large group of people in a certain occupation. Instead, person matching attempts to match the counselee's responses to those of individuals who are satisfied with their occupations. (BW)
Descriptors: Career Choice, Individual Characteristics, Interest Inventories, Job Satisfaction
Peer reviewed
Shine, Lester C. II – Educational and Psychological Measurement, 1980
When reporting results, researchers must not change predetermined significance levels. Such attempts to make results more significant are statistically inaccurate, illogical, and unethical. American Psychological Association standards for reporting significance should be more explicit. (CP)
Descriptors: Ethics, Hypothesis Testing, Research Design, Research Reports
Peer reviewed
Kuder, Frederic – Educational and Psychological Measurement, 1991
Recommendations are made for the appropriate use and identification of traditional Kuder-Richardson formulas for the estimation of reliability. "Alpha" should be used for reliabilities estimated for tests or scales composed of items yielding scores distributed on more than two points. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Mathematical Formulas, Scores
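The coefficient Kuder recommends for multi-point items is the familiar coefficient alpha, which reduces to KR-20 for dichotomous (0/1) items. A minimal sketch of the computation, using a hypothetical examinee-by-item score matrix:

```python
import numpy as np

def coefficient_alpha(scores):
    """Coefficient alpha for an (examinees x items) score matrix.
    For 0/1 items this formula reduces to KR-20."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 examinees, 4 Likert-type items scored 1-5
data = [[4, 5, 4, 5],
        [2, 3, 2, 3],
        [5, 5, 4, 4],
        [1, 2, 1, 2],
        [3, 3, 3, 4]]
print(round(coefficient_alpha(data), 3))
```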
Peer reviewed
Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The presently viewed role and utility of measures of instructional sensitivity are summarized. A case is made that the rationale for the assessment of instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Peer reviewed
Gillmore, Gerald M.; And Others – Educational and Psychological Measurement, 1983
This article argues that the 1981 work of Carbno presented unwarranted conclusions because its design included an improper operationalization of the object of measurement, given the problems addressed, and because the sample sizes employed were too small. (Author/PN)
Descriptors: Generalizability Theory, Higher Education, Research Design, Research Problems
Peer reviewed
Calberg, Magda – Educational and Psychological Measurement, 1984
A discussion of the problems inherent in verbal test taxonomies not based on the schematic domain of logic is presented. This discussion is made in the context of syntactically complex tests. A suggested structure for a logically constructed taxonomy developed for use in a federal testing program is delineated. (Author/DWH)
Descriptors: Adults, Classification, Federal Programs, Logic
Peer reviewed
Fisicaro, Sebastiano A.; Vance, Robert J. – Educational and Psychological Measurement, 1994
This article argues that the correlation (r) measure of halo is not conceptually more appropriate than the standard deviation (SD) measure. It also describes conditions under which halo effects occur and when the SD and r measures can be used. Neither measure is uniformly superior to the other. (SLD)
Descriptors: Correlation, Evaluation Methods, Interrater Reliability, Measurement Techniques
Peer reviewed
Kirk, Roger E. – Educational and Psychological Measurement, 2001
Makes the case that science is best served when researchers focus on the size of effects and their practical significance. Advocates the use of confidence intervals for deciding whether chance or sampling variability is an unlikely explanation for an observed effect. Calls for more emphasis on effect sizes in the next edition of the American…
Descriptors: Effect Size, Hypothesis Testing, Psychology, Research Reports
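The effect-size-plus-interval reporting Kirk advocates can be sketched with a pooled-SD standardized mean difference (Cohen's d) and a confidence interval for the raw mean difference. The two groups below are hypothetical, and the critical t value is supplied by the caller (here 2.228, the two-tailed .05 value for 10 degrees of freedom):

```python
import math
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

def mean_diff_ci(group1, group2, t_crit):
    """Confidence interval for the raw mean difference; t_crit comes
    from a t table for n1 + n2 - 2 degrees of freedom."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    diff = mean(group1) - mean(group2)
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical treatment and control scores
treatment = [52, 58, 61, 55, 60, 57]
control = [50, 49, 54, 51, 48, 53]
d = cohens_d(treatment, control)
lo, hi = mean_diff_ci(treatment, control, t_crit=2.228)
```

An interval that excludes zero plays the role of the significance test, while the interval's width and the size of d convey the practical magnitude the abstract emphasizes.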
Peer reviewed
Benson, Jeri – Educational and Psychological Measurement, 1981
A review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data. The effect of failing to consider these areas on the content validity of achievement test scores is discussed. (Author/GK)
Descriptors: Achievement Tests, Elementary Secondary Education, Literature Reviews, Scores
Peer reviewed
Howell, David C.; McConaughy, Stephanie H. – Educational and Psychological Measurement, 1982
It is argued here that the choice of the appropriate method for calculating least squares analysis of variance with unequal sample sizes depends upon the question the experimenter wants to answer about the data. The different questions reflect different null hypotheses. An example is presented using two alternative methods. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Least Squares Statistics, Mathematical Models
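The point that different unequal-n methods test different null hypotheses can be made concrete with marginal means. In a hypothetical unbalanced 2x2 design, a main effect defined on unweighted cell means (each cell counting equally, the regression-style hypothesis) can vanish while the same effect defined on sample-size-weighted means (the weighted-means hypothesis) does not:

```python
import numpy as np

# Hypothetical 2x2 design: cell means for factor A at each level of B,
# with badly unbalanced cell sizes.
cell_means = np.array([[10.0, 20.0],   # A1 at B1, B2
                       [12.0, 18.0]])  # A2 at B1, B2
cell_ns = np.array([[30, 5],
                    [5, 30]])

# Unweighted marginal means of A: the null tested when every cell
# mean counts equally.
unweighted_A = cell_means.mean(axis=1)

# Weighted marginal means of A: the null tested when cells are
# weighted by their sample sizes.
weighted_A = (cell_means * cell_ns).sum(axis=1) / cell_ns.sum(axis=1)

print(unweighted_A)  # [15. 15.] -> no A effect under the unweighted null
print(weighted_A)    # A1 ≈ 11.43, A2 ≈ 17.14 -> an A effect under the weighted null
```

Which marginal mean is the right target depends on the experimenter's question, which is the abstract's point: the competing least-squares methods are answers to different questions, not better and worse answers to the same one.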
Peer reviewed
Bonett, Douglas G. – Educational and Psychological Measurement, 1982
Post-hoc blocking and analysis of covariance (ANCOVA) both employ a concomitant variable to increase statistical power relative to the completely randomized design. It is argued that the advantages attributed to the block design are not always valid and that there are circumstances when the ANCOVA would be preferred to post-hoc blocking.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Power (Statistics)
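One circumstance in which ANCOVA beats post-hoc blocking is when the outcome is linearly related to the concomitant variable: the regression adjustment removes all of the covariate's variance, while coarse blocks leave within-block covariate variance in the error term. A simulation sketch under that assumption (the data-generating model is hypothetical, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                # concomitant variable
y = 2.0 * x + rng.normal(size=n)      # outcome linearly related to x

# ANCOVA-style adjustment: regress y on x, keep the residual variance.
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
ancova_resid_var = np.var(y - slope * x, ddof=1)

# Post-hoc blocking: split on quartiles of x, pool within-block variance.
blocks = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
block_resid_var = np.mean([np.var(y[blocks == b], ddof=1) for b in range(4)])

print(ancova_resid_var < block_resid_var)  # smaller error variance for ANCOVA here
```

Smaller error variance means greater power, so in this linear case ANCOVA is the stronger design, consistent with the abstract's claim that blocking's advertised advantages do not always hold.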