Showing 1 to 15 of 19 results
Peer reviewed
Raykov, Tenko; DiStefano, Christine – Educational and Psychological Measurement, 2022
A latent variable modeling-based procedure is discussed that permits ready point and interval estimation of the design effect index in multilevel settings using widely circulated software. The method provides useful information about the relationship of important parameter standard errors when accounting for clustering effects relative to…
Descriptors: Hierarchical Linear Modeling, Correlation, Evaluation, Research Design
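The design effect index the abstract refers to is conventionally the Kish formula, DEFF = 1 + (m − 1)·ICC, where m is the (average) cluster size and ICC the intraclass correlation. A minimal sketch, with illustrative numbers not taken from the article:

```python
# Kish design effect for cluster sampling (illustrative sketch; the
# article's latent-variable estimation procedure is not reproduced here).

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

# With, say, 20 students per classroom and ICC = 0.10, variances of
# estimates computed as if observations were independent are understated
# by a factor of DEFF; standard errors inflate by sqrt(DEFF).
deff = design_effect(20, 0.10)
print(deff)          # 2.9
print(deff ** 0.5)   # ~1.70: SEs roughly 70% larger than under independence
```

The square-root relationship is why even a modest ICC matters once clusters are large.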
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha proposed in Zumbo et al. as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
Peer reviewed
Hardin, Andrew; Marcoulides, George A. – Educational and Psychological Measurement, 2011
The recent flurry of articles on formative measurement, particularly in the information systems literature, appears to be symptomatic of a much larger problem. Despite significant objections by methodological experts, these articles continue to deliver a predominantly pro-formative-measurement message to researchers who rapidly incorporate these…
Descriptors: Measurement, Theories, Statistical Analysis, Psychometrics
Peer reviewed
MacCann, Robert G. – Educational and Psychological Measurement, 2008
It is shown that the Angoff and bookmarking cut scores are examples of true score equating that in the real world must be applied to observed scores. In the context of defining minimal competency, the percentage "failed" by such methods is a function of the length of the measuring instrument. It is argued that this length is largely…
Descriptors: True Scores, Cutting Scores, Minimum Competencies, Scores
Peer reviewed
Branthwaite, Alan; Trueman, Mark – Educational and Psychological Measurement, 1985
This paper presents two major criticisms of the construct validity investigation of the McCarthy Scales of Children's Abilities conducted by Watkins and Wiebe (1980). The criticisms pertain to the nature of the data presented and the accuracy and appropriateness of the statistical procedures employed. (BS)
Descriptors: Aptitude Tests, Early Childhood Education, Multiple Regression Analysis, Predictor Variables
Peer reviewed
Howell, David C.; McConaughy, Stephanie H. – Educational and Psychological Measurement, 1982
It is argued here that the choice of the appropriate method for calculating least squares analysis of variance with unequal sample sizes depends upon the question the experimenter wants to answer about the data. The different questions reflect different null hypotheses. An example is presented using two alternative methods. (Author/BW)
Descriptors: Analysis of Variance, Hypothesis Testing, Least Squares Statistics, Mathematical Models
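The point that different methods answer different questions can be made concrete with a small made-up layout: with unequal cell sizes, unweighted marginal means (each cell counted equally) and weighted marginal means (cells counted by n) can tell opposite stories about a factor's effect.

```python
# Illustrative example (not from the article): two levels of factor A,
# each observed in two cells of factor B with very unequal n's.
# Each cell is recorded as (cell mean, cell n).

cells = {
    "a1": [(10.0, 5), (20.0, 45)],
    "a2": [(10.0, 45), (20.0, 5)],
}

for level, cell in cells.items():
    unweighted = sum(m for m, _ in cell) / len(cell)
    weighted = sum(m * n for m, n in cell) / sum(n for _, n in cell)
    print(level, unweighted, weighted)

# Unweighted marginal means are equal (15 vs 15): no A effect under a
# null of equally weighted subpopulations. Weighted means (19 vs 11)
# differ sharply, so a least squares method that weights cells by n
# tests a genuinely different null hypothesis.
```

This is the crux of the abstract's argument: the "right" unequal-n analysis depends on which of these nulls matches the experimenter's question.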
Peer reviewed
Bonett, Douglas G. – Educational and Psychological Measurement, 1982
Post-hoc blocking and analysis of covariance (ANCOVA) both employ a concomitant variable to increase statistical power relative to the completely randomized design. It is argued that the advantages attributed to the block design are not always valid and that there are circumstances when the ANCOVA would be preferred to post-hoc blocking.…
Descriptors: Analysis of Covariance, Comparative Analysis, Hypothesis Testing, Power (Statistics)
Peer reviewed
Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The presently viewed role and utility of measures of instructional sensitivity are summarized. A case is made that the rationale for the assessment of instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Peer reviewed
Calberg, Magda – Educational and Psychological Measurement, 1984
A discussion of the problems inherent in verbal test taxonomies not based on the schematic domain of logic is presented. This discussion is made in the context of syntactically complex tests. A suggested structure for a logically constructed taxonomy developed for use in a federal testing program is delineated. (Author/DWH)
Descriptors: Adults, Classification, Federal Programs, Logic
Peer reviewed
Rae, Gordon – Educational and Psychological Measurement, 1982
Analyses of artificial data involving repeated, related binary measures applied to different samples suggest that Tideman's generalized chi-square statistic and conventional repeated-measures analysis of variance do not produce conflicting outcomes. Provided the appropriate assumptions are met, analysis of variance may provide a more versatile approach.…
Descriptors: Analysis of Variance, Hypothesis Testing, Research Design, Statistical Analysis
Peer reviewed
Gillmore, Gerald M.; And Others – Educational and Psychological Measurement, 1983
This article argues that the 1981 work of Carbno presented unwarranted conclusions because its design included an improper operationalization of the object of measurement, given the problems addressed, and because the sample sizes employed were too small. (Author/PN)
Descriptors: Generalizability Theory, Higher Education, Research Design, Research Problems
Peer reviewed
Fleming, James S. – Educational and Psychological Measurement, 1981
The perfunctory use of factor scores in conjunction with regression analysis is inappropriate for many purposes. It is suggested that factoring methods are most suitable for independent variable sets when some consideration has been given to the nature of the domain, which is implied by the predictors. (Author/BW)
Descriptors: Factor Analysis, Multiple Regression Analysis, Predictor Variables, Research Problems
Peer reviewed
Fisicaro, Sebastiano A.; Vance, Robert J. – Educational and Psychological Measurement, 1994
This article presents arguments that the correlation measure "r" of halo is not conceptually more appropriate than the standard deviation (SD) measure. It also describes conditions under which halo effects occur and when the SD and r measures can be used. Neither measure is uniformly superior to the other. (SLD)
Descriptors: Correlation, Evaluation Methods, Interrater Reliability, Measurement Techniques
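The two halo indices the abstract contrasts can be sketched on a made-up ratings matrix (rows = ratees, columns = performance dimensions): the SD measure looks at how little each ratee's ratings spread across dimensions, the r measure at how strongly dimension columns correlate. All numbers below are illustrative.

```python
# Hypothetical ratings: 4 ratees scored on 3 dimensions.
ratings = [
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
]

def mean(xs): return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (sd(xs) * sd(ys) * (len(xs) - 1))

# SD measure of halo: average within-ratee spread across dimensions
# (smaller spread = stronger halo).
sd_halo = mean([sd(row) for row in ratings])

# r measure of halo: average correlation between dimension columns
# (larger r = stronger halo).
cols = list(zip(*ratings))
r_halo = mean([corr(cols[i], cols[j]) for i, j in [(0, 1), (0, 2), (1, 2)]])

print(round(sd_halo, 2), round(r_halo, 2))
```

As the abstract argues, the two indices capture related but distinct aspects of rater behavior, which is why neither is uniformly superior.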
Peer reviewed
Kuder, Frederic – Educational and Psychological Measurement, 1991
Recommendations are made for the appropriate use and identification of traditional Kuder-Richardson formulas for the estimation of reliability. "Alpha" should be used for reliabilities estimated for tests or scales composed of items yielding scores distributed on more than two points. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Mathematical Formulas, Scores
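The distinction Kuder draws can be illustrated with coefficient alpha, of which KR-20 is the special case for dichotomously scored items. A minimal sketch with made-up item data:

```python
# Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / total
# score variance). For 0/1 items this reduces to KR-20; for items scored
# on more than two points it is the "alpha" the abstract recommends.

def coefficient_alpha(items):
    """items: list of per-item score lists, one entry per examinee."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance, consistent across numerator/denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical 3-item test, 4 examinees, dichotomous scoring (= KR-20).
dichotomous = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
print(round(coefficient_alpha(dichotomous), 3))  # 0.632
```

The same function handles polytomous items unchanged, which is exactly why "alpha" rather than "KR-20" is the appropriate label in that case.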
Peer reviewed
Kirk, Roger E. – Educational and Psychological Measurement, 2001
Makes the case that science is best served when researchers focus on the size of effects and their practical significance. Advocates the use of confidence intervals for deciding whether chance or sampling variability is an unlikely explanation for an observed effect. Calls for more emphasis on effect sizes in the next edition of the American…
Descriptors: Effect Size, Hypothesis Testing, Psychology, Research Reports
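Kirk's recommendation (report the size of an effect with an interval, not just a significance decision) can be sketched with Cohen's d and a large-sample confidence interval. The data and the normal-approximation standard error below are illustrative assumptions, not taken from the article.

```python
import math

def cohens_d(x, y):
    """Standardized mean difference with a pooled SD."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

def d_ci(d, nx, ny, z=1.96):
    """Approximate 95% CI using a common large-sample SE formula for d."""
    se = math.sqrt((nx + ny) / (nx * ny) + d * d / (2 * (nx + ny)))
    return d - z * se, d + z * se

treat = [5.1, 6.0, 5.5, 6.2, 5.8, 6.4]
control = [4.8, 5.2, 4.9, 5.6, 5.0, 5.3]
d = cohens_d(treat, control)
lo, hi = d_ci(d, len(treat), len(control))
print(round(d, 2), (round(lo, 2), round(hi, 2)))
```

An interval that excludes zero supports the same yes/no decision as a test, but the interval's width and location also convey how large and how precisely estimated the effect is.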