Showing 1 to 15 of 59 results
Peer reviewed
Foster, Robert C. – Educational and Psychological Measurement, 2021
This article presents some equivalent forms of the common Kuder-Richardson Formula 21 and 20 estimators for nondichotomous data belonging to certain other exponential families, such as Poisson count data, exponential data, or geometric counts of trials until failure. Using the generalized framework of Foster (2020), an equation for the reliability…
Descriptors: Test Reliability, Data, Computation, Mathematical Formulas
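The article above generalizes KR-20 and KR-21 beyond dichotomous data; as background, the classical dichotomous forms can be sketched in a few lines. This is a minimal illustration, not the article's generalized estimators, and the convention shown uses population variances throughout (conventions differ on sample vs. population variance):

```python
from statistics import pvariance

def kr20(item_scores):
    """KR-20 for a list of examinees' 0/1 item-score vectors:
    (k/(k-1)) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores[0])                      # number of items
    totals = [sum(person) for person in item_scores]
    item_vars = [pvariance([person[i] for person in item_scores])
                 for i in range(k)]              # equals p_i * q_i for 0/1 items
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

def kr21(item_scores):
    """KR-21: same idea, but assumes all items share one difficulty,
    so only the mean and variance of total scores are needed."""
    k = len(item_scores[0])
    totals = [sum(person) for person in item_scores]
    mean, var = sum(totals) / len(totals), pvariance(totals)
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))
```

On the same dichotomous data, KR-21 never exceeds KR-20, with equality when all item difficulties are equal.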
Peer reviewed
Hayes, Timothy; Usami, Satoshi – Educational and Psychological Measurement, 2020
Recently, quantitative researchers have shown increased interest in two-step factor score regression (FSR) approaches to structural model estimation. A particularly promising approach proposed by Croon involves first extracting factor scores for each latent factor in a larger model, then correcting the variance-covariance matrix of the factor…
Descriptors: Regression (Statistics), Structural Equation Models, Statistical Bias, Correlation
Peer reviewed
Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020
In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic used to evaluate the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned and new ASE formulas were derived from a general…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
Peer reviewed
Conger, Anthony J. – Educational and Psychological Measurement, 2017
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…
Descriptors: Interrater Reliability, Evaluators, Accuracy, Statistical Analysis
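The marginal-frequency effect Conger describes is easy to see numerically. The sketch below computes Cohen's kappa, (p_o - p_e) / (1 - p_e), from a square confusion matrix; the function name is illustrative, not from the article:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix: rows are rater A's
    categories, columns are rater B's."""
    m = len(confusion)
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(m)) / n   # observed agreement
    row_marg = [sum(row) / n for row in confusion]
    col_marg = [sum(confusion[i][j] for i in range(m)) / n for j in range(m)]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

With balanced marginals, `[[45, 5], [5, 45]]` has 90% raw agreement and kappa = 0.8; with skewed marginals, `[[85, 5], [5, 5]]` has the same 90% raw agreement but kappa of only about 0.44, because expected chance agreement rises.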
Peer reviewed
Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio – Educational and Psychological Measurement, 2017
The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…
Descriptors: Adaptive Testing, Mental Disorders, Computer Assisted Testing, Psychological Evaluation
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
Peer reviewed
Guo, Jiin-Huarng; Luh, Wei-Ming – Educational and Psychological Measurement, 2008
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
Descriptors: Sample Size, Monte Carlo Methods, Statistical Analysis, Mathematical Formulas
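Guo and Luh's sample-size procedure itself is not reproduced here. As background on why Welch's test needs special handling, it replaces the pooled-variance degrees of freedom with the Welch-Satterthwaite approximation, which depends on the (unequal) sample variances:

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite degrees of freedom for two samples with
    sample variances s1_sq, s2_sq and sizes n1, n2:
    (s1^2/n1 + s2^2/n2)^2 / [(s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1)]."""
    a, b = s1_sq / n1, s2_sq / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
```

When the variances and sample sizes are equal, this recovers the classical n1 + n2 - 2; with unequal variances it shrinks toward the smaller sample's degrees of freedom.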
Peer reviewed
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2006
One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…
Descriptors: Psychometrics, Probability, Simulation, Item Response Theory
Peer reviewed
Raju, Nambury S.; Oshima, T.C. – Educational and Psychological Measurement, 2005
Two new prophecy formulas for estimating item response theory (IRT)-based reliability of a shortened or lengthened test are proposed. Some of the relationships between the two formulas, one of which is identical to the well-known Spearman-Brown prophecy formula, are examined and illustrated. The major assumptions underlying these formulas are…
Descriptors: Item Response Theory, Test Reliability, Evaluation Methods, Computation
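One of Raju and Oshima's two formulas coincides with the classical Spearman-Brown prophecy formula, which can be sketched directly (the IRT-based variants in the article are not reproduced here):

```python
def spearman_brown(rel, factor):
    """Projected reliability when test length is multiplied by `factor`
    (factor > 1 lengthens, 0 < factor < 1 shortens), assuming the added
    or retained items are parallel to the originals:
    factor * rel / (1 + (factor - 1) * rel)."""
    return factor * rel / (1 + (factor - 1) * rel)
```

For example, doubling a test with reliability 0.60 projects a reliability of 0.75, and halving a test with reliability 0.75 projects 0.60; the two operations are inverses.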
Peer reviewed
Bernaards, Coen A.; Jennrich, Robert I. – Educational and Psychological Measurement, 2005
Almost all modern rotation of factor loadings is based on optimizing a criterion, for example, the quartimax criterion for quartimax rotation. Recent advancements in numerical methods have led to general orthogonal and oblique algorithms for optimizing essentially any rotation criterion. All that is required for a specific application is a…
Descriptors: Computer Software, Factor Analysis, Evaluation Methods, Statistical Analysis
Peer reviewed
Alexander, Ralph A.; And Others – Educational and Psychological Measurement, 1987
This article presents an improved approximation formula for correcting correlation coefficients computed from distributions that are range restricted on the independent variable, the dependent variable, or both. (BS)
Descriptors: Correlation, Estimation (Mathematics), Mathematical Formulas, Sampling
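Alexander et al.'s improved approximation for the both-variables case is not reproduced here; for context, the standard single-variable correction (Thorndike's Case 2 for direct restriction on the predictor) can be sketched as follows. The function name is illustrative:

```python
from math import sqrt

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Thorndike Case 2 correction for direct range restriction on the
    predictor: r is the correlation in the restricted sample, and the two
    SDs are the predictor's SD in the restricted and unrestricted groups."""
    u = sd_unrestricted / sd_restricted
    return r * u / sqrt(1 - r * r + r * r * u * u)
```

When the unrestricted SD is twice the restricted SD, an observed r of 0.30 corrects upward to roughly 0.53; when the SDs are equal, the correlation is unchanged.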
Peer reviewed
Davenport, Ernest C., Jr. – Educational and Psychological Measurement, 1987
The G coefficient is proven to be an inaccurate indicator of latent relations and, therefore, unacceptable as a measure of association when used in R-type factor analysis. The G coefficient is useful in measuring another type of relation, the simple agreement of scores for a pair of items. (Author/BS)
Descriptors: Correlation, Factor Analysis, Mathematical Formulas, Measurement Techniques
Peer reviewed
Fowler, Robert L. – Educational and Psychological Measurement, 1987
This paper develops a general method for comparing treatment magnitudes for research employing multiple treatment fixed effects analysis of variance designs, which may be used for main effects with any number of levels without regard to directionality. (Author/BS)
Descriptors: Analysis of Variance, Comparative Analysis, Effect Size, Hypothesis Testing
Peer reviewed
Glutting, Joseph J.; And Others – Educational and Psychological Measurement, 1987
This paper discusses the basic theory underlying confidence limits and presents reasons why psychologists should incorporate confidence ranges in their psychodiagnostic reports. Four methods for establishing confidence limits are compared. Three of the methods involve estimated true scores, and the fourth is the standard error of measurement…
Descriptors: Error of Measurement, Mathematical Formulas, Psychological Evaluation, Scores
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR 20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas