Showing 1 to 15 of 502 results
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
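Editorial note: the kind of preprocessing screen this abstract points to can be illustrated with two widely used IER indices. The sketch below is not the authors' procedure; the function names, the simulated 1-5 response matrix, and the inserted straight-liner are all assumptions made for illustration.

```python
import numpy as np

def longstring(responses):
    """Longest run of identical consecutive answers for each respondent."""
    runs = []
    for row in responses:
        best = current = 1
        for prev, cur in zip(row[:-1], row[1:]):
            current = current + 1 if cur == prev else 1
            best = max(best, current)
        runs.append(best)
    return np.array(runs)

def person_total_correlation(responses):
    """Correlation of each respondent's answers with the item means (0 if constant)."""
    item_means = responses.mean(axis=0)
    return np.array([
        0.0 if row.std() == 0 else np.corrcoef(row, item_means)[0, 1]
        for row in responses
    ])

rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(200, 20))   # 200 respondents, 20 items, 1-5 scale
data[0] = 3                                 # an inserted straight-liner
print(longstring(data)[0], person_total_correlation(data)[0])   # flagged: run of 20, correlation 0.0
```

Respondents with an unusually long run of identical answers, or a near-zero correlation with the typical response profile, are the ones such screens would flag for further inspection.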
Peer reviewed
Olvera Astivia, Oscar Lorenzo; Kroc, Edward; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
Simulations concerning the distributional assumptions of coefficient alpha are contradictory. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distribution of the items plays a role in the estimation of correlations and covariances. More specifically, these bounds…
Descriptors: Test Items, Test Reliability, Computation, Correlation
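For readers who want to see the constraint the abstract describes, here is a minimal sketch of the Fréchet-Hoeffding bounds for two binary (0/1) items: the attainable Pearson correlation is limited by the items' endorsement rates p and q. The endorsement rates and the function name are illustrative assumptions, not values from the article.

```python
import numpy as np

def bernoulli_correlation_bounds(p, q):
    """Smallest and largest Pearson correlation attainable for Bernoulli(p), Bernoulli(q)."""
    denom = np.sqrt(p * (1 - p) * q * (1 - q))
    cov_max = min(p, q) - p * q                 # comonotonic coupling (upper bound)
    cov_min = max(p + q - 1.0, 0.0) - p * q     # countermonotonic coupling (lower bound)
    return cov_min / denom, cov_max / denom

print(bernoulli_correlation_bounds(0.5, 0.5))   # (-1.0, 1.0): full range attainable
print(bernoulli_correlation_bounds(0.9, 0.2))   # maximum correlation well below 1
```

When the two items have very different difficulties, the attainable correlation range shrinks, which is the sense in which the item distributions constrain estimated correlations and covariances.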
Peer reviewed
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020
This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…
Descriptors: Test Construction, Automation, Heuristics, Mathematics
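As a toy illustration of what a metaheuristic item-selection search does (not the specific algorithms compared in the article), the sketch below hill-climbs over k-item subsets, swapping items in and out to maximize Cronbach's alpha of the short form. The simulated data, subset size, and step count are made up.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha for an n-by-k matrix of item scores."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def hill_climb_short_form(data, k, steps=500, seed=0):
    """Stochastic search over k-item subsets, keeping swaps that raise alpha."""
    rng = np.random.default_rng(seed)
    n_items = data.shape[1]
    current = list(rng.choice(n_items, size=k, replace=False))
    best_alpha = cronbach_alpha(data[:, current])
    for _ in range(steps):
        out_item = rng.choice(current)
        in_item = rng.choice([j for j in range(n_items) if j not in current])
        candidate = [j for j in current if j != out_item] + [in_item]
        alpha = cronbach_alpha(data[:, candidate])
        if alpha > best_alpha:                  # accept only improvements
            current, best_alpha = candidate, alpha
    return sorted(current), best_alpha

rng = np.random.default_rng(1)
full_scale = rng.normal(size=(300, 1)) + rng.normal(size=(300, 20))  # common factor + noise
print(hill_climb_short_form(full_scale, k=8))
```

Real applications add further objectives (model fit, validity correlations, content constraints) to the search criterion rather than alpha alone.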
Peer reviewed
Walker, Cindy M.; Göçer Sahin, Sakine – Educational and Psychological Measurement, 2020
The purpose of this study was to investigate a new way of evaluating interrater reliability that can allow one to determine if two raters differ with respect to their rating on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and compared…
Descriptors: Test Bias, Interrater Reliability, Responses, Correlation
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
This note discusses the merits of coefficient alpha and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research has demonstrated the empirical relevance and utility of coefficient alpha under certain empirical circumstances. The article highlights…
Descriptors: Test Validity, Test Reliability, Test Items, Correlation
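For reference, a standard statement of the coefficient under discussion and the classical condition attached to it (textbook material, not a claim drawn from the note itself):

```latex
% Coefficient alpha for a k-item total score X = X_1 + ... + X_k,
% with item variances \sigma^2_{X_i} and total-score variance \sigma^2_X:
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{X_i}}{\sigma^2_{X}}\right)
% Assuming uncorrelated measurement errors, \alpha \le \rho_{XX'}
% (the reliability of X), with equality under essential tau-equivalence.
```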
Peer reviewed
Nicewander, W. Alan – Educational and Psychological Measurement, 2019
This inquiry is focused on three indicators of the precision of measurement, conditional on fixed values of θ, the latent variable of item response theory (IRT). The indicators that are compared are (1) the traditional conditional standard errors, s(e_X | θ) = CSEM; (2) the IRT-based conditional standard errors, s_irt(e_X | θ) = C_…
Descriptors: Measurement, Accuracy, Scores, Error of Measurement
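A small sketch of two of the conditional precision indices being compared, written for a 2PL model with assumed (illustrative) item parameters: the conditional standard error of the number-correct score at a fixed θ, and the standard error of the θ estimate from the test information function. The item parameters and θ values are not from the article.

```python
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # hypothetical discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # hypothetical difficulties

def p_correct(theta):
    """2PL probability of a correct response to each item at a given theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def csem_sum_score(theta):
    """SD of the number-correct score at fixed theta: sqrt(sum of P(1 - P))."""
    p = p_correct(theta)
    return np.sqrt(np.sum(p * (1 - p)))

def se_theta(theta):
    """1 / sqrt(test information) for the 2PL model."""
    p = p_correct(theta)
    info = np.sum(a**2 * p * (1 - p))
    return 1.0 / np.sqrt(info)

for theta in (-2.0, 0.0, 2.0):
    print(theta, round(csem_sum_score(theta), 3), round(se_theta(theta), 3))
```

The two indices behave differently across the θ range, which is precisely the kind of comparison the abstract describes.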
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-Sadaawi, Abdullah – Educational and Psychological Measurement, 2019
The purpose of the present study was to apply the methodology developed by Raykov for modeling item-specific variance to the measurement of internal consistency reliability with longitudinal data. Participants were a randomly selected sample of 500 individuals who took a professional qualifications test in Saudi Arabia over four different…
Descriptors: Test Reliability, Test Items, Longitudinal Studies, Foreign Countries
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha, proposed in Zumbo et al., as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Harrison, Michael; Menold, Natalja – Educational and Psychological Measurement, 2019
This note confronts the common use of a single coefficient alpha as an index informing about reliability of a multicomponent measurement instrument in a heterogeneous population. Two or more alpha coefficients could instead be meaningfully associated with a given instrument in finite mixture settings, and this may be increasingly likely the…
Descriptors: Statistical Analysis, Test Reliability, Measures (Individuals), Computation
Peer reviewed
Andersson, Björn; Xin, Tao – Educational and Psychological Measurement, 2018
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Descriptors: Item Response Theory, Test Reliability, Test Items, Scores
Peer reviewed
Chalmers, R. Philip – Educational and Psychological Measurement, 2018
This article discusses the theoretical and practical contributions of Zumbo, Gadermann, and Zeisser's family of ordinal reliability statistics. Implications, interpretation, recommendations, and practical applications regarding their ordinal measures, particularly ordinal alpha, are discussed. General misconceptions relating to this family of…
Descriptors: Misconceptions, Test Theory, Test Reliability, Statistics
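The computation behind ordinal alpha, as described by Zumbo, Gadermann, and Zeisser, is the alpha formula applied to a polychoric correlation matrix. The sketch below assumes that matrix has already been estimated; the entries shown are illustrative, and a real analysis would estimate them from the ordinal responses with a polychoric-correlation routine.

```python
import numpy as np

def alpha_from_matrix(cov):
    """Coefficient alpha computed from a covariance or correlation matrix."""
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

# Assumed polychoric correlation matrix for four ordinal items (illustrative values):
polychoric = np.array([
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.55, 0.45],
    [0.50, 0.55, 1.00, 0.50],
    [0.40, 0.45, 0.50, 1.00],
])
print(round(alpha_from_matrix(polychoric), 3))   # ordinal alpha for these items: 0.8
```

Applying the same function to the ordinary Pearson covariance matrix of the raw item scores yields conventional alpha, which makes the contrast at issue in this exchange easy to inspect.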
Peer reviewed
Zijlmans, Eva A. O.; Tijmstra, Jesper; van der Ark, L. Andries; Sijtsma, Klaas – Educational and Psychological Measurement, 2018
Reliability is usually estimated for a total score, but it can also be estimated for item scores. Item-score reliability can be useful to assess the repeatability of an individual item score in a group. Three methods to estimate item-score reliability are discussed, known as method MS, method λ6, and method CA. The item-score…
Descriptors: Test Items, Test Reliability, Correlation, Comparative Analysis
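For orientation, the classical scale-level Guttman lambda-6 that the "method λ6" label alludes to is sketched below; the item-score adaptation examined in the article is not reproduced here, and the simulated data are illustrative.

```python
import numpy as np

def guttman_lambda6(data):
    """Guttman's lambda-6 for a total score from an n-by-k matrix of item scores."""
    cov = np.cov(data, rowvar=False)
    total_var = cov.sum()
    # Residual variance of each item regressed on all the other items:
    residual_vars = 1.0 / np.diag(np.linalg.inv(cov))
    return 1.0 - residual_vars.sum() / total_var

rng = np.random.default_rng(2)
common = rng.normal(size=(250, 1))                 # shared factor
items = common + rng.normal(size=(250, 6))         # six noisy indicators
print(round(guttman_lambda6(items), 3))
```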
Peer reviewed
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2018
The maximal reliability of a congeneric measure is achieved by weighting item scores to form the optimal linear combination as the total score; it is never lower than the composite reliability of the measure when measurement errors are uncorrelated. The statistical method that renders maximal reliability would also lead to maximal criterion…
Descriptors: Test Reliability, Test Validity, Comparative Analysis, Attitude Measures
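The two quantities contrasted in the abstract can be written down directly for a congeneric model with standardized factor variance and uncorrelated errors: the composite reliability of the unit-weighted total versus the maximal reliability of the optimally weighted composite. The loadings and error variances below are assumed for illustration.

```python
import numpy as np

loadings = np.array([0.9, 0.7, 0.6, 0.5])       # hypothetical factor loadings
error_vars = np.array([0.4, 0.5, 0.6, 0.7])     # hypothetical error variances

def composite_reliability(lam, psi):
    """Reliability of the unit-weighted sum score under the congeneric model."""
    return lam.sum() ** 2 / (lam.sum() ** 2 + psi.sum())

def maximal_reliability(lam, psi):
    """Reliability of the optimally weighted composite (weights proportional to lam/psi)."""
    s = np.sum(lam ** 2 / psi)
    return s / (1 + s)

print(round(composite_reliability(loadings, error_vars), 3))
print(round(maximal_reliability(loadings, error_vars), 3))   # never below the composite value
```

Running the sketch shows the ordering the abstract states: the optimally weighted composite is at least as reliable as the unit-weighted total when errors are uncorrelated.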
Peer reviewed
Raykov, Tenko; Goldammer, Philippe; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018
A readily applicable procedure is discussed that allows evaluation of the discrepancy between the popular coefficient alpha and the reliability coefficient of a scale with second-order factorial structure that is frequently of relevance in empirical educational and psychological research. The approach is developed within the framework of the…
Descriptors: Test Reliability, Factor Structure, Statistical Analysis, Computation
Peer reviewed
Menold, Natalja; Raykov, Tenko – Educational and Psychological Measurement, 2016
This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the…
Descriptors: Test Reliability, Test Format, Test Items, Differences