Showing all 8 results
Peer reviewed
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed for evaluating the degree to which two nested confirmatory factor analysis models differ from each other in fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Peer reviewed
McNeish, Daniel – Educational and Psychological Measurement, 2017
In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
Descriptors: Models, Bayesian Statistics, Statistical Analysis, Computer Software
Peer reviewed
Wang, Wen-Chung; Shih, Ching-Lin; Sun, Guo-Wei – Educational and Psychological Measurement, 2012
The DIF-free-then-DIF (DFTD) strategy consists of two steps: (a) select a set of items that are the most likely to be DIF-free and (b) assess the other items for DIF (differential item functioning) using the designated items as anchors. The rank-based method together with the computer software IRTLRDIF can select a set of DIF-free polytomous items…
Descriptors: Test Bias, Test Items, Item Response Theory, Evaluation Methods
Peer reviewed
Kim, Seock-Ho – Educational and Psychological Measurement, 2007
The procedures required to obtain the approximate posterior standard deviations of the parameters in the three commonly used item response models for dichotomous items are described and used to generate values for some common situations. The results were compared with those obtained from maximum likelihood estimation. It is shown that the use of…
Descriptors: Item Response Theory, Computation, Comparative Analysis, Evaluation Methods
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2005
Type I error rates for PARSCALE's fit statistic were examined. Data were generated to fit the partial credit or graded response model, with test lengths of 10 or 20 items. The ability distribution was simulated to be either normal or uniform. Type I error rates were inflated for the shorter test length and, for the graded-response model, also for…
Descriptors: Test Length, Item Response Theory, Psychometrics, Error of Measurement
Peer reviewed
Wolfle, Lee M.; Ethington, Corinna A. – Educational and Psychological Measurement, 1986
Using data from High School and Beyond, this study empirically investigated the extent of within-variable, between-occasion error covariances among variables included in educational achievement models. Little evidence was found to support the statement that reliability estimates for social background variables are inflated because of correlated…
Descriptors: Academic Achievement, Computer Software, Correlation, Equations (Mathematics)
Peer reviewed
Rothstein, Hannah R.; And Others – Educational and Psychological Measurement, 1990
A microcomputer program that computes statistical power for analyses performed by multiple regression/correlation is described. The program features a spreadsheet-like interface, outputting the effect size and value of power corresponding to the input parameters, including predictor variables, sample size, alpha, and error type. (TJH)
Descriptors: Computer Software, Correlation, Effect Size, Error of Measurement