Showing 106 to 120 of 3,768 results
Peer reviewed
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-Sadaawi, Abdullah – Educational and Psychological Measurement, 2019
The purpose of the present study was to apply the methodology developed by Raykov on modeling item-specific variance for the measurement of internal consistency reliability with longitudinal data. Participants were a randomly selected sample of 500 individuals who took a professional qualifications test in Saudi Arabia over four different…
Descriptors: Test Reliability, Test Items, Longitudinal Studies, Foreign Countries
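Raykov's item-specific-variance procedure described in the abstract is not reproduced here; for orientation only, below is a minimal sketch of the classical internal consistency coefficient (Cronbach's alpha) that such longitudinal reliability methods extend. The function name and data layout are illustrative assumptions, not taken from the article.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Variance of each item across persons, and of the total score.
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

With perfectly parallel (identical) items the coefficient reaches its maximum of 1.0, which is a quick sanity check on the implementation.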
Peer reviewed
Jordan, Pascal; Spiess, Martin – Educational and Psychological Measurement, 2019
Factor loadings and item discrimination parameters play a key role in scale construction. A multitude of heuristics regarding their interpretation are hardwired into practice--for example, neglecting low loadings and assigning items to exactly one scale. We challenge the common sense interpretation of these parameters by providing counterexamples…
Descriptors: Test Construction, Test Items, Item Response Theory, Factor Structure
Peer reviewed
Zumbo, Bruno D.; Kroc, Edward – Educational and Psychological Measurement, 2019
Chalmers recently published a critique of the use of ordinal alpha, proposed in Zumbo et al., as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with…
Descriptors: Test Reliability, Statistical Analysis, Misconceptions, Mathematical Models
Peer reviewed
Dimitrov, Dimiter M.; Luo, Yong – Educational and Psychological Measurement, 2019
An approach to scoring tests with binary items, referred to as D-scoring method, was previously developed as a classical analog to basic models in item response theory (IRT) for binary items. As some tests include polytomous items, this study offers an approach to D-scoring of such items and parallels the results with those obtained under the…
Descriptors: Scoring, Test Items, Item Response Theory, Psychometrics
Peer reviewed
Cao, Chunhua; Kim, Eun Sook; Chen, Yi-Hsin; Ferron, John; Stark, Stephen – Educational and Psychological Measurement, 2019
In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of…
Descriptors: Hierarchical Linear Modeling, Structural Equation Models, Computation, Identification
Peer reviewed
Ferrando, Pere Joan; Lorenzo-Seva, Urbano – Educational and Psychological Measurement, 2019
Many psychometric measures yield data that are compatible with (a) an essentially unidimensional factor analysis solution and (b) a correlated-factor solution. Deciding which of these structures is the most appropriate and useful is of considerable importance, and various procedures have been proposed to help in this decision. The only fully…
Descriptors: Validity, Models, Correlation, Factor Analysis
Peer reviewed
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that accounts for method effects stemming from two sources, one of which functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Peer reviewed
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2019
Longitudinal studies have steadily grown in popularity across the educational and behavioral sciences, particularly with the increased availability of technological devices that allow the easy collection of repeated measures on multiple dimensions of substantive relevance. This article discusses a procedure that can be used to evaluate population…
Descriptors: Longitudinal Studies, Older Adults, Cognitive Processes, Dementia
Peer reviewed
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
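De Raadt et al.'s missing-data variants are not reproduced here; as background, below is a minimal sketch of the standard complete-data Cohen's kappa that those variants estimate. The function name and example ratings are illustrative assumptions.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' nominal ratings of the same units."""
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed agreement: proportion of units both raters classify identically.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement under independence, from each rater's marginals.
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over simple agreement rates for nominal scales.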
Peer reviewed
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
Peer reviewed
Son, Sookyoung; Lee, Hyunjung; Jang, Yoona; Yang, Junyeong; Hong, Sehee – Educational and Psychological Measurement, 2019
The purpose of the present study is to compare nonnormal distributions (i.e., t, skew-normal, skew-t with equal skew, and skew-t with unequal skew) in growth mixture models (GMMs) under varying numbers of time points, sample sizes, and degrees of intercept skewness. To carry out this research, two simulation studies were conducted with…
Descriptors: Statistical Distributions, Statistical Analysis, Structural Equation Models, Comparative Analysis
Peer reviewed
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…
Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
Peer reviewed
da Silva, Marcelo A.; Liu, Ren; Huggins-Manley, Anne C.; Bazán, Jorge L. – Educational and Psychological Measurement, 2019
Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits…
Descriptors: Item Response Theory, Matrices, Models, Bayesian Statistics
Peer reviewed
Sorjonen, Kimmo; Melin, Bo; Ingre, Michael – Educational and Psychological Measurement, 2019
The present simulation study indicates that a method where the regression effect of a predictor (X) on an outcome at follow-up (Y1) is calculated while adjusting for the outcome at baseline (Y0) can give spurious findings, especially when there is a strong correlation between X and Y0 and when the test-retest correlation between Y0 and Y1 is…
Descriptors: Predictor Variables, Regression (Statistics), Correlation, Error of Measurement
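The spurious baseline-adjustment effect described in the abstract is easy to reproduce in a small simulation. The sketch below uses assumed parameter values (not taken from the article): a stable trait T drives both baseline and follow-up, the predictor X correlates with T but has no causal effect on follow-up, yet regressing Y1 on X while adjusting for the error-laden baseline Y0 still yields a clearly positive X coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Stable trait T drives both measurement occasions; X is correlated with T
# but has no causal effect on the follow-up outcome.
T = rng.normal(size=n)
X = 0.8 * T + 0.6 * rng.normal(size=n)
Y0 = T + rng.normal(size=n)  # baseline outcome, measured with error
Y1 = T + rng.normal(size=n)  # follow-up outcome, no true X effect

# OLS of Y1 on X, adjusting for baseline Y0 (intercept included).
Z = np.column_stack([np.ones(n), X, Y0])
beta = np.linalg.lstsq(Z, Y1, rcond=None)[0]
print(beta[1])  # spurious positive X coefficient despite no causal effect
```

The spurious effect arises because Y0 measures T with error, so adjusting for Y0 only partially removes the trait, leaving X to proxy for the remainder.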