Showing 166 to 180 of 3,768 results
Peer reviewed
Kim, Eun Sook; Wang, Yan; Kiefer, Sarah M. – Educational and Psychological Measurement, 2018
Studies comparing groups at different levels of multilevel data (namely, cross-level groups) using the same measure are not unusual; examples include student and teacher agreement in education and congruence between patient and physician perceptions in health research. Although establishing measurement invariance (MI) between these groups is…
Descriptors: Measurement, Grouping (Instructional Purposes), Comparative Analysis, Factor Analysis
Peer reviewed
Liu, Ren; Qian, Hong; Luo, Xiao; Woo, Ada – Educational and Psychological Measurement, 2018
Subscore reporting under item response theory models has always been a challenge, partly because the limited test length of each subdomain makes it difficult to locate individuals precisely on multiple continua. Diagnostic classification models (DCMs), providing a pass/fail decision and an associated probability of passing each subdomain, are promising…
Descriptors: Classification, Probability, Pass Fail Grading, Scores
Peer reviewed
Raykov, Tenko; Goldammer, Philippe; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018
A readily applicable procedure is discussed that allows evaluation of the discrepancy between the popular coefficient alpha and the reliability coefficient of a scale with second-order factorial structure that is frequently of relevance in empirical educational and psychological research. The approach is developed within the framework of the…
Descriptors: Test Reliability, Factor Structure, Statistical Analysis, Computation
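The Raykov et al. entry above contrasts coefficient alpha with the reliability coefficient of a scale with second-order factorial structure. As background, a minimal sketch of coefficient alpha itself (not the authors' procedure; the function name is hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an n_persons x n_items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Alpha equals the scale's reliability only under (essential) tau-equivalence; the discrepancy the article evaluates arises when the scale instead has a second-order factor structure.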
Peer reviewed
Johnson, Wendy; Deary, Ian J.; Bouchard, Thomas J., Jr. – Educational and Psychological Measurement, 2018
Most study samples show less variability in key variables than their source populations do, most often because of indirect selection into study participation associated with a wide range of personal and circumstantial characteristics. Formulas exist to correct the resulting distortions of population-level correlations. Formula accuracy has been tested…
Descriptors: Correlation, Sampling, Statistical Distributions, Accuracy
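The Johnson, Deary, and Bouchard entry concerns corrections for range restriction. As one example of the kind of formula the abstract alludes to, here is the classic correction for *direct* restriction on the selection variable (Thorndike's Case II); the article itself is about indirect selection, so this sketch is illustrative only, and the function name is hypothetical:

```python
import math

def correct_range_restriction(r, sd_restricted, sd_population):
    """Thorndike Case II correction for direct range restriction:
    estimates the population correlation from the restricted-sample
    correlation r, given the restricted and unrestricted SDs of the
    selection variable. With u = SD_pop / SD_restricted:
    r_pop = r*u / sqrt(1 + r^2 * (u^2 - 1))."""
    u = sd_population / sd_restricted
    return r * u / math.sqrt(1 + r * r * (u * u - 1))
```

When the sample SD equals the population SD (u = 1), the correlation is returned unchanged; as restriction grows (u > 1), the corrected correlation exceeds the observed one.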
Peer reviewed
Cain, Meghan K.; Zhang, Zhiyong; Bergeman, C. S. – Educational and Psychological Measurement, 2018
This article serves as a practical guide to mediation design and analysis by evaluating the ability of mediation models to detect a significant mediation effect using limited data. The cross-sectional mediation model, which has been shown to be biased when the mediation is happening over time, is compared with longitudinal mediation models:…
Descriptors: Mediation Theory, Case Studies, Longitudinal Studies, Measurement Techniques
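The Cain, Zhang, and Bergeman entry compares the cross-sectional mediation model with longitudinal alternatives. A minimal sketch of the cross-sectional indirect effect (a*b) estimated by ordinary least squares, assuming the standard two-regression setup rather than the authors' specific models; the function name is hypothetical:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Cross-sectional mediation: a = slope of M on X; b = slope of Y on M,
    controlling for X. The indirect effect of X on Y through M is a * b."""
    X1 = np.column_stack([np.ones_like(x), x])        # intercept + X
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])     # intercept + X + M
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b
```

As the abstract notes, this cross-sectional estimate can be biased when mediation unfolds over time, which motivates the longitudinal models the article compares it against.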
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2018
This note extends the results in the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability is zero for any principal component--and in particular for the first principal…
Descriptors: Factor Analysis, Error of Measurement, Correlation, Reliability
Peer reviewed
Liu, Ren – Educational and Psychological Measurement, 2018
Attribute structure is an explicit way of presenting the relationship between attributes in diagnostic measurement. The specification of attribute structures directly affects the classification accuracy resulting from psychometric modeling. This study provides a conceptual framework for understanding misspecifications of attribute structures. Under…
Descriptors: Diagnostic Tests, Classification, Test Construction, Relationship
Peer reviewed
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally, PA compares the eigenvalues of a sample correlation matrix with the eigenvalues of correlation matrices for 100 comparison datasets generated so that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
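The Green, Xu, and Thompson entry describes the traditional parallel analysis procedure that the article argues uses the wrong reference distribution. A minimal sketch of that traditional procedure (Horn-style PA with a mean-eigenvalue criterion), not the authors' revised method; the function name is hypothetical:

```python
import numpy as np

def parallel_analysis(data, n_datasets=100, seed=0):
    """Traditional parallel analysis: retain leading factors whose sample
    correlation-matrix eigenvalues exceed the mean eigenvalues from
    comparison datasets with independent variables."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the sample correlation matrix, largest first.
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.zeros((n_datasets, p))
    for i in range(n_datasets):
        random = rng.standard_normal((n, p))  # independent variables
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(random, rowvar=False))[::-1]
    threshold = random_eigs.mean(axis=0)
    retained = 0
    for s, t in zip(sample_eigs, threshold):
        if s > t:
            retained += 1
        else:
            break
    return retained
```

Variants replace the mean threshold with a high percentile (e.g., the 95th) of the comparison eigenvalues; the article's point is that the independent-variables reference distribution itself is the wrong benchmark.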
Peer reviewed
Wind, Stefanie A.; Jones, Eli – Educational and Psychological Measurement, 2018
Previous research includes frequent admonitions regarding the importance of establishing connectivity in data collection designs prior to the application of Rasch models. However, details regarding the influence of characteristics of the linking sets used to establish connections among facets, such as locations on the latent variable, model-data…
Descriptors: Data Collection, Goodness of Fit, Computation, Networks
Peer reviewed
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
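The Paek et al. entry examines parameter recovery for the two-parameter IRT model in Mplus. For readers unfamiliar with the model being estimated, a minimal sketch of the two-parameter logistic (2PL) item response function (this is the standard textbook form, not anything specific to the article's estimation study):

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct response
    for ability theta, item discrimination a, and item difficulty b:
    P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

The probability is 0.5 when ability equals item difficulty, and the discrimination parameter a controls how steeply it rises around that point; the article's focus is on how well different Mplus estimation methods recover a and b and their standard errors.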
Peer reviewed
Hoofs, Huub; van de Schoot, Rens; Jansen, Nicole W. H.; Kant, IJmert – Educational and Psychological Measurement, 2018
Bayesian confirmatory factor analysis (CFA) offers an alternative to frequentist CFA based on, for example, maximum likelihood estimation for the assessment of reliability and validity of educational and psychological measures. For increasing sample sizes, however, the applicability of current fit statistics evaluating model fit within Bayesian…
Descriptors: Goodness of Fit, Bayesian Statistics, Factor Analysis, Sample Size
Peer reviewed
Falk, Carl F.; Monroe, Scott – Educational and Psychological Measurement, 2018
Lagrange multiplier (LM) or score tests have seen renewed interest for the purpose of diagnosing misspecification in item response theory (IRT) models. LM tests can also be used to test whether parameters differ from a fixed value. We argue that the utility of LM tests depends on both the method used to compute the test and the degree of…
Descriptors: Item Response Theory, Matrices, Models, Statistical Analysis
Peer reviewed
Ing, Marsha – Educational and Psychological Measurement, 2018
In instructional sensitivity research, it is important to evaluate the validity argument about the extent to which student performance on the assessment can be used to infer differences in instructional experiences. This study examines whether three different measures of mathematics instruction consistently identify mathematics assessments as…
Descriptors: Validity, Educational Research, Mathematics Instruction, Mathematics Tests
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics