Showing 31 to 45 of 3,659 results
Peer reviewed
Wind, Stefanie A.; Patil, Yogendra J. – Educational and Psychological Measurement, 2018
Recent research has explored the use of models adapted from Mokken scale analysis as a nonparametric approach to evaluating rating quality in educational performance assessments. A potential limiting factor to the widespread use of these techniques is the requirement for complete data, as practical constraints in operational assessment systems…
Descriptors: Scaling, Data, Interrater Reliability, Writing Tests
Peer reviewed
Huang, Francis L. – Educational and Psychological Measurement, 2018
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators and often, cluster randomized trials…
Descriptors: Multivariate Analysis, Sampling, Statistical Inference, Data Analysis
Peer reviewed
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C. – Educational and Psychological Measurement, 2018
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Descriptors: Error of Measurement, Testing, Scores, Models
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Dimitrov, Dimiter M.; Li, Tatyana – Educational and Psychological Measurement, 2018
This article extends the procedure outlined in the article by Raykov, Marcoulides, and Tong for testing congruence of latent constructs to the setting of binary items and clustering effects. In this widely used setting in contemporary educational and psychological research, the method can be used to examine if two or more homogeneous…
Descriptors: Tests, Psychometrics, Test Items, Construct Validity
Peer reviewed
Komboz, Basil; Strobl, Carolin; Zeileis, Achim – Educational and Psychological Measurement, 2018
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Descriptors: Item Response Theory, Models, Tests, Measurement
Peer reviewed
Matlock Cole, Ki Lynn; Turner, Ronna C.; Gitchel, W. Dent – Educational and Psychological Measurement, 2018
The generalized partial credit model (GPCM) is often used for polytomous data; however, the nominal response model (NRM) allows for the investigation of how adjacent categories may discriminate differently when items are positively or negatively worded. Ten items from three different self-reported scales were used (anxiety, depression, and…
Descriptors: Item Response Theory, Anxiety, Depression (Psychology), Self Evaluation (Individuals)
Peer reviewed
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor attributable to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
Peer reviewed
Nicewander, W. Alan – Educational and Psychological Measurement, 2018
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
Descriptors: Error of Measurement, Correlation, Sample Size, Computation
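The classical formula this abstract refers to divides the observed correlation by the square root of the product of the two reliabilities. A minimal sketch of that textbook formula (illustrative only, not code from the article; the example values are hypothetical):

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores by dividing the observed correlation r_xy by
    the square root of the product of the reliabilities r_xx and r_yy."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical example: observed r = .42, reliabilities .80 and .70
r_corrected = disattenuate(0.42, 0.80, 0.70)  # ~0.561
```

Note that with perfectly reliable measures (both reliabilities equal to 1), the correction leaves the observed correlation unchanged.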
Peer reviewed
Andersson, Björn; Xin, Tao – Educational and Psychological Measurement, 2018
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature, and therefore the variability associated with the estimated reliability…
Descriptors: Item Response Theory, Test Reliability, Test Items, Scores
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Peer reviewed
Olivera-Aguilar, Margarita; Rikoon, Samuel H.; Gonzalez, Oscar; Kisbu-Sakarya, Yasemin; MacKinnon, David P. – Educational and Psychological Measurement, 2018
When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to…
Descriptors: Error of Measurement, Statistical Analysis, Factor Analysis, Simulation
Peer reviewed
Liu, Ren; Huggins-Manley, Anne Corinne; Bulut, Okan – Educational and Psychological Measurement, 2018
Developing a diagnostic tool within the diagnostic measurement framework is the optimal approach to obtain multidimensional and classification-based feedback on examinees. However, end users may seek to obtain diagnostic feedback from existing item responses to assessments that have been designed under either the classical test theory or item…
Descriptors: Models, Item Response Theory, Psychometrics, Test Construction
Peer reviewed
Lamprianou, Iasonas – Educational and Psychological Measurement, 2018
It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation
Peer reviewed
Trafimow, David – Educational and Psychological Measurement, 2018
Because error variance alternatively can be considered to be the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed that total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the…
Descriptors: Statistical Analysis, Correlation, Experiments, Effect Size
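The tripartite assumption described in this abstract amounts to a three-way variance decomposition, which can be sketched as follows (the symbols are my own shorthand, not the article's notation):

```latex
\sigma^2_{\text{total}}
  = \sigma^2_{\text{explained}}
  + \sigma^2_{\text{systematic, unknown}}
  + \sigma^2_{\text{random}}
```

That is, total variance in the dependent variable is split into variance explained by the known variables, systematic variance associated with unknown variables, and purely random variance, rather than lumping the last two together as a single error term.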