Showing all 7 results
Peer reviewed
Liu, Ren; Liu, Haiyan; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2022
Assessments with a large number of small, similar, or often repetitive tasks are used in educational, neurocognitive, and psychological contexts. For example, respondents are asked to recognize numbers or letters from a large pool, and the number of correct answers is a count variable. In 1960, George Rasch developed the Rasch…
Descriptors: Classification, Models, Statistical Distributions, Scores
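The count-data modeling the abstract above alludes to is often handled with the Rasch Poisson counts model, in which the expected count depends multiplicatively on person ability and item easiness. A minimal sketch, assuming an illustrative parameterization (the function name and parameter values are hypothetical, not from the article):

```python
import math

def rasch_poisson_pmf(x, theta, sigma):
    """Probability of observing x correct responses under a Rasch
    Poisson counts model with rate lam = exp(theta) * sigma, where
    theta is person ability and sigma is item easiness (assumed
    parameterization for illustration)."""
    lam = math.exp(theta) * sigma
    return math.exp(-lam) * lam ** x / math.factorial(x)

# The probabilities over the support sum to (approximately) 1
total = sum(rasch_poisson_pmf(x, theta=0.5, sigma=2.0) for x in range(100))
print(round(total, 6))  # ~1.0
```

Because the rate factors into a person term and an item term, comparisons between persons do not depend on which item is used, which is the defining Rasch property.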
Peer reviewed
Rios, Joseph A. – Educational and Psychological Measurement, 2021
Low test-taking effort as a validity threat is common when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence…
Descriptors: Response Style (Tests), Statistical Analysis, Measurement, Comparative Analysis
Peer reviewed
Thompson, Yutian T.; Song, Hairong; Shi, Dexin; Liu, Zhengkui – Educational and Psychological Measurement, 2021
Conventional approaches for selecting a reference indicator (RI) can lead to misleading results when testing for measurement invariance (MI). Several newer quantitative methods have become available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item to…
Descriptors: Measurement, Statistical Analysis, Selection, Comparative Analysis
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale, low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2018
The maximal reliability of a congeneric measure is achieved by weighting item scores to form the optimal linear combination as the total score; it is never lower than the composite reliability of the measure when measurement errors are uncorrelated. The statistical method that renders maximal reliability would also lead to maximal criterion…
Descriptors: Test Reliability, Test Validity, Comparative Analysis, Attitude Measures
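The contrast the abstract above draws can be made concrete with the usual formulas: composite reliability (omega) applies to the unit-weighted sum score, while maximal reliability (often called coefficient H) applies to the optimally weighted sum. A minimal sketch under the assumption of a congeneric measure with uncorrelated errors (the loading values are hypothetical):

```python
def composite_reliability(loadings, errors):
    # omega: reliability of the unit-weighted total score
    s = sum(loadings)
    return s * s / (s * s + sum(errors))

def maximal_reliability(loadings, errors):
    # coefficient H: reliability of the optimally weighted total score
    r = sum(l * l / e for l, e in zip(loadings, errors))
    return r / (1.0 + r)

loadings = [0.8, 0.7, 0.6]              # hypothetical standardized loadings
errors = [1 - l * l for l in loadings]  # uncorrelated error variances

omega = composite_reliability(loadings, errors)
h = maximal_reliability(loadings, errors)
print(round(omega, 3), round(h, 3))  # 0.745 0.767
```

As the abstract states, maximal reliability is never lower than composite reliability when errors are uncorrelated; here H exceeds omega because the optimal weights favor the more reliable items.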
Peer reviewed
Reckase, Mark D.; Xu, Jing-Ru – Educational and Psychological Measurement, 2015
How to compute and report subscores for a test that was originally designed for reporting scores on a unidimensional scale has been a topic of interest in recent years. In the research reported here, we describe an application of multidimensional item response theory to identify a subscore structure in a test designed for reporting results using a…
Descriptors: English, Language Skills, English Language Learners, Scores
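The multidimensional item response theory the abstract above applies rests on item response functions that combine several latent dimensions. A minimal sketch of a compensatory multidimensional 2PL item, assuming an illustrative parameterization (the parameter values are hypothetical, not from the article):

```python
import math

def mirt_prob(theta, a, d):
    """Probability of a correct response under a compensatory
    multidimensional 2PL model: P = 1 / (1 + exp(-(a . theta + d))),
    where theta is the latent ability vector, a the discrimination
    (loading) vector, and d the item intercept."""
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

# An item loading mostly on dimension 1 (hypothetical values):
p_high = mirt_prob(theta=[1.0, 0.0], a=[1.5, 0.2], d=0.0)
p_low = mirt_prob(theta=[-1.0, 0.0], a=[1.5, 0.2], d=0.0)
print(round(p_high, 3), round(p_low, 3))  # 0.818 0.182
```

In a subscore analysis like the one described, the pattern of which items load on which dimensions is what defines the subscore structure.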
Peer reviewed
Dumenci, Levent; Yates, Phillip D. – Educational and Psychological Measurement, 2012
Estimation problems associated with the correlated-trait correlated-method (CTCM) parameterization of a multitrait-multimethod (MTMM) matrix are widely documented: the model often fails to converge; even when convergence is achieved, one or more of the parameter estimates are outside the admissible parameter space. In this study, the authors…
Descriptors: Correlation, Models, Multitrait Multimethod Techniques, Matrices