Showing all 7 results
Peer reviewed
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in using assessments and questionnaires to study constructs that are not directly observable. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed briefly rather than read and engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
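The abstract stops short of the authors' specific method, but a common way to operationalize rapid-guessing detection is a response-time threshold. The minimal Python sketch below flags responses faster than a fixed cutoff; the function name and the 3-second threshold are assumptions for illustration, not taken from the article.

```python
import numpy as np

def flag_rapid_guesses(response_times, threshold=3.0):
    """Flag responses faster than a fixed time threshold (seconds)
    as likely rapid guesses. The 3-second cutoff is illustrative;
    in practice thresholds are often set per item, e.g., from the
    item's response-time distribution."""
    rt = np.asarray(response_times, dtype=float)
    return rt < threshold

# Response times (seconds) for one examinee across five items
rts = [1.2, 14.8, 0.9, 22.3, 17.5]
print(flag_rapid_guesses(rts))  # [ True False  True False False]
```

Flagged responses can then be excluded or down-weighted before scoring, which is the kind of bias correction the abstract alludes to.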
Peer reviewed
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with varying numbers of items and difficulty levels via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
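The study describes simulating test results via an item response theory approach. As a rough illustration of that kind of data generation (not the authors' actual simulation design), here is a sketch that draws dichotomous responses under a two-parameter logistic (2PL) model and collects the raw scores one would subsequently norm; all parameter ranges are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(theta, a, b, rng):
    """Draw dichotomous responses under a 2PL IRT model:
    P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    theta = np.asarray(theta, dtype=float)[:, None]  # persons as rows
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # persons x items
    return (rng.random(p.shape) < p).astype(int)

theta = rng.normal(0.0, 1.0, size=1000)  # latent trait values
a = rng.uniform(0.8, 2.0, size=30)       # item discriminations (assumed range)
b = rng.normal(0.0, 1.0, size=30)        # item difficulties (assumed range)

responses = simulate_2pl(theta, a, b, rng)
raw_scores = responses.sum(axis=1)       # raw scores a norming model would map to norm scores
print(raw_scores[:10])
```

Varying `size=30` and the difficulty distribution reproduces the "different item numbers and difficulties" manipulation the abstract mentions.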
Peer reviewed
Sliter, Katherine A.; Zickar, Michael J. – Educational and Psychological Measurement, 2014
This study compared the functioning of positively and negatively worded personality items using item response theory. In Study 1, word pairs from the Goldberg Adjective Checklist were analyzed using the Graded Response Model. Across subscales, negatively worded items produced comparatively higher difficulty and lower discrimination parameters than…
Descriptors: Item Response Theory, Psychometrics, Personality Measures, Test Items
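For readers unfamiliar with the Graded Response Model named above, the sketch below computes its category probabilities from a discrimination parameter and ordered difficulty thresholds; the example values are invented, not estimates from the Goldberg Adjective Checklist analysis.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under Samejima's Graded Response Model.
    P(X >= k) = 1 / (1 + exp(-a * (theta - b_k))) for ordered
    thresholds b_1 < ... < b_{K-1}; category probabilities are
    differences of adjacent cumulative probabilities."""
    b = np.asarray(thresholds, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(X >= 1), ..., P(X >= K-1)
    cum = np.concatenate(([1.0], cum, [0.0]))     # pad with P(X >= 0) = 1, P(X >= K) = 0
    return cum[:-1] - cum[1:]                     # P(X = 0), ..., P(X = K-1)

# A 5-point scale item: higher thresholds (greater "difficulty") shift
# probability mass toward lower categories at the same theta.
print(grm_category_probs(theta=0.0, a=1.5, thresholds=[-2, -0.5, 0.5, 2]))
```

In this parameterization, the "higher difficulty and lower discrimination" the abstract reports for negatively worded items correspond to larger threshold values and a smaller `a`.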
Peer reviewed
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis indicates that the assignment of items to subtests under study is not supported by the data, the assignment is often adjusted. In this study, the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis
Peer reviewed
Beckert, Troy E.; Strom, Robert D.; Strom, Paris S.; Yang, Cheng-Ta; Singh, Archana – Educational and Psychological Measurement, 2007
This study examined whether the original factor structure of the Parent Success Indicator (PSI) could be replicated with scores from generational views on both the English- and Mandarin-language versions of the instrument. The 60-item PSI was evaluated using responses from 840 Taiwanese parents (n = 429) and their 10- to 14-year-old adolescents (n…
Descriptors: Goodness of Fit, Adolescents, Factor Structure, Success
Peer reviewed
Douglass, Frazier M., IV; And Others – Educational and Psychological Measurement, 1979
Classical item analysis and Rasch latent trait analysis were applied to the responses of a sample of undergraduates to two measures concerning alcoholism. Little practical difference was found between the methods. (JKS)
Descriptors: Alcoholism, Comparative Analysis, Drinking, Higher Education
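As a reminder of what the classical side of this comparison computes, here is a small sketch of two standard classical item statistics: item difficulty (proportion correct) and corrected item-total discrimination. The demo data are random and purely illustrative; the Rasch side of the comparison would require an IRT estimation routine not shown here.

```python
import numpy as np

def classical_item_stats(responses):
    """Classical item analysis for a 0/1-scored response matrix
    (persons x items): item difficulty (proportion correct) and
    corrected item-total point-biserial discrimination."""
    X = np.asarray(responses, dtype=float)
    difficulty = X.mean(axis=0)
    total = X.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(X[:, j], total - X[:, j])[0, 1]  # item-rest correlation
        for j in range(X.shape[1])
    ])
    return difficulty, discrimination

rng = np.random.default_rng(0)
demo = (rng.random((200, 10)) < 0.6).astype(int)  # illustrative data only
p, r = classical_item_stats(demo)
print(p.round(2), r.round(2))
```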
Peer reviewed
Tsai, Fu-Ju; Suen, Hoi K. – Educational and Psychological Measurement, 1993
Six methods of scoring multiple true-false items were compared in terms of reliabilities, difficulties, and discrimination. Results suggest that, for norm-referenced score interpretations, there is insufficient evidence to support any one of the methods as superior. For criterion-referenced score interpretations, effects of scoring method must be…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Difficulty Level, Guessing (Tests)
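The abstract names six scoring methods without listing them. The sketch below illustrates just two rules commonly discussed for multiple true-false items, per-statement partial credit and all-or-none cluster scoring, as assumed examples rather than the article's actual six methods.

```python
import numpy as np

def score_mtf(responses, key, method="partial"):
    """Two common scoring rules for multiple true-false (MTF) items,
    where each item is a cluster of true/false statements:
    - "partial": one point per correctly judged statement
    - "cluster": one point only if every statement in the item is correct
    (Illustrative only; the study compared six such methods.)"""
    correct = np.asarray(responses) == np.asarray(key)  # items x statements
    if method == "partial":
        return correct.sum(axis=1)
    return correct.all(axis=1).astype(int)

key = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])   # 2 items, 4 statements each
resp = np.array([[1, 0, 1, 1], [0, 1, 1, 0]])  # one examinee's judgments

print(score_mtf(resp, key, "partial"))  # [3 4]
print(score_mtf(resp, key, "cluster"))  # [0 1]
```

The choice between such rules changes score distributions and hence reliability and difficulty estimates, which is why the abstract ties the preferred method to the intended score interpretation.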