Showing 1 to 15 of 20 results
Peer reviewed
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which risks misclassifying fast-but-engaged or slow-but-disengaged responses.…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
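A minimal sketch of the response-time approach this abstract critiques: flag any answer faster than a fixed threshold as a rapid guess. The threshold and data below are hypothetical; the article's point is precisely that such rules can misclassify fast-but-engaged or slow-but-disengaged responses.

```python
import numpy as np

def flag_rapid_guesses(response_times, threshold=3.0):
    """Flag responses faster than a fixed threshold (in seconds) as
    presumed rapid guesses -- the simple response-time rule whose
    misclassification risk the abstract above describes."""
    rt = np.asarray(response_times, dtype=float)
    return rt < threshold

# Hypothetical response times (seconds) for one test-taker:
print(flag_rapid_guesses([1.2, 8.5, 2.7, 15.0, 0.9]))
# -> [ True False  True False  True]
```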
Peer reviewed
Stoevenbelt, Andrea H.; Wicherts, Jelte M.; Flore, Paulette C.; Phillips, Lorraine A. T.; Pietschnig, Jakob; Verschuere, Bruno; Voracek, Martin; Schwabe, Inga – Educational and Psychological Measurement, 2023
When cognitive and educational tests are administered under time limits, tests may become speeded, and this may affect the reliability and validity of the resulting test scores. Prior research has shown that time limits may create or enlarge gender gaps in cognitive and academic testing. On average, women complete fewer items than men when a test…
Descriptors: Timed Tests, Gender Differences, Item Response Theory, Correlation
Peer reviewed
Spratto, Elisabeth M.; Leventhal, Brian C.; Bandalos, Deborah L. – Educational and Psychological Measurement, 2021
In this study, we examined the results and interpretations produced from two different IRTree models--one using paths consisting of only dichotomous decisions, and one using paths consisting of both dichotomous and polytomous decisions. We used data from two versions of an impulsivity measure. In the first version, all the response options had…
Descriptors: Comparative Analysis, Item Response Theory, Decision Making, Data Analysis
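For readers unfamiliar with IRTree coding, the sketch below shows one standard all-dichotomous decomposition of a 5-point response into three binary pseudo-items (midpoint, direction, extremity), in the style of Böckenholt's response-process trees. The mapping is illustrative only, not the exact trees compared in the study.

```python
def irtree_pseudo_items(x):
    """Recode a 5-point Likert response (1-5) into three binary
    pseudo-items: midpoint choice, agree/disagree direction, and
    extremity; None marks nodes the response never reaches."""
    midpoint = 1 if x == 3 else 0
    direction = None if x == 3 else int(x > 3)
    extremity = None if x == 3 else int(x in (1, 5))
    return midpoint, direction, extremity

for response in range(1, 6):
    print(response, irtree_pseudo_items(response))
```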
Peer reviewed
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…
Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
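One well-known bridge between the two frameworks is the test characteristic curve, which maps an IRT ability to the classical true score. A minimal sketch under an assumed 2PL model with hypothetical item parameters (a point estimate only, not the interval-estimation procedure the note develops):

```python
import numpy as np

def expected_true_score(theta, a, b):
    """Classical true score implied by a 2PL model: the sum of item
    response probabilities at ability theta (the test
    characteristic curve)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return float(p.sum())

# Hypothetical discriminations (a) and difficulties (b):
print(expected_true_score(theta=0.5,
                          a=[1.2, 0.8, 1.5, 1.0],
                          b=[-0.5, 0.0, 0.7, 1.2]))
```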
Peer reviewed
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
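As background for the population-level case, the usual workflow computes the target statistic once per plausible value and pools the results with Rubin's rules. A sketch with hypothetical numbers (M = 5):

```python
import numpy as np

def pool_plausible_values(estimates, variances):
    """Pool a statistic computed once per plausible value via Rubin's
    rules: total variance = mean within-PV sampling variance plus
    (1 + 1/M) times the between-PV variance of the estimates."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = est.size
    between = est.var(ddof=1)
    return est.mean(), var.mean() + (1 + 1 / m) * between

# Hypothetical mean-ability estimates and their sampling variances:
print(pool_plausible_values([0.12, 0.15, 0.10, 0.14, 0.13],
                            [0.004, 0.005, 0.004, 0.005, 0.004]))
```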
Peer reviewed
Sideridis, Georgios; Tsaousis, Ioannis; Al Harbi, Khaleel – Educational and Psychological Measurement, 2017
The purpose of the present article was to illustrate, using an example from a national assessment, the value of analyzing the behavior of distractors in measures that use the multiple-choice format. A secondary purpose of the present article was to illustrate four remedial actions that can potentially improve the measurement of the…
Descriptors: Multiple Choice Tests, Attention Control, Testing, Remedial Instruction
Peer reviewed
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K. – Educational and Psychological Measurement, 2016
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables, as DIF is examined for each variable separately. In contrast, the item response theory with covariates (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Descriptors: Item Response Theory, Test Bias, Simulation, College Entrance Examinations
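The IRT-C model itself is beyond a short snippet, but the idea of testing DIF against several covariates in one model can be illustrated with logistic-regression DIF, a related (not identical) approach. The data below are simulated, with uniform DIF injected on one covariate only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
total = rng.integers(5, 40, n)     # matching total score
gender = rng.integers(0, 2, n)     # covariate 1: no DIF
language = rng.integers(0, 2, n)   # covariate 2: DIF injected
logit = -4.0 + 0.2 * total + 0.6 * language
item = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Regress the item response on the matching score and both covariates
# simultaneously; a nonzero covariate coefficient signals uniform DIF.
X = sm.add_constant(np.column_stack([total, gender, language]))
print(sm.Logit(item, X).fit(disp=False).summary())
```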
Peer reviewed
Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan – Educational and Psychological Measurement, 2015
Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…
Descriptors: College Entrance Examinations, High Stakes Tests, Graduate Study, Verbal Ability
Peer reviewed
Sari, Halil Ibrahim; Huggins, Anne Corinne – Educational and Psychological Measurement, 2015
This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF…
Descriptors: Test Bias, Comparative Analysis, Statistical Analysis, College Entrance Examinations
Peer reviewed
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
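The multiprocess IRT models under debate go well beyond it, but a crude descriptive index of extreme responding is simply each respondent's share of endpoint answers. A sketch with hypothetical 5-point data:

```python
import numpy as np

def extreme_response_share(responses, scale_max=5):
    """Proportion of endpoint (1 or scale_max) answers per respondent,
    a rough descriptive index of extreme response style."""
    r = np.asarray(responses)
    return ((r == 1) | (r == scale_max)).mean(axis=1)

# Rows = respondents, columns = items (hypothetical data):
print(extreme_response_share([[1, 5, 1, 5, 2],
                              [3, 2, 4, 3, 3]]))  # -> [0.8 0. ]
```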
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability
Peer reviewed
Sliter, Katherine A.; Zickar, Michael J. – Educational and Psychological Measurement, 2014
This study compared the functioning of positively and negatively worded personality items using item response theory. In Study 1, word pairs from the Goldberg Adjective Checklist were analyzed using the Graded Response Model. Across subscales, negatively worded items produced comparatively higher difficulty and lower discrimination parameters than…
Descriptors: Item Response Theory, Psychometrics, Personality Measures, Test Items
Peer reviewed
Jones, W. Paul – Educational and Psychological Measurement, 2014
A study in a university clinic/laboratory investigated adaptive Bayesian scaling as a supplement to interpretation of scores on the Mini-IPIP. A "probability of belonging" in categories of low, medium, or high on each of the Big Five traits was calculated after each item response and continued until all items had been used or until a…
Descriptors: Personality Traits, Personality Measures, Bayesian Statistics, Clinics
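The core update in such adaptive Bayesian scaling is straightforward: after each item, the posterior over the low/medium/high categories is the prior reweighted by the response likelihood, and testing stops once one category is probable enough. A sketch with hypothetical likelihoods and a hypothetical stopping rule:

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """One Bayesian step: posterior over {low, medium, high} is
    proportional to prior times P(observed response | category)."""
    post = np.asarray(prior, dtype=float) * np.asarray(likelihoods, dtype=float)
    return post / post.sum()

posterior = np.array([1/3, 1/3, 1/3])           # uniform start
for lik in ([0.1, 0.3, 0.6], [0.2, 0.3, 0.5]):  # hypothetical items
    posterior = update_posterior(posterior, lik)
    if posterior.max() > 0.9:                   # hypothetical stop rule
        break
print(posterior)
```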
Peer reviewed
Lake, Christopher J.; Withrow, Scott; Zickar, Michael J.; Wood, Nicole L.; Dalal, Dev K.; Bochinski, Joseph – Educational and Psychological Measurement, 2013
Adapting the original latitude of acceptance concept to Likert-type surveys, response latitudes are defined as the range of graded response options a person is willing to endorse. Response latitudes were expected to relate to attitude involvement such that high involvement was linked to narrow latitudes (the result of selective, careful…
Descriptors: Item Response Theory, Likert Scales, Attitude Measures, Surveys
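Operationally, a response latitude can be read off an endorsement pattern as the span of graded options a respondent marks acceptable. A sketch with a hypothetical 6-option item:

```python
import numpy as np

def latitude_width(endorsed):
    """Width of the response latitude: span of graded options the
    respondent is willing to endorse (1 = acceptable)."""
    idx = np.flatnonzero(endorsed)
    return int(idx[-1] - idx[0] + 1) if idx.size else 0

# Narrow latitude (high involvement) vs. wide latitude:
print(latitude_width([0, 0, 1, 1, 0, 0]))  # -> 2
print(latitude_width([0, 1, 1, 1, 1, 0]))  # -> 4
```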
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2013
The usefulness of the l_z person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l_z were not consistent with its theoretical distribution, resulting in general overfit to the item response…
Descriptors: Achievement Tests, College Students, Goodness of Fit, Item Response Theory
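For reference, l_z standardizes the log-likelihood of a response pattern by its conditional mean and variance; theoretically it is roughly standard normal, which is the approximation the study found wanting. A minimal sketch under an assumed 2PL model with hypothetical parameters:

```python
import numpy as np

def lz_person_fit(u, theta, a, b):
    """Standardized log-likelihood person-fit statistic l_z
    (Drasgow, Levine, & Williams, 1985) for dichotomous 2PL items."""
    u, a, b = (np.asarray(x, dtype=float) for x in (u, a, b))
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return float((l0 - mean) / np.sqrt(var))

# Hypothetical item parameters and one response pattern:
print(lz_person_fit(u=[1, 1, 1, 0, 0], theta=0.3,
                    a=[1.0, 1.2, 0.8, 1.5, 1.1],
                    b=[-1.0, -0.3, 0.2, 0.8, 1.5]))
```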