Showing 1 to 15 of 43 results
Peer reviewed
Direct link
Fujimoto, Ken A. – Journal of Educational Measurement, 2020
Multilevel bifactor item response theory (IRT) models are commonly used to account for features of the data that are related to the sampling and measurement processes used to gather those data. These models conventionally make assumptions about the portions of the data structure that represent these features. Unfortunately, when data violate these…
Descriptors: Bayesian Statistics, Item Response Theory, Achievement Tests, Secondary School Students
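As rough orientation to this model family (a generic two-level 2PL specification, not necessarily the exact parameterization Fujimoto studies): a multilevel bifactor IRT model splits the general trait into between-cluster and within-cluster components to reflect the sampling process, while group factors absorb measurement features such as item clustering. For person i in cluster k answering item j that belongs to group factor s,

    \operatorname{logit} P(x_{ijk} = 1) = a_{j0}\,(\theta^{(B)}_{0k} + \theta^{(W)}_{0ik}) + a_{js}\,\theta^{(W)}_{s,ik} + c_j,

where \theta^{(B)} carries cluster-level (e.g., school) variation and \theta^{(W)} person-level variation.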
Peer reviewed
Direct link
Pokropek, Artur; Marks, Gary N.; Borgonovi, Francesca – Journal of Educational Psychology, 2022
International Large-Scale Assessments (LSA) allow comparisons of education systems' effectiveness in promoting student learning in specific domains, such as reading, mathematics, and science. However, it has been argued that students' scores in International LSAs mostly reflect general cognitive ability (g). This study examines the extent to which…
Descriptors: Scores, Intelligence, International Assessment, Secondary School Students
Peer reviewed
Download full text (PDF on ERIC)
Sahin, Murat Dogan; Gelbal, Selahattin – International Journal of Assessment Tools in Education, 2020
The purpose of this study was to conduct a real-time multidimensional computerized adaptive test (MCAT) using data from a previous paper-pencil test (PPT) regarding the grammar and vocabulary dimensions of an end-of-term proficiency exam conducted on students in a preparatory class at a university. An item pool was established through four…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Language Proficiency
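The adaptive procedure Sahin and Gelbal describe follows the standard CAT loop: estimate ability from the responses so far, administer the unexposed item that is most informative at that estimate, and repeat until a stopping rule fires. A minimal unidimensional sketch in Python (an MCAT replaces the scalar theta with a vector and a multivariate information criterion); the 2PL item pool, fixed test length, and EAP scoring below are invented for illustration, not taken from the article:

import numpy as np

def p_correct(theta, a, b):
    # 2PL probability of a correct response
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # Fisher information of a 2PL item at theta
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap_theta(responses, a, b, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate under a standard-normal prior
    post = np.exp(-0.5 * grid**2)
    for x, aj, bj in zip(responses, a, b):
        p = p_correct(grid, aj, bj)
        post *= p**x * (1 - p)**(1 - x)
    return np.sum(grid * post) / np.sum(post)

rng = np.random.default_rng(0)
a_pool = rng.uniform(0.8, 2.0, 50)   # hypothetical discriminations
b_pool = rng.uniform(-2.0, 2.0, 50)  # hypothetical difficulties
true_theta = 0.7

administered, responses = [], []
theta = 0.0                          # start at the prior mean
for _ in range(15):                  # fixed-length stopping rule
    info = fisher_info(theta, a_pool, b_pool)
    info[administered] = -np.inf     # never reuse an item
    j = int(np.argmax(info))         # maximum-information selection
    x = int(rng.random() < p_correct(true_theta, a_pool[j], b_pool[j]))
    administered.append(j)
    responses.append(x)
    theta = eap_theta(responses, a_pool[administered], b_pool[administered])
print(f"final EAP estimate: {theta:.2f}")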
Peer reviewed
Direct link
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
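In the bi-factor treatment of wording effects that Wang, Chen, and Jin build on, every item loads on the substantive general factor, while negatively worded items additionally load on a wording (method) factor, so reverse-keyed items are not forced to behave like positively worded ones after recoding. A sketch of the implied item response function, with made-up parameter values (the article's own items and estimates are not reproduced here):

import numpy as np

def bifactor_p(theta_g, theta_w, a_g, a_w, c, neg_worded):
    # 2PL bi-factor response probability: all items load on the general
    # factor; only negatively worded items load on the wording factor.
    z = a_g * theta_g + np.where(neg_worded, a_w, 0.0) * theta_w + c
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 6-item scale; items 4-6 are negatively worded (recoded).
a_g = np.array([1.2, 1.0, 1.4, 1.1, 0.9, 1.3])  # general-factor slopes
a_w = np.array([0.0, 0.0, 0.0, 0.8, 0.7, 0.9])  # wording-factor slopes
c = np.zeros(6)
neg = np.array([False, False, False, True, True, True])

# Same substantive trait, opposite susceptibility to the wording effect:
print(bifactor_p(0.5, 1.0, a_g, a_w, c, neg).round(2))
print(bifactor_p(0.5, -1.0, a_g, a_w, c, neg).round(2))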
Peer reviewed
Direct link
Chung, Seungwon; Houts, Carrie – Measurement: Interdisciplinary Research and Perspectives, 2020
Advanced modeling of item response data through the item response theory (IRT) or item factor analysis frameworks is becoming increasingly popular. In the social and behavioral sciences, the underlying structure of tests/assessments is often multidimensional (i.e., more than 1 latent variable/construct is represented in the items). This review…
Descriptors: Item Response Theory, Evaluation Methods, Models, Factor Analysis
Peer reviewed
Direct link
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
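The conditional standard errors of measurement that studies like this evaluate typically derive from test information. As a standard IRT result (not specific to this article): at a given theta, the CSEM of the ability estimate is the reciprocal square root of the test information, and for a number-correct composite the conditional error variance is the sum of conditional item-score variances,

    \mathrm{CSEM}(\hat\theta \mid \theta) = \frac{1}{\sqrt{I(\theta)}}, \qquad I(\theta) = \sum_{j=1}^{J} I_j(\theta), \qquad \mathrm{CSEM}(X \mid \theta) = \sqrt{\sum_{j=1}^{J} \operatorname{Var}(X_j \mid \theta)},

where \operatorname{Var}(X_j \mid \theta) = P_j(\theta)\,(1 - P_j(\theta)) for a dichotomous item; for transformed scale scores, the raw-to-scale conversion is carried through the error estimate.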
Peer reviewed
Direct link
Reise, Steven; Moore, Tyler; Maydeu-Olivares, Alberto – Educational and Psychological Measurement, 2011
Reise, Cook, and Moore proposed a "comparison modeling" approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with…
Descriptors: Item Response Theory, Models, Computation, Comparative Analysis
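Reise, Cook, and Moore's comparison modeling amounts to fitting both models and inspecting, item by item, how far the unidimensional slope drifts from the general-factor slope of the unrestricted (e.g., bifactor) solution. A toy version of that comparison step, with invented numbers standing in for the two sets of fitted slopes:

import numpy as np

# Hypothetical slope estimates for six items:
a_unidim = np.array([1.45, 1.10, 0.95, 1.60, 1.30, 0.80])   # restricted model
a_general = np.array([1.20, 1.05, 0.90, 1.15, 1.25, 0.78])  # bifactor general factor

for j, (au, ag) in enumerate(zip(a_unidim, a_general), start=1):
    d = au - ag
    flag = "  <- possibly distorted by multidimensionality" if abs(d) > 0.25 else ""
    print(f"item {j}: unidim a = {au:.2f}, general a = {ag:.2f}, diff = {d:+.2f}{flag}")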
Peer reviewed
Direct link
Cai, Li – Psychometrika, 2010
Motivated by Gibbons et al.'s (Appl. Psychol. Meas. 31:4-19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and Rijmen, Vansteelandt, and De Boeck's (Psychometrika 73:167-182, 2008) work on constructing computationally efficient estimation algorithms for latent variable…
Descriptors: Educational Assessment, Public Health, Quality of Life, Measures (Individuals)
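The computational payoff in this line of work comes from bifactor dimension reduction: because each item loads on the general factor and at most one group factor, the (S+1)-dimensional integral in the marginal likelihood factors into nested one-dimensional integrals, so quadrature cost grows linearly rather than exponentially in the number of group factors. Schematically, for respondent i with group-factor item sets J_s (the standard identity going back to Gibbons and Hedeker, stated here from memory):

    L_i = \int \Biggl[ \prod_{s=1}^{S} \int \prod_{j \in J_s} P_j(x_{ij} \mid \theta_0, \theta_s)\, \phi(\theta_s)\, d\theta_s \Biggr] \phi(\theta_0)\, d\theta_0,

so only two dimensions ever have to be integrated jointly, no matter how large S is.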
Peer reviewed
Direct link
Immekus, Jason C.; Imbrie, P. K. – Educational and Psychological Measurement, 2008
Dimensionality assessment using the full-information item bifactor model for graded response data is provided. The model applies to data in which each item relates to a general factor and one group factor. Specifically, alternative model specification within item response theory (IRT) is shown to test a scale's factor structure. For illustrative…
Descriptors: Factor Structure, Measures (Individuals), Item Response Theory, Metacognition
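For orientation, the full-information bifactor model for graded response data combines the graded response model's cumulative logits with the bifactor loading pattern, each item j loading on the general factor \theta_0 and exactly one group factor \theta_s. Schematically (a standard formulation, not quoted from the article):

    P(x_{ij} \ge k) = \frac{1}{1 + \exp[-(a_{j0}\theta_0 + a_{js}\theta_s + c_{jk})]}, \qquad P(x_{ij} = k) = P(x_{ij} \ge k) - P(x_{ij} \ge k + 1),

with the intercepts c_{jk} ordered across categories k.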
Peer reviewed
Direct link
Stucky, Brian D.; Thissen, David; Edelen, Maria Orlando – Applied Psychological Measurement, 2013
Test developers often need to create unidimensional scales from multidimensional data. For item analysis, "marginal trace lines" capture the relation with the general dimension while accounting for nuisance dimensions and may prove to be a useful technique for creating short-form tests. This article describes the computations needed to obtain…
Descriptors: Test Construction, Test Length, Item Analysis, Item Response Theory
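A marginal trace line in the sense of Stucky, Thissen, and Edelen is the multidimensional item response surface averaged over the nuisance (group) dimension, leaving a single curve in the general dimension. A small numerical sketch, assuming a 2PL bifactor item with invented parameters and using Gauss-Hermite quadrature for the standard-normal integral:

import numpy as np

def bifactor_surface(theta_g, theta_s, a_g=1.3, a_s=0.9, c=-0.2):
    # 2PL bifactor response surface (hypothetical parameters)
    return 1.0 / (1.0 + np.exp(-(a_g * theta_g + a_s * theta_s + c)))

# Gauss-Hermite nodes/weights, rescaled for a standard-normal density.
nodes, weights = np.polynomial.hermite.hermgauss(21)
z = nodes * np.sqrt(2.0)
w = weights / np.sqrt(np.pi)

def marginal_trace(theta_g):
    # Average the surface over the nuisance factor: a curve in theta_g only.
    return np.sum(w * bifactor_surface(theta_g, z))

for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"theta_g = {t:+.1f}  marginal P = {marginal_trace(t):.3f}")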
Peer reviewed
Direct link
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Peer reviewed
Download full text (PDF on ERIC)
Tatarinova, Galiya; Neamah, Nour Raheem; Mohammed, Aisha; Hassan, Aalaa Yaseen; Obaid, Ali Abdulridha; Ismail, Ismail Abdulwahhab; Maabreh, Hatem Ghaleb; Afif, Al Khateeb Nashaat Sultan; Viktorovna, Shvedova Irina – International Journal of Language Testing, 2023
Unidimensionality is an important measurement assumption, but it is violated very often. Most of the time, tests are deliberately constructed to be multidimensional to cover all aspects of the intended construct. In such situations, the application of unidimensional item response theory (IRT) models is not justified due to poor model fit and…
Descriptors: Item Response Theory, Test Items, Language Tests, Correlation
Peer reviewed
Direct link
Cole, Ki; Paek, Insu – Measurement: Interdisciplinary Research and Perspectives, 2022
Statistical Analysis Software (SAS) is a widely used tool for data management and analysis across a variety of fields. Its item response theory procedure (PROC IRT) performs unidimensional and multidimensional item response theory (IRT) analyses for dichotomous and polytomous data. This review provides a summary of the features of PROC…
Descriptors: Item Response Theory, Computer Software, Item Analysis, Statistical Analysis
Peer reviewed
Direct link
Kim, Kyung Yong – Journal of Educational Measurement, 2020
New items are often evaluated prior to their operational use to obtain item response theory (IRT) item parameter estimates for quality control purposes. Fixed parameter calibration is one linking method that is widely used to estimate parameters for new items and place them on the desired scale. This article provides detailed descriptions of two…
Descriptors: Item Response Theory, Evaluation Methods, Test Items, Simulation
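Operational fixed parameter calibration runs inside an MML-EM cycle with the operational item parameters held at their banked values, which keeps the new items on the existing scale. The sketch below shows the core idea in a deliberately simplified two-step form (score examinees on the fixed items, then fit the new item against those scores); everything here is simulated, and this shortcut is not a substitute for the methods the article describes:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
theta_true = rng.normal(size=n)

def p2pl(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Fixed (operational) items: parameters already on the reporting scale.
a_fix = np.array([1.2, 0.9, 1.5, 1.1, 1.3])
b_fix = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])
x_fix = (rng.random((n, 5)) < p2pl(theta_true[:, None], a_fix, b_fix)).astype(int)

# Responses to one new (field-test) item whose parameters we want.
a_new, b_new = 1.4, 0.3  # generating values
x_new = (rng.random(n) < p2pl(theta_true, a_new, b_new)).astype(int)

# Step 1: EAP abilities from the fixed items only (preserves their scale).
grid = np.linspace(-4, 4, 161)
P = p2pl(grid[None, :], a_fix[:, None], b_fix[:, None])              # (5, 161)
log_post = x_fix @ np.log(P) + (1 - x_fix) @ np.log(1 - P) - 0.5 * grid**2
post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
theta_hat = (post @ grid) / post.sum(axis=1)

# Step 2: ML estimates of the new item's (a, b) given the fixed-item scores.
def neg_loglik(params):
    a, b = params
    p = p2pl(theta_hat, a, b)
    return -np.sum(x_new * np.log(p) + (1 - x_new) * np.log(1 - p))

res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
# The slope comes out somewhat inflated because EAP scores are shrunken;
# proper FIPC avoids this by integrating over theta within EM.
print("estimated (a, b):", np.round(res.x, 2), "generating:", (a_new, b_new))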
Peer reviewed
Direct link
Kim, Kyung Yong; Lim, Euijin; Lee, Won-Chan – International Journal of Testing, 2019
For passage-based tests, items that belong to a common passage often violate the local independence assumption of unidimensional item response theory (UIRT). In this case, ignoring local item dependence (LID) and estimating item parameters using a UIRT model could be problematic because doing so might result in inaccurate parameter estimates,…
Descriptors: Item Response Theory, Equated Scores, Test Items, Models
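One standard way to accommodate the passage-induced dependence at issue here is the testlet model, a constrained bifactor variant that adds a person-by-passage interaction to the 2PL:

    P(x_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j - \gamma_{i\,d(j)})]},

where \gamma_{i\,d(j)} is person i's effect for the testlet (passage) d(j) containing item j; as \operatorname{Var}(\gamma) \to 0 the model collapses back to UIRT, which is one way to gauge how much LID the ignored passage structure induces.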