| Publication Date | Records |
| In 2024 | 54 |
| Since 2023 | 103 |
| Since 2020 (last 5 years) | 282 |
| Since 2015 (last 10 years) | 625 |
| Since 2005 (last 20 years) | 1408 |
| Audience | Records |
| Researchers | 109 |
| Practitioners | 107 |
| Teachers | 45 |
| Administrators | 25 |
| Policymakers | 24 |
| Counselors | 12 |
| Parents | 7 |
| Students | 7 |
| Support Staff | 4 |
| Community | 2 |
| Location | Records |
| California | 60 |
| Canada | 58 |
| United States | 52 |
| Turkey | 47 |
| Australia | 42 |
| Florida | 34 |
| Germany | 26 |
| Netherlands | 25 |
| China | 24 |
| Texas | 24 |
| United Kingdom (England) | 21 |
| What Works Clearinghouse Rating | Records |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Cromley, Jennifer G.; Dai, Ting; Fechter, Tia; Nelson, Frank E.; Van Boekel, Martin; Du, Yang – Grantee Submission, 2021
Making inferences and reasoning with new scientific information is critical for successful performance in biology coursework. Thus, identifying students who are weak in these skills could allow the early provision of additional support and course placement recommendations to help students develop their reasoning abilities, leading to better…
Descriptors: Science Tests, Multiple Choice Tests, Logical Thinking, Inferences
Luo, Xin; Reckase, Mark D.; He, Wei – AERA Online Paper Repository, 2016
While dichotomous items dominate applications of computerized adaptive testing (CAT), polytomous and set-based items hold promise for incorporation into CAT. However, assembling a CAT that contains mixed item formats is challenging. This study investigated: (1) how the mixed-format CAT performs compared with the dichotomous-item-based CAT; (2)…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Adaptive Testing
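For readers unfamiliar with the dichotomous baseline the study compares against, the sketch below shows a bare-bones 2PL CAT loop: select the unused item with maximum Fisher information at the current ability estimate, score the response, and update the estimate. The item parameters and the grid-based EAP update are illustrative assumptions, not the paper's design, and the polytomous and set-based formats the paper studies are not covered here.

```python
import numpy as np

rng = np.random.default_rng(5)
a = rng.uniform(0.8, 2.0, 200)          # discriminations for a 200-item bank
b = rng.normal(0, 1, 200)               # difficulties
true_theta, theta_hat = 0.7, 0.0
used, responses = [], []

def prob(theta, idx):
    # 2PL probability of a correct response
    return 1 / (1 + np.exp(-a[idx] * (theta - b[idx])))

for _ in range(20):                                  # fixed test length
    p_all = prob(theta_hat, np.arange(200))
    info = a ** 2 * p_all * (1 - p_all)              # Fisher information per item
    info[used] = -np.inf                             # never reuse an item
    j = int(np.argmax(info))
    used.append(j)
    responses.append(rng.binomial(1, prob(true_theta, j)))
    # crude EAP ability update on a grid with a standard normal prior
    grid = np.linspace(-4, 4, 81)
    like = np.ones_like(grid)
    for jj, u in zip(used, responses):
        p = prob(grid, jj)
        like *= p ** u * (1 - p) ** (1 - u)
    post = like * np.exp(-grid ** 2 / 2)
    theta_hat = float(np.sum(grid * post) / np.sum(post))

print("final ability estimate:", round(theta_hat, 2))
```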
Chen, Yi-Jui Iva; Wilson, Mark; Irey, Robin C.; Requa, Mary K. – Language Testing, 2020
Orthographic processing -- the ability to perceive, access, differentiate, and manipulate orthographic knowledge -- is essential when learning to recognize words. Despite its critical importance in literacy acquisition, the field lacks a tool to assess this essential cognitive ability. The goal of this study was to design a computer-based…
Descriptors: Orthographic Symbols, Spelling, Word Recognition, Reading Skills
Hidalgo, Ma Dolores; Benítez, Isabel; Padilla, Jose-Luis; Gómez-Benito, Juana – Sociological Methods & Research, 2017
The growing use of scales in survey questionnaires warrants addressing how polytomous differential item functioning (DIF) affects observed scale score comparisons. The aim of this study is to investigate the impact of DIF on the Type I error and effect size of the independent samples t-test on the observed total scale scores. A…
Descriptors: Test Items, Test Bias, Item Response Theory, Surveys
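The question the study poses can be illustrated with a small simulation (the design details below are assumptions, not the authors'): two groups with equal latent means, a few polytomous items contaminated with DIF against the focal group, and the empirical rejection rate of the independent samples t-test on total scores, which would stay near the nominal 0.05 level if DIF had no impact.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_items, n_dif, reps = 200, 20, 4, 2000
cuts = np.linspace(-1.5, 1.5, 4)                    # 5-category items

def item_scores(theta, dif_shift):
    # observed item score = number of thresholds exceeded (0..4);
    # dif_shift raises the thresholds of DIF items for that group only
    latent = theta[:, None] + rng.normal(0, 1, (len(theta), n_items))
    return (latent[:, :, None] >
            (cuts[None, None, :] + dif_shift[None, :, None])).sum(-1)

shift_ref = np.zeros(n_items)
shift_foc = np.zeros(n_items)
shift_foc[:n_dif] = 0.5                             # 4 items show DIF against the focal group

rejections = 0
for _ in range(reps):
    theta_ref = rng.normal(0, 1, n)
    theta_foc = rng.normal(0, 1, n)                 # equal latent means: H0 is true
    total_ref = item_scores(theta_ref, shift_ref).sum(axis=1)
    total_foc = item_scores(theta_foc, shift_foc).sum(axis=1)
    if stats.ttest_ind(total_ref, total_foc).pvalue < 0.05:
        rejections += 1

print("empirical Type I error:", rejections / reps)
```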
Deng, Weiling; Monfils, Lora – ETS Research Report Series, 2017
Using simulated data, this study examined the impact of different levels of stringency of the valid case inclusion criterion on item response theory (IRT)-based true score equating over 5 years in the context of K-12 assessment when growth in student achievement is expected. Findings indicate that the use of the most stringent inclusion criterion…
Descriptors: Item Response Theory, Equated Scores, True Scores, Educational Assessment
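As background on the equating method the report examines, the sketch below illustrates IRT true-score equating under the 2PL with made-up item parameters (not the report's data): each integer score on form X is mapped to the ability at which form X's test characteristic curve equals that score, and then to form Y's true score at that ability.

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    # test characteristic curve: expected (true) score under the 2PL
    return np.sum(1 / (1 + np.exp(-a * (theta - b))))

rng = np.random.default_rng(3)
a_x, b_x = rng.uniform(0.8, 2.0, 40), rng.normal(0.0, 1, 40)   # form X items
a_y, b_y = rng.uniform(0.8, 2.0, 40), rng.normal(0.1, 1, 40)   # form Y items

for score_x in range(5, 36, 5):
    # ability at which the form X true score equals score_x ...
    theta = brentq(lambda t: tcc(t, a_x, b_x) - score_x, -6, 6)
    # ... and the form Y true score at that ability is the equated score
    print(score_x, "->", round(tcc(theta, a_y, b_y), 2))
```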
Ventista, Ourania Maria – E-Learning and Digital Media, 2018
Massive Open Online Courses appear to have high attrition rates, involve students in peer-assessment with patriotic bias and promote education for already educated people. This paper suggests a formative assessment model which takes into consideration these issues. Specifically, this paper focuses on the assessment of open-format questions in…
Descriptors: Student Evaluation, Self Evaluation (Individuals), Large Group Instruction, Online Courses
Shah, Lisa; Hao, Jie; Rodriguez, Christian A.; Fallin, Rebekah; Linenberger-Cortes, Kimberly; Ray, Herman E.; Rushton, Gregory T. – Physical Review Physics Education Research, 2018
A generally agreed-upon tenet of the physics teaching community is the centrality of subject-specific expertise in effective teaching. However, studies which assess the content knowledge of incoming K-12 physics teachers in the U.S. have not yet been reported. Similarly lacking are studies on whether or how the demographic makeup of aspiring physics…
Descriptors: Praxis, Physics, Expertise, Demography
Siddiqui, Ali; Sartaj, Shabana; Shah, Syed Waqar Ali – Advances in Language and Literary Studies, 2018
Language assessment and testing are crucial aspects of the teaching and learning process. This study therefore focuses on these two aspects, taking a critical view of their practical side, that is, what happens after a carefully conducted teaching process. The crucial notion of practicing language assessments…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Learning Processes
Wedman, Jonathan – Scandinavian Journal of Educational Research, 2018
Gender fairness in testing can be impeded by the presence of differential item functioning (DIF), which potentially causes test bias. In this study, the presence and causes of gender-related DIF were investigated with real data from 800 items answered by 250,000 test takers. DIF was examined using the Mantel-Haenszel and logistic regression…
Descriptors: Gender Differences, College Entrance Examinations, Test Items, Vocabulary
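As context for the first of the two DIF methods named above, the sketch below computes the Mantel-Haenszel common odds ratio for a single dichotomous item, stratifying on a matching score. The data are simulated toy values, not the study's 800-item, 250,000-examinee data set.

```python
import numpy as np

def mh_common_odds_ratio(item, group, total):
    """Mantel-Haenszel common odds ratio for one dichotomous item.
    item: 0/1 responses, group: 0 = reference / 1 = focal,
    total: matching variable such as the total test score."""
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        n_s = m.sum()
        a = np.sum(m & (group == 0) & (item == 1))   # reference, correct
        b = np.sum(m & (group == 0) & (item == 0))   # reference, incorrect
        c = np.sum(m & (group == 1) & (item == 1))   # focal, correct
        d = np.sum(m & (group == 1) & (item == 0))   # focal, incorrect
        num += a * d / n_s
        den += b * c / n_s
    return num / den                                  # values far from 1 suggest DIF

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 2000)
total = rng.integers(0, 41, 2000)                     # stand-in matching score
item = rng.binomial(1, 0.35 + 0.01 * total - 0.05 * group)
print(round(mh_common_odds_ratio(item, group, total), 2))
```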
Magis, David; Tuerlinckx, Francis; De Boeck, Paul – Journal of Educational and Behavioral Statistics, 2015
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
Descriptors: Test Bias, Test Items, Regression (Statistics), Scores
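A rough way to approximate the idea with off-the-shelf tools (this is not the authors' implementation) is to stack all item responses in long format and fit one L1-penalized logistic regression containing item intercepts, a matching total-score term, and item-by-group interactions; nonzero interaction coefficients flag candidate DIF items. The simulated data and the scikit-learn penalty settings below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 10
theta = rng.normal(size=n_persons)
group = rng.integers(0, 2, n_persons)                 # 0 = reference, 1 = focal
b = rng.normal(size=n_items)
dif = np.zeros(n_items)
dif[0] = 0.8                                          # only item 0 carries DIF
p = 1 / (1 + np.exp(-(theta[:, None] - b + dif * group[:, None])))
resp = rng.binomial(1, p)                             # persons x items

# long format: one row per person-item response
person = np.repeat(np.arange(n_persons), n_items)
item = np.tile(np.arange(n_items), n_persons)
y = resp.reshape(-1)
item_dummies = np.eye(n_items)[item]                  # item-specific intercepts
matching = resp.sum(axis=1)[person, None]             # total score as matching variable
interactions = item_dummies * group[person, None]     # item-by-group terms (lasso targets)
X = np.hstack([item_dummies, matching, interactions])

fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                         fit_intercept=False).fit(X, y)
print(np.round(fit.coef_[0][n_items + 1:], 2))        # nonzero entries flag DIF items
```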
Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods
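For readers unfamiliar with the model named in the title, the DINA item response function can be written out in a few lines (standard textbook form, not the authors' code): an examinee answers correctly with probability 1 minus the slip parameter if they have mastered every attribute the item requires, and with the guessing probability otherwise.

```python
import numpy as np

def dina_prob(alpha, q_row, slip, guess):
    """P(correct | attribute profile alpha) under the DINA model."""
    eta = int(np.all(alpha[q_row == 1] == 1))   # has every required attribute
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1])          # examinee masters attributes 1 and 3
q_row = np.array([1, 0, 1])          # item requires attributes 1 and 3
print(dina_prob(alpha, q_row, slip=0.1, guess=0.2))   # -> 0.9
```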
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Kruse, Adam J. – Update: Applications of Research in Music Education, 2016
The findings and discussions related to cultural bias in testing have in no way been unanimous. However, the considerations of this area of inquiry may possess meaningful implications for educators of any subject. In this review of literature, I describe the issues, research, and arguments surrounding cultural bias in testing and discuss…
Descriptors: Music, Music Education, Testing, Cultural Influences
Song, Huan; Xu, Miao – ECNU Review of Education, 2019
Purpose: From the perspective of performance standards-based teacher education, this article aimed to address progress and challenges of China's teacher preparation quality assurance system. Design/Approach/Methods: This review is based on policy review and case studies. Drawing on the existing research literature, this research sorted out the…
Descriptors: Quality Assurance, Educational Quality, Educational Policy, Standards
Lee, Yi-Hsuan; Zhang, Jinming – International Journal of Testing, 2017
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Descriptors: Test Bias, Test Reliability, Performance, Scores
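The two reliability-type quantities the study tracks can be written out directly (toy numbers below, not the study's simulation design): the ratio of true-score variance to observed-score variance, and the IRT standard error of estimation, which is one over the square root of the test information at a given ability.

```python
import numpy as np

rng = np.random.default_rng(4)
true_scores = rng.normal(50, 8, 10_000)
observed = true_scores + rng.normal(0, 4, 10_000)      # add measurement error
rho = true_scores.var() / observed.var()
print("reliability (true / observed variance):", round(rho, 3))   # about 0.8

# 2PL test information at an ability value, and the corresponding SE of estimation
a, b = rng.uniform(0.8, 2.0, 30), rng.normal(0, 1, 30)
theta = 0.0
p = 1 / (1 + np.exp(-a * (theta - b)))
info = np.sum(a ** 2 * p * (1 - p))
print("SE of ability estimate at theta = 0:", round(1 / np.sqrt(info), 3))
```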
