Publication Date
| Period | Count |
| --- | --- |
| In 2024 | 81 |
| Since 2023 | 210 |
| Since 2020 (last 5 years) | 607 |
| Since 2015 (last 10 years) | 1471 |
| Since 2005 (last 20 years) | 2530 |

Descriptor
| Descriptor | Count |
| --- | --- |
| Multiple Choice Tests | 4660 |
| Foreign Countries | 1228 |
| Test Items | 1111 |
| Test Construction | 944 |
| Higher Education | 638 |
| Comparative Analysis | 622 |
| Scores | 608 |
| Teaching Methods | 587 |
| Statistical Analysis | 545 |
| Test Validity | 535 |
| Test Reliability | 519 |

Audience
| Audience | Count |
| --- | --- |
| Practitioners | 122 |
| Teachers | 104 |
| Researchers | 64 |
| Students | 46 |
| Administrators | 13 |
| Policymakers | 7 |
| Counselors | 3 |
| Parents | 3 |

Location
| Location | Count |
| --- | --- |
| Canada | 133 |
| Turkey | 129 |
| Australia | 122 |
| Iran | 66 |
| Indonesia | 52 |
| Germany | 46 |
| United Kingdom | 46 |
| Taiwan | 44 |
| United States | 42 |
| China | 36 |
| California | 34 |

What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 3 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 6 |

Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses

Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items

Christopher Wheatley; James Wells; John Stewart – Physical Review Physics Education Research, 2024
The Brief Electricity and Magnetism Assessment (BEMA) is a multiple-choice instrument commonly used to measure introductory undergraduate students' conceptual understanding of electricity and magnetism. This study used a network analysis technique called modified module analysis-partial (MMA-P) to identify clusters of correlated responses, also…
Descriptors: Multiple Choice Tests, Energy, Magnets, Undergraduate Students

Patricia Dowsett; Nathanael Reinertsen – Australian Journal of Language and Literacy, 2023
Senior secondary Literature courses in Australia all aim, to various extents, to develop students' critical literacy skills. These aims share emphases on reading, reflecting and responding critically to texts, on critical analysis and critical ideas, and on forming interpretations informed by critical perspectives. Critical literacy is not…
Descriptors: Foreign Countries, High School Students, Literacy, Multiple Choice Tests

Mingfeng Xue; Mark Wilson – Applied Measurement in Education, 2024
Multidimensionality is common in psychological and educational measurements. This study focuses on dimensions that converge at the upper anchor (i.e., the highest acquisition status defined in a learning progression) and compares different ways of dealing with them using the multidimensional random coefficients multinomial logit model and scale…
Descriptors: Learning Trajectories, Educational Assessment, Item Response Theory, Evolution

Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for an automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing

Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level

Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming, and error-prone. This is because each MCQ comprises a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics

Harnejan K. Atwal; Kenjiro W. Quides – Journal of Microbiology & Biology Education, 2024
Many 4-year public institutions face significant pedagogical challenges due to the high ratio of students to teaching team members. To address the issue, we developed a workflow using the programming language R as a method to rapidly grade multiple-choice questions, adjust for errors, and grade answer-dependent style multiple-choice questions,…
Descriptors: Programming Languages, Public Colleges, Grading, Holistic Evaluation

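The grading workflow described in this entry can be illustrated with a minimal sketch: score each student's responses against an answer key, with multi-answer key entries to support regrading after a key error is found. This is a hedged illustration only (the paper's actual scripts use R and are not reproduced here; the item IDs, key, and scoring rules below are hypothetical):

```python
# Minimal sketch of automated multiple-choice grading (hypothetical data;
# Python used for illustration -- the study above implements its workflow in R).

def grade_mcq(responses, key, points_per_item=1):
    """Score one student's responses against the answer key.

    Items whose key entry is a set accept any of several answers,
    which supports regrading after an error in the original key.
    """
    score = 0
    for item, correct in key.items():
        answer = responses.get(item)
        accepted = correct if isinstance(correct, set) else {correct}
        if answer in accepted:
            score += points_per_item
    return score

# Hypothetical answer key; item Q3 was regraded to accept two choices.
key = {"Q1": "A", "Q2": "C", "Q3": {"B", "D"}}

student = {"Q1": "A", "Q2": "B", "Q3": "D"}
print(grade_mcq(student, key))  # 2 of 3 items correct
```

Because the key is plain data, correcting it and re-running the grader over all response files takes one pass, which is the main speedup such a workflow offers over manual regrading.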
Badali, Sabrina; Rawson, Katherine A.; Dunlosky, John – Educational Psychology Review, 2023
Multiple-choice practice tests are beneficial for learning, and students encounter multiple-choice questions regularly. How do students regulate their use of multiple-choice practice testing? And, how effective is students' use of multiple-choice practice testing? In the current experiments, undergraduate participants practiced German-English word…
Descriptors: Undergraduate Students, Drills (Practice), Multiple Choice Tests, Student Behavior

Gayman, C. M.; Jimenez, S. T.; Hammock, S.; Taylor, S.; Rocheleau, J. M. – Journal of Behavioral Education, 2023
Interteaching is a behavioral teaching method that has been empirically shown to increase student learning outcomes. The present study investigated the effect of combining interteaching with cumulative versus noncumulative exams in two sections of an online asynchronous class. Interteaching was used in both sections of the course. The…
Descriptors: Teaching Methods, Testing, Online Courses, Asynchronous Communication

Collier, Jessica R.; Pillai, Raunak M.; Fazio, Lisa K. – Cognitive Research: Principles and Implications, 2023
Fact-checkers want people to both read and remember their misinformation debunks. Retrieval practice is one way to increase memory; thus, multiple-choice quizzes may be a useful tool for fact-checkers. We tested whether exposure to quizzes improved people's accuracy ratings for fact-checked claims and their memory for specific information within a…
Descriptors: Informed Consent, Audits (Verification), Multiple Choice Tests, Beliefs

Ersan, Ozge; Berry, Yufeng – Educational Measurement: Issues and Practice, 2023
The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Elementary Secondary Education, Accountability

David G. Schreurs; Jaclyn M. Trate; Shalini Srinivasan; Melonie A. Teichert; Cynthia J. Luxford; Jamie L. Schneider; Kristen L. Murphy – Chemistry Education Research and Practice, 2024
With the already widespread nature of multiple-choice assessments and the increasing popularity of answer-until-correct, it is important to have methods available for exploring the validity of these types of assessments as they are developed. This work analyzes a 20-question multiple choice assessment covering introductory undergraduate chemistry…
Descriptors: Multiple Choice Tests, Test Validity, Introductory Courses, Science Tests

Sarah Alahmadi; Christine E. DeMars – Applied Measurement in Education, 2024
Large-scale educational assessments are sometimes considered low-stakes, increasing the possibility of confounding true performance level with low motivation. These concerns are amplified in remote testing conditions. To remove the effects of low effort levels in responses observed in remote low-stakes testing, several motivation filtering methods…
Descriptors: Multiple Choice Tests, Item Response Theory, College Students, Scores
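Motivation filtering of the kind this entry describes is commonly implemented as response-time effort filtering: responses faster than a per-item time threshold are treated as rapid guesses and dropped before scoring. The sketch below is a generic illustration of that idea, not the specific filtering methods evaluated in the study; the item IDs, times, and thresholds are hypothetical:

```python
# Minimal sketch of response-time-based motivation filtering (hypothetical
# thresholds and data; not the specific procedure used in the study above).

def filter_rapid_guesses(responses, times, thresholds):
    """Drop responses answered faster than each item's time threshold.

    responses  -- dict of item -> chosen answer
    times      -- dict of item -> response time in seconds
    thresholds -- dict of item -> minimum plausible solution time
    """
    return {
        item: answer
        for item, answer in responses.items()
        if times[item] >= thresholds[item]
    }

responses = {"Q1": "B", "Q2": "A", "Q3": "D"}
times = {"Q1": 14.2, "Q2": 1.1, "Q3": 22.5}      # Q2 looks like a rapid guess
thresholds = {"Q1": 3.0, "Q2": 3.0, "Q3": 5.0}   # per-item cutoffs

print(filter_rapid_guesses(responses, times, thresholds))
# Q2 is removed; Q1 and Q3 are retained
```

In practice the thresholds themselves must be chosen carefully (e.g., from the response-time distribution per item), since the filtering decision is only as defensible as the cutoffs behind it.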
