Showing 1 to 15 of 4,660 results
Peer reviewed
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
Peer reviewed
Janet Mee; Ravi Pandian; Justin Wolczynski; Amy Morales; Miguel Paniagua; Polina Harik; Peter Baldwin; Brian E. Clauser – Advances in Health Sciences Education, 2024
Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies…
Descriptors: Multiple Choice Tests, High Stakes Tests, Test Format, Test Items
Peer reviewed
Christopher Wheatley; James Wells; John Stewart – Physical Review Physics Education Research, 2024
The Brief Electricity and Magnetism Assessment (BEMA) is a multiple-choice instrument commonly used to measure introductory undergraduate students' conceptual understanding of electricity and magnetism. This study used a network analysis technique called modified module analysis-partial (MMA-P) to identify clusters of correlated responses, also…
Descriptors: Multiple Choice Tests, Energy, Magnets, Undergraduate Students
Peer reviewed
Patricia Dowsett; Nathanael Reinertsen – Australian Journal of Language and Literacy, 2023
Senior secondary Literature courses in Australia all aim, to various extents, to develop students' critical literacy skills. These aims share emphases on reading, reflecting and responding critically to texts, on critical analysis and critical ideas, and on forming interpretations informed by critical perspectives. Critical literacy is not…
Descriptors: Foreign Countries, High School Students, Literacy, Multiple Choice Tests
Peer reviewed
Mingfeng Xue; Mark Wilson – Applied Measurement in Education, 2024
Multidimensionality is common in psychological and educational measurements. This study focuses on dimensions that converge at the upper anchor (i.e. the highest acquisition status defined in a learning progression) and compares different ways of dealing with them using the multidimensional random coefficients multinomial logit model and scale…
Descriptors: Learning Trajectories, Educational Assessment, Item Response Theory, Evolution
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for an automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
Berenbon, Rebecca F.; McHugh, Bridget C. – Educational Measurement: Issues and Practice, 2023
To assemble a high-quality test, psychometricians rely on subject matter experts (SMEs) to write high-quality items. However, SMEs are not typically given the opportunity to provide input on which content standards are most suitable for multiple-choice questions (MCQs). In the present study, we explored the relationship between perceived MCQ…
Descriptors: Test Items, Multiple Choice Tests, Standards, Difficulty Level
Peer reviewed
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming and error-prone. This is because each MCQ comprises a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Peer reviewed
Harnejan K. Atwal; Kenjiro W. Quides – Journal of Microbiology & Biology Education, 2024
Many 4-year public institutions face significant pedagogical challenges due to the high ratio of students to teaching team members. To address the issue, we developed a workflow using the programming language R as a method to rapidly grade multiple-choice questions, adjust for errors, and grade answer-dependent style multiple-choice questions,…
Descriptors: Programming Languages, Public Colleges, Grading, Holistic Evaluation
Peer reviewed
Badali, Sabrina; Rawson, Katherine A.; Dunlosky, John – Educational Psychology Review, 2023
Multiple-choice practice tests are beneficial for learning, and students encounter multiple-choice questions regularly. How do students regulate their use of multiple-choice practice testing? And, how effective is students' use of multiple-choice practice testing? In the current experiments, undergraduate participants practiced German-English word…
Descriptors: Undergraduate Students, Drills (Practice), Multiple Choice Tests, Student Behavior
Peer reviewed
Gayman, C. M.; Jimenez, S. T.; Hammock, S.; Taylor, S.; Rocheleau, J. M. – Journal of Behavioral Education, 2023
Interteaching is a behavioral teaching method that has been empirically shown to increase student learning outcomes. The present study investigated the effect of combining interteaching with cumulative versus noncumulative exams in two sections of an online asynchronous class. Interteaching was used in both sections of the course. The…
Descriptors: Teaching Methods, Testing, Online Courses, Asynchronous Communication
Peer reviewed
Collier, Jessica R.; Pillai, Raunak M.; Fazio, Lisa K. – Cognitive Research: Principles and Implications, 2023
Fact-checkers want people to both read and remember their misinformation debunks. Retrieval practice is one way to increase memory, thus multiple-choice quizzes may be a useful tool for fact-checkers. We tested whether exposure to quizzes improved people's accuracy ratings for fact-checked claims and their memory for specific information within a…
Descriptors: Informed Consent, Audits (Verification), Multiple Choice Tests, Beliefs
Peer reviewed
Ersan, Ozge; Berry, Yufeng – Educational Measurement: Issues and Practice, 2023
The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Elementary Secondary Education, Accountability
Peer reviewed
David G. Schreurs; Jaclyn M. Trate; Shalini Srinivasan; Melonie A. Teichert; Cynthia J. Luxford; Jamie L. Schneider; Kristen L. Murphy – Chemistry Education Research and Practice, 2024
With the already widespread nature of multiple-choice assessments and the increasing popularity of answer-until-correct, it is important to have methods available for exploring the validity of these types of assessments as they are developed. This work analyzes a 20-question multiple choice assessment covering introductory undergraduate chemistry…
Descriptors: Multiple Choice Tests, Test Validity, Introductory Courses, Science Tests
Peer reviewed
Sarah Alahmadi; Christine E. DeMars – Applied Measurement in Education, 2024
Large-scale educational assessments are sometimes considered low-stakes, increasing the possibility of confounding true performance level with low motivation. These concerns are amplified in remote testing conditions. To remove the effects of low effort levels in responses observed in remote low-stakes testing, several motivation filtering methods…
Descriptors: Multiple Choice Tests, Item Response Theory, College Students, Scores