Showing 1 to 15 of 193 results
Peer reviewed
PDF on ERIC Download full text
Naima Debbar – International Journal of Contemporary Educational Research, 2024
Intelligent essay-grading systems are important tools in educational technology. They can substantially reduce manual scoring effort and provide instructional feedback as well. These systems typically include two main parts: a feature extractor and an automatic grading model. The latter is generally based on computational and…
Descriptors: Test Scoring Machines, Computer Uses in Education, Artificial Intelligence, Essay Tests
Peer reviewed
Direct link
Zoe L. Handley; Haiping Wang – Language Assessment Quarterly, 2024
This paper explores what the measures of utterance fluency typically employed in Automatic Speech Evaluation (ASE), i.e. automated speaking assessments, tell us about oral proficiency. Sixty Chinese learners of English completed the second part of the speaking section of IELTS and six tasks designed to measure the linguistic knowledge and processing…
Descriptors: Foreign Countries, Speech Evaluation, Graduate Students, Articulation (Speech)
Peer reviewed
Direct link
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects provides an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, the ratings data must be sufficiently connected to estimate rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
Peer reviewed
PDF on ERIC Download full text
Clodagh Carroll – European Journal of Science and Mathematics Education, 2024
With the initial COVID-19 lockdown of March 2020 in Ireland, many modules in university programmes that were designed to be delivered face-to-face were suddenly switched to remote delivery. The difficulty for both lecturers and students in replicating face-to-face interaction and the frequent lack of lecturers' visibility of students' work in such…
Descriptors: Foreign Countries, College Freshmen, Mathematics Education, Mathematics Instruction
Peer reviewed
Direct link
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Peer reviewed
Direct link
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only for test design and rating, but also for the learning and instruction of speaking, depending on how students and teachers perceive and react to this technology. However, its washback has remained underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Peer reviewed
Direct link
Reagan Mozer; Luke Miratrix; Jackie Relyea; Jimmy Kim – Society for Research on Educational Effectiveness, 2021
Background: In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome.…
Descriptors: Elementary School Students, Grade 1, Grade 2, Science Education
Peer reviewed
PDF on ERIC Download full text
Zhang, Mo; Chen, Jing; Ruan, Chunyi – ETS Research Report Series, 2016
Successful detection of unusual responses is critical for using machine scoring in the assessment context. This study evaluated the utility of approaches to detecting unusual responses in automated essay scoring. Two research questions were pursued. One question concerned the performance of various prescreening advisory flags, and the other…
Descriptors: Essays, Scoring, Automation, Test Scoring Machines
Peer reviewed
Direct link
Young, Chadwick; Lo, Glenn; Young, Kaisa; Borsetta, Alberto – Physics Teacher, 2016
The multiple-choice exam remains a staple of many introductory physics courses. In the past, people have graded these by hand or even with flaming needles. Today, one usually grades the exams with a form scanner that uses optical mark recognition (OMR). Several companies provide these scanners and particular forms, such as the eponymous…
Descriptors: Multiple Choice Tests, Open Source Technology, Grading, Introductory Courses
Peer reviewed
Direct link
Raczynski, Kevin; Cohen, Allan – Applied Measurement in Education, 2018
The literature on Automated Essay Scoring (AES) systems has provided useful validation frameworks for any assessment that includes AES scoring. Furthermore, evidence for the scoring fidelity of AES systems is accumulating. Yet questions remain when appraising the scoring performance of AES systems. These questions include: (a) which essays are…
Descriptors: Essay Tests, Test Scoring Machines, Test Validity, Evaluators
Peer reviewed
Direct link
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta – International Association for Development of the Information Society, 2017
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment somewhat easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk to the "objective assessment outcome." On…
Descriptors: Testing, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
PDF on ERIC Download full text
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew – ETS Research Report Series, 2017
This report presents an overview of the "SpeechRater" automated scoring engine model building and evaluation process for several item types with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…
Descriptors: Automation, Scoring, Speech Tests, Test Items
Peer reviewed
PDF on ERIC Download full text
Chen, Lei; Zechner, Klaus; Yoon, Su-Youn; Evanini, Keelan; Wang, Xinhao; Loukina, Anastassia; Tao, Jidong; Davis, Lawrence; Lee, Chong Min; Ma, Min; Mundowsky, Robert; Lu, Chi; Leong, Chee Wee; Gyawali, Binod – ETS Research Report Series, 2018
This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the "SpeechRater" automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in…
Descriptors: Computer Assisted Testing, Scoring, Test Scoring Machines, Speech Tests
Peer reviewed
Direct link
Kang, Okim; Johnson, David – Language Assessment Quarterly, 2018
Suprasegmental features have received growing attention in the field of oral assessment. In this article we describe a set of computer algorithms that automatically scores the oral proficiency of non-native speakers using unconstrained English speech. The algorithms employ machine learning and 11 suprasegmental measures divided into four groups…
Descriptors: Suprasegmentals, Predictive Validity, Predictor Variables, Oral English