Showing all 14 results
Peer reviewed
PDF on ERIC Download full text
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland, on the basis of two tasks with two associated prompts each, from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Peer reviewed
PDF on ERIC Download full text
Ghosh, Siddhartha; Fatima, Sameen S. – Journal of Educational Technology, 2007
Automated essay grading or scoring systems are no longer a myth; they are a reality. Today, human-written (though not handwritten) essays are corrected not only by examiners and teachers but also by machines. The TOEFL exam is one of the best-known examples of this application. Students' essays are evaluated both by human raters and by web-based automated…
Descriptors: Foreign Countries, Essays, Grading, Automation
Peer reviewed
Direct link
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Peer reviewed
Direct link
Sinclair, Jeanne; Larson, Elizabeth Jean; Rajendram, Shakina – Language Assessment Quarterly, 2019
The number of internationally mobile students pursuing higher education increases each year, with 8 million students expected to study abroad globally by 2025 (Farrugia, 2014). Many English-dominant universities require international applicants to provide standardized test scores as evidence of English proficiency. Accordingly, millions of…
Descriptors: Graduate Students, Student Attitudes, High Stakes Tests, Standardized Tests
Peer reviewed
Direct link
Gu, Lin; Davis, Larry; Tao, Jacob; Zechner, Klaus – Assessment in Education: Principles, Policy & Practice, 2021
Recent advances in technology have increased the prospects for automated spoken language technology to provide feedback on speaking performance. In this study, we examined user perceptions of using an automated feedback system to prepare for the TOEFL iBT® test. Test takers and language teachers evaluated three types of machine-generated…
Descriptors: Audio Equipment, Test Preparation, Feedback (Response), Scores
Peer reviewed
PDF on ERIC Download full text
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the TOEFL Junior® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Peer reviewed
PDF on ERIC Download full text
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
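The linear combination described in the Yao, Haberman, and Zhang abstract above is, at its core, a weighted sum of a human holistic score and computer-generated feature scores. A minimal Python sketch follows; the feature names, weights, and bias are invented for illustration, not taken from the report.

    # Hypothetical sketch of a summary writing score as a linear combination
    # of a human holistic score and machine-scored essay features.
    def summary_writing_score(human_holistic, machine_features, weights, bias=0.0):
        """Combine a human holistic score with computer-generated feature
        scores (e.g., grammatical errors per word) into one summary score."""
        score = bias + weights["human"] * human_holistic
        for name, value in machine_features.items():
            score += weights[name] * value
        return score

    # Example with made-up numbers; the negative weight penalizes error rate:
    # 0.7 * 4.0 + (-10.0) * 0.02 = 2.6
    print(summary_writing_score(
        human_holistic=4.0,
        machine_features={"grammar_errors_per_word": 0.02},
        weights={"human": 0.7, "grammar_errors_per_word": -10.0},
    ))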
Peer reviewed
Direct link
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only for test design and rating, but also for the learning and instruction of speaking, based on how students and teachers perceive and react to this technology. However, its washback remains underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Peer reviewed
Direct link
Monteiro, Kátia R.; Crossley, Scott A.; Kyle, Kristopher – Applied Linguistics, 2020
Lexical items that are encountered more frequently and in varying contexts have important effects on second language (L2) development because frequent and contextually diverse words are learned faster and become more entrenched in a learner's lexicon (Ellis 2002a, b). Despite evidence that L2 learners are generally exposed to non-native input,…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Benchmarking
Peer reviewed
Direct link
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to place students effectively in freshman and developmental English programs. The placement system includes triangulating data from external test scores with scores from a panel-marked handwritten essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Eckert, Doug – 1996
A study of the translation process compared the decisions made by native speakers (experts) and non-native speakers (non-experts) that influenced the resulting translations. Subjects were 40 students, graduate students, and faculty in a university foreign language department. English language proficiency was measured for native speakers by using the…
Descriptors: College Faculty, College Students, Comparative Analysis, Decision Making
Peer reviewed
PDF on ERIC Download full text
Loukina, Anastassia; Buzick, Heather – ETS Research Report Series, 2017
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
Descriptors: Automation, Scoring, Language Tests, Speech Tests
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
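The generic approach described in the Attali, Bridgeman, and Trapani abstract above amounts to applying one fixed feature set and one fixed set of weights to every prompt, so that scores carry the same meaning across prompts. A minimal Python sketch, with hypothetical feature names and weights not drawn from the paper:

    # One weight vector shared by all prompts, existing or new, is what makes
    # the resulting scores comparable across prompts.
    GENERIC_WEIGHTS = {
        "organization": 0.30,
        "word_choice": 0.25,
        "grammar_accuracy": 0.25,
        "mechanics": 0.20,
    }

    def generic_essay_score(features):
        """Score an essay from prompt-independent linguistic features."""
        return sum(GENERIC_WEIGHTS[name] * features[name] for name in GENERIC_WEIGHTS)

    # The same function, with the same weights, scores essays from any prompt.
    print(generic_essay_score(
        {"organization": 4.0, "word_choice": 3.5, "grammar_accuracy": 4.2, "mechanics": 3.8}
    ))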
Peer reviewed
PDF on ERIC Download full text
Weigle, Sara Cushing – ETS Research Report Series, 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the…
Descriptors: Scoring, English (Second Language), Second Language Instruction, Automation