Showing all 13 results
Papageorgiou, Spiros; Davis, Larry; Norris, John M.; Garcia Gomez, Pablo; Manna, Venessa F.; Monfils, Lora – Educational Testing Service, 2021
The TOEFL® Essentials™ test is a new English language proficiency test in the TOEFL® family of assessments. It measures foundational language skills and communication abilities in academic and general (daily life) contexts. The test covers the four language skills of reading, listening, writing, and speaking and is intended…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Language Proficiency
Papageorgiou, Spiros; Xu, Xiaoqiu; Timpe-Laughlin, Veronika; Dugdale, Deborah M. – Educational Testing Service, 2020
The purpose of this study is to examine the appropriateness of using the TOEFL Primary® tests to evaluate the language abilities of students learning English as a foreign language (EFL) through an online-delivered curriculum, the VIPKid Major Course (MC). Data include student test scores on the TOEFL Primary Listening and Reading tests…
Descriptors: Alignment (Education), Language Tests, English (Second Language), Second Language Learning
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for using e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternative criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Attali, Yigal – Educational Testing Service, 2011
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent essays. Previous research has found support for a three-factor structure of the e-rater features. This three-factor structure has an attractive hierarchical linguistic interpretation, with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
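The differential word use (DWU) measure described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not ETS's implementation: the function names, whitespace tokenization, and toy data below are all hypothetical, and the real system would use a large training corpus of scored essays.

```python
# Hypothetical sketch of the "differential word use" (DWU) measure: for each
# word, take the difference in its relative frequency between high-scored and
# low-scored training essays; an essay's DWU score is the mean of these
# differences over the words it uses.
from collections import Counter

def relative_frequencies(essays):
    """Relative frequency of each word across a set of essays (naive whitespace tokenization)."""
    counts = Counter(word for essay in essays for word in essay.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def dwu_score(essay, high_freq, low_freq):
    """Average over the essay's words of (relative freq in high-scored - in low-scored)."""
    words = essay.split()
    if not words:
        return 0.0
    return sum(high_freq.get(w, 0.0) - low_freq.get(w, 0.0) for w in words) / len(words)

# Toy usage: words common in high-scored essays push the score positive.
high = ["coherent argument with evidence", "evidence supports the argument"]
low = ["bad stuff here", "stuff is bad"]
hf, lf = relative_frequencies(high), relative_frequencies(low)
print(dwu_score("argument with evidence", hf, lf) > 0)  # prints True
```

As the abstract notes, a positive value indicates the essay leans toward vocabulary characteristic of high-scored essays, and a negative value toward that of low-scored essays.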
Sawaki, Yasuyo; Nissan, Susan – Educational Testing Service, 2009
The study investigated the criterion-related validity of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
Descriptors: Test Validity, Language Tests, English (Second Language), Computer Assisted Testing
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Xi, Xiaoming; Mollaun, Pam – Educational Testing Service, 2009
This study investigated the scoring of the Test of English as a Foreign Language™ Internet-based Test (TOEFL iBT™) Speaking section by bilingual or multilingual speakers of English and one or more Indian languages. We explored the extent to which raters from India, after being trained and certified, were able to score the Speaking section for…
Descriptors: Foreign Countries, English (Second Language), Internet, Language Tests
Liu, Ou Lydia; Schedl, Mary; Malloy, Jeanne; Kong, Nan – Educational Testing Service, 2009
The TOEFL iBT™ uses longer reading passages than the TOEFL® computer-based test (CBT) did, to better approximate academic reading in North American universities; as a result, the reading section contains fewer passages. A concern arising from this change is whether the decrease…
Descriptors: English (Second Language), Language Tests, Internet, Computer Assisted Testing
Swain, Merrill; Huang, Li-Shih; Barkaoui, Khaled; Brooks, Lindsay; Lapkin, Sharon – Educational Testing Service, 2009
This study responds to the Test of English as a Foreign Language™ (TOEFL®) research agenda concerning the need to understand the processes and knowledge that test-takers utilize. Specifically, it investigates the strategic behaviors test-takers reported using when taking the Speaking section of the TOEFL iBT™ (SSTiBT). It also investigates…
Descriptors: English (Second Language), Language Tests, Internet, Speech Skills
Educational Testing Service, 2008
The Test of English as a Foreign Language™, better known as TOEFL®, is designed to measure the English-language proficiency of people whose native language is not English. TOEFL scores are accepted by more than 6,000 colleges, universities, and licensing agencies in 130 countries. The test is also used by governments, and scholarship and…
Descriptors: English (Second Language), Language Proficiency, Language Tests, Computer Assisted Testing
Breland, Hunter; Lee, Yong-Won; Najarian, Michelle; Muraki, Eiji – Educational Testing Service, 2004
This investigation of the comparability of writing assessment prompts was conducted in two phases. In an exploratory Phase I, 47 writing prompts administered in the computer-based Test of English as a Foreign Language™ (TOEFL® CBT) from July through December 1998 were examined. Logistic regression procedures were used to estimate prompt…
Descriptors: Writing Evaluation, Quality Control, Gender Differences, Writing Tests
Kostin, Irene – Educational Testing Service, 2004
The purpose of this study is to explore the relationship between a set of item characteristics and the difficulty of TOEFL® dialogue items. Identifying characteristics that are related to item difficulty has the potential to improve the efficiency of the item-writing process. The study employed 365 TOEFL dialogue items, which were coded on 49…
Descriptors: Statistical Analysis, Difficulty Level, Language Tests, English (Second Language)