50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 1 to 15 of 42 results
Peer reviewed
Katzenberger, Irit; Meilijson, Sara – Language Testing, 2014
The Katzenberger Hebrew Language Assessment for Preschool Children (henceforth: the KHLA) is the first comprehensive, standardized language assessment tool developed in Hebrew specifically for older preschoolers (4;0-5;11 years). The KHLA is a norm-referenced, Hebrew-specific assessment, based on well-established psycholinguistic principles, as…
Descriptors: Semitic Languages, Preschool Children, Language Impairments, Language Tests
Peer reviewed
Kuiken, Folkert; Vedder, Ineke – Language Testing, 2014
This study investigates the relationship in L2 writing between raters' judgments of communicative adequacy and linguistic complexity by means of six-point Likert scales, and general measures of linguistic performance. The participants were 39 learners of Italian and 32 of Dutch, who wrote two short argumentative essays. The same writing tasks…
Descriptors: Writing Evaluation, Second Language Learning, Evaluators, Native Language
Peer reviewed
Leaper, David A.; Riazi, Mehdi – Language Testing, 2014
This paper reports an investigation into how the prompt may influence the discourse of group oral tests. The group oral test, in which three or four participants are rated on their ability to discuss a prompt, is a format for assessing the spoken ability of language learners. In this study, 141 Japanese university students were videoed in 41 group…
Descriptors: Oral Language, Language Tests, Second Language Learning, Prompting
Peer reviewed
van Compernolle, Rémi A.; Zhang, Haomin – Language Testing, 2014
The focus of this paper is on the design, administration, and scoring of a dynamically administered elicited imitation test of L2 English morphology. Drawing on Vygotskian sociocultural psychology, particularly the concepts of zone of proximal development and dynamic assessment, we argue that support provided during the elicited imitation test…
Descriptors: Alternative Assessment, Imitation, English (Second Language), Language Tests
Peer reviewed
Ling, Guangming; Mollaun, Pamela; Xi, Xiaoming – Language Testing, 2014
The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect human performance in general, yet little is known about its effects on a human rater's scoring quality on constructed responses. In this study, we…
Descriptors: Evaluators, Fatigue (Biology), Scoring, Performance
Peer reviewed
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Peer reviewed
Jin, Tan; Mak, Barley – Language Testing, 2013
For Chinese as a second language (L2 Chinese), there has been little research into "distinguishing features" (Fulcher, 1996; Iwashita et al., 2008) used in scoring L2 Chinese speaking performance. The study reported here investigates the relationship between the distinguishing features of L2 Chinese spoken performances and the scores awarded by…
Descriptors: Second Languages, Second Language Learning, Chinese, Holistic Evaluation
Peer reviewed
Chapelle, Carol A. – Language Testing, 2012
According to Kane (2006), the argument-based framework is quite simple and involves two steps. First, specify the proposed interpretations and uses of the scores in some detail. Second, evaluate the overall plausibility of the proposed interpretations and uses. Based on experience gained in developing that validity argument, Chapelle, Enright, and…
Descriptors: Validity, Language Tests, Test Interpretation, Test Use
Peer reviewed
Jin, Tan; Mak, Barley; Zhou, Pei – Language Testing, 2012
The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…
Descriptors: Speech Communication, Scoring, Test Interpretation, Second Language Learning
Peer reviewed
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
Peer reviewed
Goodwin, Amanda P.; Huggins, A. Corinne; Carlo, Maria; Malabonga, Valerie; Kenyon, Dorry; Louguit, Mohammed; August, Diane – Language Testing, 2012
This study describes the development and validation of the Extract the Base test (ETB), which assesses derivational morphological awareness. Scores on this test were validated for 580 monolingual students and 373 Spanish-speaking English language learners (ELLs) in third through fifth grade. As part of the validation of the internal structure,…
Descriptors: Reading Comprehension, Speech Communication, Second Language Learning, Scoring
Peer reviewed
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David – Language Testing, 2012
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Descriptors: Scoring, Classification, Weighted Scores, Comparative Analysis
Peer reviewed
Pellicer-Sanchez, Ana; Schmitt, Norbert – Language Testing, 2012
Despite a number of research studies investigating the Yes-No vocabulary test format, one main question remains unanswered: What is the best scoring procedure to adjust for testee overestimation of vocabulary knowledge? Different scoring methodologies have been proposed based on the inclusion and selection of nonwords in the test. However, there…
Descriptors: Language Tests, Scoring, Reaction Time, Vocabulary Development
Peer reviewed
Wang, Huan; Choi, Ikkyu; Schmidgall, Jonathan; Bachman, Lyle F. – Language Testing, 2012
This review departs from current practice in reviewing tests in that it employs an "argument-based approach" to test validation to guide the review (e.g. Bachman, 2005; Kane, 2006; Mislevy, Steinberg, & Almond, 2002). Specifically, it follows an approach to test development and use that Bachman and Palmer (2010) call the process of "assessment…
Descriptors: Evidence, Stakeholders, Test Construction, Test Use
Peer reviewed
Carey, Michael D.; Mannell, Robert H.; Dunn, Peter K. – Language Testing, 2011
This study investigated factors that could affect inter-examiner reliability in the pronunciation assessment component of speaking tests. We hypothesized that the rating of pronunciation is susceptible to variation in assessment due to the amount of exposure examiners have to nonnative English accents. An inter-rater variability analysis was…
Descriptors: Oral Language, Pronunciation, Phonology, Interlanguage