ERIC Number: ED560389
Record Type: Non-Journal
Publication Date: 2013
Pages: 295
Abstractor: As Provided
Reference Count: N/A
ISBN: 978-1-3033-7724-2
Comparing Native and Non-Native Raters of US Federal Government Speaking Tests
Brooks, Rachel Lunde
ProQuest LLC, Ph.D. Dissertation, Georgetown University
Previous Language Testing research has largely reported that although many rater characteristics affect evaluations of language assessments (Reed & Cohen, 2001), being a native or non-native speaker rater does not significantly affect final ratings (Kim, 2009). In Second Language Acquisition, some researchers conclude that performance and perception differences exist between native and non-native speakers, while others contend that there is little conclusive evidence to support end-state differences. The US Federal Government requires speaking test raters to be both native and high-proficiency speakers of the test language (FBI, 2009). An exploration of how the native speaker is defined in research reveals a lack of common understanding, with the term referring both to an ideal speaker and to a native acquirer of a language. This study built on previous research by expanding the breadth of proficiency levels rated to include highly articulate examinees, regrouping the raters to represent three conceptions of nativeness (native/non-native speaker status, speaking proficiency, and first language), and examining final and linguistic category ratings to reveal the raters' scoring construct. Thirty FBI speaking testers, native and non-native speakers of English, rated 25 English Speaking Proficiency Tests. They assigned ratings for the overall test and for linguistic categories, including functions, organization, structures, vocabulary, fluency, pronunciation, and social/cultural appropriateness. Analyses using ANOVAs and MANOVAs indicated no significant difference between the native and non-native speaker groups. When raters were grouped by English proficiency level, lower-proficiency raters gave significantly lower ratings, both in the final rating and in many linguistic category ratings, although with a small effect size.
Non-native speakers rated comparably to native speakers, but significant differences emerged between rater groups when they were arranged by English speaking proficiency and by first language. The results suggested that rater training organizations should consider rater proficiency level rather than whether raters are native speakers. Additionally, they supported the theory that non-native speakers can demonstrate language acquisition equivalent to that of native speakers, at least when evaluating language. Finally, it was recommended that researchers and testing practitioners who use native speakers should clearly define and justify their use, or avoid the native speaker term altogether. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (1-800-521-0600). Web page:]
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site:
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A