50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 53 results
Peer reviewed | Direct link
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed | Direct link
Wardle, Elizabeth; Roozen, Kevin – Assessing Writing, 2012
This article offers one potential response to Yancey's (1999) call for a fourth wave of writing assessment able to capture writing development in all of its complexity. Based on an ecological perspective of literate development that situates students' growth as writers across multiple engagements with writing, including those outside of school,…
Descriptors: Writing Evaluation, Writing Tests, Ecology, Writing Instruction
Peer reviewed | Direct link
Slomp, David H. – Assessing Writing, 2012
This article discusses three sets of challenges involved in the assessment of writing from a developmental perspective. These challenges include defining a workable theory of development, developing a suitable construct, and overcoming limitations in technocentric approaches to writing assessment. In North America in recent years, a burgeoning…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Writing Ability
Peer reviewed | Direct link
Johnson, David; VanBrackle, Lewis – Assessing Writing, 2012
Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…
Descriptors: Writing Evaluation, Linguistics, Writing Tests, English (Second Language)
Peer reviewed | Direct link
Zainal, Azlin – Assessing Writing, 2012
The present study was conducted with a twofold purpose. First, I aim to apply the socio-cognitive framework by Shaw and Weir (2007) in order to validate a summative writing test used in a Malaysian ESL secondary school context. Secondly, by applying the framework I also aim to illustrate practical ways in which teachers can gather validity…
Descriptors: Foreign Countries, Student Evaluation, Writing Tests, Test Validity
Peer reviewed | Direct link
Good, Jennifer M.; Osborne, Kevin; Birchfield, Kelly – Assessing Writing, 2012
Writing is complex, and the assessment of writing is equally complex, particularly when considering the need to measure outcomes at the institutional level while providing meaningful data that inform curriculum reform and support learning at the discipline level. Using a multi-layered assessment that incorporates standardized measures of writing…
Descriptors: Curriculum Development, Writing Evaluation, Student Evaluation, Writing Tests
Peer reviewed | Direct link
Huang, Jinyan – Assessing Writing, 2012
Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…
Descriptors: Foreign Countries, Generalizability Theory, Writing Evaluation, Writing Tests
Peer reviewed | Direct link
Xu, Yun; Wu, Zunmin – Assessing Writing, 2012
This paper reports on a qualitative research study into the test-taking strategies employed in completing two picture prompt writing tasks--Situational Writing and Interpretational Writing in the Beijing Matriculation English Test. Think-aloud and retrospective interview protocols were collected from twelve Chinese students representing two key…
Descriptors: Foreign Countries, High School Students, Secondary School Teachers, Test Wiseness
Peer reviewed | Direct link
Serviss, Tricia – Assessing Writing, 2012
Drawing upon archival materials, I describe the history, design, and assessment of literacy tests from early 20th century New York state. Practitioners working with these early standardized writing tests grappled with tensions created by public Nativist sentiment, the legislation of "literacy," and calls to score the tests in standardized yet…
Descriptors: Literacy, Writing Tests, Standardized Tests, Scoring
Peer reviewed | Direct link
Behizadeh, Nadia; Engelhard, George, Jr. – Assessing Writing, 2011
The purpose of this study is to examine the interactions among measurement theories, writing theories, and writing assessments in the United States from an historical perspective. The assessment of writing provides a useful framework for examining how theories influence, and in some cases fail to influence actual practice. Two research traditions…
Descriptors: Writing (Composition), Intellectual Disciplines, Writing Evaluation, Writing Tests
Peer reviewed | Direct link
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young – Assessing Writing, 2011
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Educational Technology
Peer reviewed | Direct link
Bridgeman, Brent; Trapani, Catherine; Bivens-Tatum, Jennifer – Assessing Writing, 2011
Writing task variants can increase test security in high-stakes essay assessments by substantially increasing the pool of available writing stimuli and by making the specific writing task less predictable. A given prompt (parent) may be used as the basis for one or more different variants. Six variant types based on argument essay prompts from a…
Descriptors: Writing Evaluation, Writing Tests, Tests, Writing Instruction
Peer reviewed | Direct link
Knoch, Ute – Assessing Writing, 2011
Rating scales act as the de facto test construct in a writing assessment, although inevitably as a simplification of the construct (North, 2003). However, it is often not reported how rating scales are constructed. Unless the underlying framework of a rating scale takes some account of linguistic theory and research in the definition of…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Linguistic Theory
Peer reviewed | Direct link
Baker, Beverly A. – Assessing Writing, 2010
In high-stakes writing assessments, rater training in the use of a rating scale does not eliminate variability in grade attribution. This realisation has been accompanied by research that explores possible sources of rater variability, such as rater background or rating scale type. However, there has been little consideration thus far of…
Descriptors: Foreign Countries, Writing Evaluation, Writing Tests, Testing
Peer reviewed | Direct link
Gebril, Atta – Assessing Writing, 2010
Integrated tasks are currently employed in a number of L2 exams since they are perceived as an addition to the writing-only task type. Given this trend, the current study investigates composite score generalizability of both reading-to-write and writing-only tasks. For this purpose, a multivariate generalizability analysis is used to investigate…
Descriptors: Scoring, Scores, Second Language Instruction, Writing Evaluation
Previous Page | Next Page »
Pages: 1  |  2  |  3  |  4