50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First opened on May 15, 1964, ERIC continues its long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here (PDF).

Showing 16 to 30 of 172 results
Peer reviewed
Direct link
Shrestha, Prithvi; Coffin, Caroline – Assessing Writing, 2012
Supporting undergraduate students with their academic literacies has recently been a major focus in higher education in the UK. This paper explores the value of tutor mediation in the context of academic writing development among undergraduate business studies students in open and distance learning, following the dynamic assessment (DA) approach…
Descriptors: Learning Theories, Undergraduate Students, Writing (Composition), Distance Education
Peer reviewed
Direct link
Zainal, Azlin – Assessing Writing, 2012
The present study was conducted with a twofold purpose. First, I aim to apply the socio-cognitive framework by Shaw and Weir (2007) in order to validate a summative writing test used in a Malaysian ESL secondary school context. Second, by applying the framework I also aim to illustrate practical ways in which teachers can gather validity…
Descriptors: Foreign Countries, Student Evaluation, Writing Tests, Test Validity
Peer reviewed
Direct link
Wiseman, Cynthia S. – Assessing Writing, 2012
The decision-making behaviors of 8 raters when scoring 39 persuasive and 39 narrative essays written by second language learners were examined, first using Rasch analysis and then through think-aloud protocols. Results based on Rasch analysis and think-aloud protocols recorded by raters as they were scoring holistically and analytically suggested…
Descriptors: Self Concept, Protocol Analysis, Scoring, Item Response Theory
Peer reviewed
Direct link
Good, Jennifer M.; Osborne, Kevin; Birchfield, Kelly – Assessing Writing, 2012
Writing is complex, and the assessment of writing is equally complex, particularly when considering the need to measure outcomes at the institutional level while providing meaningful data that informs curriculum reform and supports learning at the discipline level. Using a multi-layered assessment that incorporates standardized measures of writing…
Descriptors: Curriculum Development, Writing Evaluation, Student Evaluation, Writing Tests
Peer reviewed
Direct link
Huang, Jinyan – Assessing Writing, 2012
Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…
Descriptors: Foreign Countries, Generalizability Theory, Writing Evaluation, Writing Tests
Peer reviewed
Direct link
Xu, Yun; Wu, Zunmin – Assessing Writing, 2012
This paper reports on a qualitative research study into the test-taking strategies employed in completing two picture-prompt writing tasks, Situational Writing and Interpretational Writing, in the Beijing Matriculation English Test. Think-aloud and retrospective interview protocols were collected from twelve Chinese students representing two key…
Descriptors: Foreign Countries, High School Students, Secondary School Teachers, Test Wiseness
Peer reviewed
Direct link
Serviss, Tricia – Assessing Writing, 2012
Drawing upon archival materials, I describe the history, design, and assessment of literacy tests from early 20th century New York state. Practitioners working with these early standardized writing tests grappled with tensions created by public Nativist sentiment, the legislation of "literacy," and calls to score the tests in standardized yet…
Descriptors: Literacy, Writing Tests, Standardized Tests, Scoring
Peer reviewed
Direct link
DelleBovi, Betsy M. – Assessing Writing, 2012
This action research answers the question: How can literacy professors provide effective training in evaluating writing to preservice graduate education students? The study examines writing assessment instruction in the context of a literacy course required of preservice teachers seeking secondary (7-12) certification in…
Descriptors: Literacy, Writing Evaluation, Graduate Students, Preservice Teachers
Peer reviewed
Direct link
Saxton, Emily; Belanger, Secret; Becker, William – Assessing Writing, 2012
The purpose of this study was to investigate the intra-rater and inter-rater reliability of the Critical Thinking Analytic Rubric (CTAR). The CTAR is composed of 6 rubric categories: interpretation, analysis, evaluation, inference, explanation, and disposition. To investigate inter-rater reliability, two trained raters scored four sets of…
Descriptors: Scoring Rubrics, Critical Thinking, Interrater Reliability, Performance Based Assessment
Peer reviewed
Direct link
Colombini, Crystal Broch; McBride, Maureen – Assessing Writing, 2012
Composition assessment scholars have exhibited uneasiness with the language of norming, grounded in distaste for the psychometric assumption that achieving consensus in a communal assessment setting is desirable even at the cost of individual pedagogical values. Responding to the problems of a "reliability" defined by homogeneous agreement,…
Descriptors: Writing Evaluation, Conflict, Test Norms, Reliability
Peer reviewed
Direct link
Harsch, Claudia; Martin, Guido – Assessing Writing, 2012
We explore how a local rating scale can be based on the Common European Framework (CEF) proficiency scales. As part of the scale validation (Alderson, 1991; Lumley, 2002), we examine which adaptations are needed to turn CEF proficiency descriptors into a rating scale for a local context, and to establish a practicable method to revise the initial…
Descriptors: Rating Scales, Validity, Media Adaptation, Feedback (Response)
Peer reviewed
Direct link
Behizadeh, Nadia; Engelhard, George, Jr. – Assessing Writing, 2011
The purpose of this study is to examine the interactions among measurement theories, writing theories, and writing assessments in the United States from a historical perspective. The assessment of writing provides a useful framework for examining how theories influence, and in some cases fail to influence, actual practice. Two research traditions…
Descriptors: Writing (Composition), Intellectual Disciplines, Writing Evaluation, Writing Tests
Peer reviewed
Direct link
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Assessing Writing, 2011
This study investigated the relationship of prompt characteristics and response features with essay scores on the SAT Reasoning Test. A sample of essays was coded on a variety of features regarding their length and content. Analyses included descriptive statistics and computation of effect sizes, correlations between essay features and scores, and…
Descriptors: Evidence, Critical Reading, Effect Size, College Entrance Examinations
Peer reviewed
Direct link
DiPardo, Anne; Storms, Barbara A.; Selland, Makenzie – Assessing Writing, 2011
This paper describes the process by which a rubric development team affiliated with the National Writing Project negotiated difficulties and dilemmas concerning an analytic scoring category initially termed Voice and later renamed Stance. Although these labels reference an aspect of student writing that many teachers value, the challenge of…
Descriptors: Student Evaluation, Scoring Rubrics, Scoring, Educational Assessment
Peer reviewed
Direct link
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young – Assessing Writing, 2011
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Educational Technology