50 Years of ERIC
The Education Resources Information Center (ERIC) is celebrating its 50th birthday! First launched on May 15, 1964, ERIC continues a long tradition of ongoing innovation and enhancement.

Learn more about the history of ERIC here.

Showing 1 to 15 of 16 results
Peer reviewed
DiPardo, Anne; Storms, Barbara A.; Selland, Makenzie – Assessing Writing, 2011
This paper describes the process by which a rubric development team affiliated with the National Writing Project negotiated difficulties and dilemmas concerning an analytic scoring category initially termed Voice and later renamed Stance. Although these labels reference an aspect of student writing that many teachers value, the challenge of…
Descriptors: Student Evaluation, Scoring Rubrics, Scoring, Educational Assessment
Peer reviewed
Kreth, Melinda; Crawford, Mary Ann; Taylor, Marcy; Brockman, Elizabeth – Assessing Writing, 2010
We present some key findings of a four-year, two-phase writing assessment project at Central Michigan University: Phase One (2002), a survey of faculty members (n = 115) and subsequent focus groups (n = 14); and Phase Two (2005), an evaluation of two samples of student writing (n = 635 and 632). Major findings of Phase One reported here include the…
Descriptors: Writing Evaluation, Critical Reading, Focus Groups, Writing Tests
Peer reviewed
Anthony, Jared Judd – Assessing Writing, 2009
Testing the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories, a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection
Peer reviewed
Worden, Dorothy L. – Assessing Writing, 2009
It is widely assumed that the constraints of timed essay exams will make it virtually impossible for students to engage in the major hallmarks of the writing process, especially revision, in testing situations. This paper presents the results of a study conducted at Washington State University in the Spring of 2008. The study examined the…
Descriptors: Timed Tests, Writing Evaluation, Writing Tests, Educational Assessment
Peer reviewed
Condon, William – Assessing Writing, 2009
Establishing the score or the placement as the first priority in a writing assessment leads to more reductive forms of writing assessment. However, if the prompts used in a direct test of writing were generative--that is, if they asked test-takers to analyze their own experiences as writers or learners, for example--the resulting texts would be…
Descriptors: Writing Evaluation, Writing Tests, Reflection, Undergraduate Students
Peer reviewed
Evans, Donna – Assessing Writing, 2009
This is the story of a research journey that follows the trail of a novel evaluand--"place." I examine place as mentioned by rising juniors in timed exams. Using a hybridized methodology--the qualitative approach of a hermeneutic dialectic process as described by Guba and Lincoln (1989), and the quantitative evidence of place mention--I query…
Descriptors: Student Motivation, Student Experience, Writing Evaluation, Writing Tests
Peer reviewed
Petersen, Jerry – Assessing Writing, 2009
Large-scale writing programs can add value to the traditional timed writing assessment by using aspects of the essays to assess the effectiveness of institutional goals, programs, and curriculums. The "six learning goals" prompt in this study represents an attempt to provide an accurate writing assessment that moves beyond scores. This paper…
Descriptors: Feedback (Response), Writing Evaluation, Student Evaluation, Writing Tests
Peer reviewed
Dappen, Leon; Isernhagen, Jody; Anderson, Sue – Assessing Writing, 2008
This paper examines statewide district writing achievement gain data from the Nebraska Statewide Writing Assessment system and implications for statewide writing assessment models. The writing assessment program is used to comply with the United States No Child Left Behind Act (NCLB), a federal effort to influence school…
Descriptors: Writing Evaluation, Student Evaluation, Federal Legislation, Writing Achievement
Peer reviewed
Anson, Chris M. – Assessing Writing, 2006
Writing across the curriculum (WAC) programs had their genesis in grass-roots efforts to promote attention to writing in all disciplinary areas. At first based on generic faculty-development activities with little regard to systemic and institutional concerns, WAC programs are now more often engaged in assessment and research of writing,…
Descriptors: Writing Across the Curriculum, Program Effectiveness, Program Development, Program Implementation
Peer reviewed
Hunter, Darryl; Mayenga, Charles; Gambell, Trevor – Assessing Writing, 2006
Classroom assessment of writing is considered from an anthropological perspective as practitioners' tool use. Pan Canadian data from a 2002 English teacher questionnaire (N = 4070) about self-reported assessment practices were analyzed in terms of tool choice and use by secondary teachers of different experience and qualification levels. Four…
Descriptors: Feedback (Response), Large Group Instruction, Homework, Student Evaluation
Peer reviewed
Burke, Jennifer N.; Cizek, Gregory J. – Assessing Writing, 2006
This study was conducted to gather evidence regarding effects of the mode of writing (handwritten vs. word-processed) on compositional quality in a sample of sixth grade students. Questionnaire data and essay scores were gathered to examine the effect of composition mode on essay scores of students of differing computer skill levels. The study was…
Descriptors: Computer Assisted Testing, High Stakes Tests, Writing Processes, Grade 6
Peer reviewed
Rutz, Carol; Lauer-Glebov, Jacqulyn – Assessing Writing, 2005
Using recent experience at Carleton College in Minnesota as a case history, the authors offer a model for assessment that provides more flexibility than the well-known assessment feedback loop, which assumes a linear progression within a hierarchical administrative structure. The proposed model is based on a double helix, with values and feedback…
Descriptors: Feedback (Response), Graduation Requirements, Faculty Development, College Faculty
Peer reviewed
Brown, Gavin T. L.; Glasswell, Kath; Harland, Don – Assessing Writing, 2004
Accuracy in the scoring of writing is critical if standardized tasks are to be used in a national assessment scheme. Three approaches to establishing accuracy (i.e., consensus, consistency, and measurement) exist, and large-scale assessment programs of primary school writing commonly demonstrate adjacent-agreement consensus rates of between 80% and…
Descriptors: Writing Evaluation, Student Evaluation, Educational Assessment, Writing Tests
Peer reviewed
Condon, William; Kelly-Riley, Diane – Assessing Writing, 2004
Washington State University (WSU) has developed two large-scale assessment programs to evaluate student learning outcomes. The largest, the Writing Assessment Program, diagnoses student writing abilities at entry and mid-career to determine the type of support needed to navigate the expectations of our writing-rich curriculum. The second, the…
Descriptors: Writing Evaluation, Student Evaluation, Writing Tests, Writing Ability
Peer reviewed
Slomp, David H.; Fuite, Jim – Assessing Writing, 2004
Specialists in the field of large-scale, high-stakes writing assessment have, over the last forty years, alternately discussed the issue of maximizing either reliability or validity in test design. Factors complicating the debate--such as Messick's (1989) expanded definition of validity, and the ethical implications of testing--are explored. An…
Descriptors: Information Theory, Writing Evaluation, Writing Tests, Test Validity
Previous Page | Next Page »
Pages: 1  |  2