Publication Date
| Date Range | Results |
| --- | --- |
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 5 |
| Since 2006 (last 10 years) | 13 |
| Since 1996 (last 20 years) | 16 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Evaluation Criteria | 19 |
| Writing Evaluation | 17 |
| Student Evaluation | 8 |
| Evaluation Methods | 7 |
| Writing Instruction | 6 |
| Essays | 4 |
| Higher Education | 4 |
| Scoring | 4 |
| College Instruction | 3 |
| Educational Technology | 3 |
Source
| Source | Results |
| --- | --- |
| Assessing Writing | 19 |
Author
| Author | Results |
| --- | --- |
| Black, Laurel | 1 |
| Bradshaw, William S. | 1 |
| Brown, Annie | 1 |
| Bruning, Roger H. | 1 |
| Deane, Paul | 1 |
| Dempsey, Michael S. | 1 |
| Elbow, Peter | 1 |
| Esfandiari, Rajab | 1 |
| Flach, Jennifer | 1 |
| Gambell, Trevor | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Journal Articles | 19 |
| Reports - Research | 8 |
| Reports - Evaluative | 6 |
| Opinion Papers | 2 |
| Reports - Descriptive | 2 |
| Collected Works - General | 1 |
| Information Analyses | 1 |
Education Level
| Education Level | Results |
| --- | --- |
| Higher Education | 10 |
| Postsecondary Education | 9 |
| Elementary Secondary Education | 6 |
| Secondary Education | 3 |
| Grade 12 | 1 |
Audience
| Audience | Results |
| --- | --- |
| Teachers | 3 |
Showing 1 to 15 of 19 results
Esfandiari, Rajab; Myford, Carol M. – Assessing Writing, 2013
We compared three assessor types (self-assessors, peer-assessors, and teacher assessors) to determine whether they differed in the levels of severity they exercised when rating essays. We analyzed the ratings of 194 assessors who evaluated 188 essays written by students enrolled in two state-run universities in Iran. The assessors employed a…
Descriptors: Foreign Countries, Severity (of Disability), Essays, Gender Differences
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, they primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Wiseman, Cynthia S. – Assessing Writing, 2012
The decision-making behaviors of 8 raters when scoring 39 persuasive and 39 narrative essays written by second language learners were examined, first using Rasch analysis and then, through think aloud protocols. Results based on Rasch analysis and think aloud protocols recorded by raters as they were scoring holistically and analytically suggested…
Descriptors: Self Concept, Protocol Analysis, Scoring, Item Response Theory
Morozov, Andrew – Assessing Writing, 2011
Mixed quantitative and qualitative methods were used to assess student attitudes toward the assessment criteria for higher-order critical and analytical thinking skills in writing-intensive curricula in first-year, general education courses. The courses varied in emphasis on critical thinking in the criteria used to assess writing. The analysis is…
Descriptors: Constructivism (Learning), Student Attitudes, Critical Thinking, Evaluation Methods
Rezaei, Ali Reza; Lovorn, Michael – Assessing Writing, 2010
This experimental project investigated the reliability and validity of rubrics in the assessment of students' written responses to a social science "writing prompt". The participants were asked to grade one of two samples of writing, assuming it was written by a graduate student. In fact, both samples were prepared by the authors. The first sample…
Descriptors: Spelling, Sentence Structure, Punctuation, Social Sciences
Hyland, Theresa Ann – Assessing Writing, 2009
Current concerns about academic plagiarism in student writing assume qualitative and quantitative differences in the writing of students for whom English is a first language (EL1) and English is a second language (EL2), but lack precision in measuring those differences. I examined the citation practices of EL1 and EL2 students in a timed writing…
Descriptors: Intellectual Property, Prior Learning, Rating Scales, Citations (References)
Dempsey, Michael S.; PytlikZillig, Lisa M.; Bruning, Roger H. – Assessing Writing, 2009
Writing is a highly valued skill that is often neglected in the classroom; one reason is that teachers often do not receive adequate training in writing assessment and instruction. Teachers, particularly preservice teachers, need practice in making detailed assessments of student writing and in building their confidence for assessing it,…
Descriptors: Feedback (Response), Preservice Teacher Education, Preservice Teachers, Writing Evaluation
Slomp, David H. – Assessing Writing, 2008
Test-based accountability programs are designed to promote improved standards of teaching and learning within the education systems to which they are connected. Brennan [Brennan, R. L. (2006). "Perspectives on the evolution and future of educational measurement." In R. L. Brennan (Ed.), "Educational measurement" (4th ed., pp. 1-16). Westport, CT:…
Descriptors: Writing Evaluation, Writing Tests, Evaluation Criteria, Program Effectiveness
Zhao, Cecilia Guanfang; Llosa, Lorena – Assessing Writing, 2008
Despite the debate among writing researchers about its viability as a pedagogical tool in writing instruction [e.g., Helms-Park, R., & Stapleton, P. (2003). "Questioning the importance of individualized voice in undergraduate L2 argumentative writing: An empirical study with pedagogical implications." "Journal of Second Language Writing," 12 (3),…
Descriptors: Writing Evaluation, Second Language Instruction, Writing Tests, Writing Instruction
Hunter, Darryl; Mayenga, Charles; Gambell, Trevor – Assessing Writing, 2006
Classroom assessment of writing is considered from an anthropological perspective as practitioners' tool use. Pan-Canadian data from a 2002 English teacher questionnaire (N = 4070) about self-reported assessment practices were analyzed in terms of tool choice and use by secondary teachers of different experience and qualification levels. Four…
Descriptors: Feedback (Response), Large Group Instruction, Homework, Student Evaluation
Martin, Deb; Penrod, Diane – Assessing Writing, 2006
In this article, the authors argue that evaluation courses grounded in assessment theory and situated within a cultural context of actual workplace practices enhance student learning. Giving students the tools that assessors, both formal and informal, use helps them learn to anticipate and generate those tools for writing situations they will…
Descriptors: Student Evaluation, Undergraduate Students, Work Environment, Writing Evaluation
Elbow, Peter – Assessing Writing, 2006
Inoue [Inoue, A. B. (2005). "Community-based assessment pedagogy." "Assessing Writing: An International Journal, 3", 208-238] sets up a radically experimental writing class as a kind of laboratory of assessment. He seeks to avoid the standard situation where a teacher unilaterally assesses and grades student writing, using only his or her own…
Descriptors: Feedback (Response), Evaluation Methods, Evaluation Criteria, Educational Environment
Self-Assessment of Writing in Independent Language Learning Programs: The Value of Annotated Samples
Brown, Annie – Assessing Writing, 2005
The availability of multi-media technologies in education has made the option of independent learning increasingly attractive. Whilst independent learning presents learners with a more flexible learning context, it also presents new challenges in assessment in that the onus is placed upon the learners themselves to monitor and evaluate their own…
Descriptors: Independent Study, Evaluation Methods, Language Skills, Self Evaluation (Individuals)
Sudweeks, Richard R.; Reeve, Suzanne; Bradshaw, William S. – Assessing Writing, 2004
A pilot study was conducted to evaluate and improve the rating procedure proposed for use in a research effort designed to assess the essay writing ability of college sophomores. Generalizability theory and the Many-Facet Rasch Model were each used to (a) estimate potential sources of error in the rating, (b) to obtain reliability estimates, and…
Descriptors: Generalizability Theory, College Students, Writing Ability, Writing Evaluation
