Showing 1 to 15 of 17 results
Peer reviewed
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Peer reviewed
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Peer reviewed
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES provides little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Assessing Writing, 2011
This study investigated the relationship of prompt characteristics and response features with essay scores on the SAT Reasoning Test. A sample of essays was coded on a variety of features regarding their length and content. Analyses included descriptive statistics and computation of effect sizes, correlations between essay features and scores, and…
Descriptors: Evidence, Critical Reading, Effect Size, College Entrance Examinations
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Peer reviewed
Anthony, Jared Judd – Assessing Writing, 2009
Testing the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories, a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection
Peer reviewed
Worden, Dorothy L. – Assessing Writing, 2009
It is widely assumed that the constraints of timed essay exams will make it virtually impossible for students to engage in the major hallmarks of the writing process, especially revision, in testing situations. This paper presents the results of a study conducted at Washington State University in the Spring of 2008. The study examined the…
Descriptors: Timed Tests, Writing Evaluation, Writing Tests, Educational Assessment
Peer reviewed
Evans, Donna – Assessing Writing, 2009
This is the story of a research journey that follows the trail of a novel evaluand--"place." I examine place as mentioned by rising juniors in timed exams. Using a hybridized methodology--the qualitative approach of a hermeneutic dialectic process as described by Guba and Lincoln (1989), and the quantitative evidence of place mention--I query…
Descriptors: Student Motivation, Student Experience, Writing Evaluation, Writing Tests
Peer reviewed
Petersen, Jerry – Assessing Writing, 2009
Large-scale writing programs can add value to the traditional timed writing assessment by using aspects of the essays to assess the effectiveness of institutional goals, programs, and curriculums. The "six learning goals" prompt in this study represents an attempt to provide an accurate writing assessment that moves beyond scores. This…
Descriptors: Feedback (Response), Writing Evaluation, Student Evaluation, Writing Tests
Peer reviewed
He, Ling; Shi, Ling – Assessing Writing, 2008
The present study interviewed 16 international students (13 from Mainland China and 3 from Taiwan) in a Canadian university to explore their perceptions and experiences of two standardized English writing tests: the TWE (Test of Written English) and the essay task in LPI (English Language Proficiency Index). In Western Canada, TWE is used as an…
Descriptors: Student Attitudes, Writing Tests, Foreign Countries, English (Second Language)
Peer reviewed
Burke, Jennifer N.; Cizek, Gregory J. – Assessing Writing, 2006
This study was conducted to gather evidence regarding effects of the mode of writing (handwritten vs. word-processed) on compositional quality in a sample of sixth grade students. Questionnaire data and essay scores were gathered to examine the effect of composition mode on essay scores of students of differing computer skill levels. The study was…
Descriptors: Computer Assisted Testing, High Stakes Tests, Writing Processes, Grade 6
Peer reviewed
East, Martin – Assessing Writing, 2006
Writing assessment essentially juxtaposes two elements: how "good writing" is to be defined, and how "good measurement" of that writing is to be carried out. The timed test is often used in large-scale L2 writing assessments because it is considered to provide reliable measurement. It is, however, highly inauthentic. One way of enhancing…
Descriptors: Writing Evaluation, Writing Tests, Timed Tests, Dictionaries