ERIC Number: ED603835
Record Type: Non-Journal
Publication Date: 2019
Pages: 14
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Automated Summarization Evaluation (ASE) Using Natural Language Processing Tools
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S.
Grantee Submission, Paper presented at the International Conference on Artificial Intelligence in Education (AIED) (2019)
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However, these models often rely on features derived from expert ratings of student summaries of specific source texts and are therefore not generalizable to summaries of new texts. Further, many of the models rely on proprietary tools that are not freely or publicly available, rendering replication difficult. In this study, we introduce an automated summarization evaluation (ASE) model that depends strictly on features of the source text or the summary, allowing for a purely text-based model of quality. This model effectively classifies summaries as either low or high quality with an accuracy above 80%. Importantly, the model was developed on a large number of source texts, allowing for generalizability across texts. Further, the features used in this study are freely and publicly available, affording replication. [This paper was published in: S. Isotani et al. (Eds.), "AIED 2019" (pp. 84-95). Switzerland: Springer.]
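The abstract describes scoring a summary purely from features of the source text and the summary itself. A minimal sketch of that general idea, assuming only two toy features (content-word overlap and a length-compression ratio) and hypothetical decision thresholds; the paper's actual ASE model uses a trained statistical model over NLP-tool features, not these rules:

```python
import re

def tokenize(text):
    """Lowercase word tokens via a simple regex (illustration only)."""
    return re.findall(r"[a-z']+", text.lower())

def features(source, summary):
    """Two toy text-based features computed from source and summary alone."""
    src, summ = tokenize(source), tokenize(summary)
    src_set, summ_set = set(src), set(summ)
    # Fraction of summary word types that also appear in the source.
    overlap = len(src_set & summ_set) / max(len(summ_set), 1)
    # Summary length relative to source length.
    compression = len(summ) / max(len(src), 1)
    return {"overlap": overlap, "compression": compression}

def classify(source, summary, overlap_min=0.5, compression_max=0.5):
    """Label a summary 'high' or 'low' quality using hypothetical thresholds."""
    f = features(source, summary)
    if f["overlap"] >= overlap_min and f["compression"] <= compression_max:
        return "high"
    return "low"
```

Because both features derive only from the texts themselves, the same scoring function applies to summaries of any source text, which is the generalizability property the abstract emphasizes.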
Publication Type: Speeches/Meeting Papers; Reports - Descriptive
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305A180261