Peer reviewed
ERIC Number: ED636233
Record Type: Non-Journal
Publication Date: 2023-Oct-14
Pages: 16
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs
Nicula, Bogdan; Dascalu, Mihai; Arner, Tracy; Balyan, Renu; McNamara, Danielle S.
Grantee Submission
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while understanding Science, Technology, Engineering, and Mathematics (STEM) texts. The experiments relied on a corpus of three datasets (N = 11,833) with self-explanations annotated on 4 dimensions: 3 comprehension strategies (i.e., bridging, elaboration, and paraphrasing) and overall quality. Besides FLAN-T5, we also considered GPT-3.5-turbo to establish a stronger baseline. Our experiments indicated that performance improved with fine-tuning, using a larger LLM, and providing examples via the prompt. Our best model considered a pretrained FLAN-T5 XXL model and obtained a weighted F1-score of 0.721, surpassing the 0.699 F1-score previously obtained using smaller models (i.e., RoBERTa). [This is the online first version of an article published in "Information."]
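To make the assessment setup described in the abstract concrete, the following is a minimal sketch (not the authors' code) of how an instruction-tuned FLAN-T5 model can be prompted to rate a self-explanation on one of the annotated dimensions. The model checkpoint, prompt wording, example text, and the 0-2 rating scale are assumptions for illustration only; the study's best results used a fine-tuned FLAN-T5 XXL with few-shot examples in the prompt.

    # Hedged sketch: prompting a FLAN-T5 model to rate a self-explanation.
    # Requires: pip install transformers torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "google/flan-t5-base"  # smaller stand-in; the paper's best model was FLAN-T5 XXL
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Hypothetical source sentence and student self-explanation
    source = "Water expands when it freezes because its molecules form a rigid lattice."
    self_explanation = ("So ice takes up more space than liquid water "
                        "because of how the molecules arrange themselves.")

    # Zero-shot prompt targeting the paraphrasing dimension (scale assumed for illustration)
    prompt = (
        "Rate how much the self-explanation paraphrases the source sentence "
        "on a scale from 0 (not at all) to 2 (mostly paraphrase).\n"
        f"Source: {source}\n"
        f"Self-explanation: {self_explanation}\n"
        "Rating:"
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In the study, such predictions were compared against human annotations and evaluated with a weighted F1-score; adding worked examples to the prompt and fine-tuning the model were the changes reported to improve performance.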
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED); Department of Education (ED); National Science Foundation (NSF), Division of Information and Intelligent Systems (IIS); National Science Foundation (NSF)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305A130124; R305A190063; REC0241144; 0735682