Showing 1 to 15 of 16 results
Peer reviewed
Direct link
Nicula, Bogdan; Dascalu, Mihai; Arner, Tracy; Balyan, Renu; McNamara, Danielle S. – Grantee Submission, 2023
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while…
Descriptors: Reading Comprehension, Language Processing, Models, STEM Education
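The Nicula et al. (2023) entry above applies FLAN-T5 to readers' self-explanations. As a rough illustration only, the sketch below shows zero-shot labeling of a self-explanation with the Hugging Face transformers library; the prompt wording and the three strategy labels are assumptions made here for illustration, not the instrument or training setup used in the study.

```python
# Hedged sketch: zero-shot labeling of a self-explanation with FLAN-T5 via
# Hugging Face transformers. The prompt wording and the strategy label set
# are illustrative assumptions, not the study's actual setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = (
    "A student read a science text and wrote this self-explanation:\n"
    "'The cell membrane lets some things in, like a gatekeeper at a door.'\n"
    "Which comprehension strategy does it mainly use: "
    "paraphrasing, bridging, or elaboration? Answer with one word."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```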
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
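The Nicula et al. (2021) entry above concerns automated paraphrase evaluation. A minimal lexical baseline is sketched below, scoring a student paraphrase against its source sentence with TF-IDF cosine similarity; the example sentences and the threshold are illustrative assumptions, and the study's actual identification model is not reproduced here.

```python
# Hedged sketch: a simple lexical baseline for paraphrase identification,
# scoring a student paraphrase against the source sentence with TF-IDF
# cosine similarity. This is a generic baseline, not the model in the entry above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = "Photosynthesis converts sunlight into chemical energy stored in glucose."
student = "Plants turn light from the sun into energy they keep as sugar."

vectors = TfidfVectorizer().fit_transform([source, student])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]

# A real system would combine many such indices in a trained classifier;
# the 0.2 threshold here is purely illustrative.
print(f"similarity = {score:.2f}", "paraphrase-like" if score > 0.2 else "too dissimilar")
```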
Peer reviewed
Direct link
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Journal of Artificial Intelligence in Education, 2020
For decades, educators have relied on readability metrics that tend to oversimplify dimensions of text difficulty. This study examines the potential of applying advanced artificial intelligence methods to the educational problem of assessing text difficulty. The combination of hierarchical machine learning and natural language processing (NLP) is…
Descriptors: Natural Language Processing, Artificial Intelligence, Man Machine Systems, Classification
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2020
For decades, educators have relied on readability metrics that tend to oversimplify dimensions of text difficulty. This study examines the potential of applying advanced artificial intelligence methods to the educational problem of assessing text difficulty. The combination of hierarchical machine learning and natural language processing (NLP) is…
Descriptors: Natural Language Processing, Artificial Intelligence, Man Machine Systems, Classification
Peer reviewed
Direct link
Fang, Ying; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
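The two Botarleanu et al. entries above describe automated summary scoring against reference texts, with human scores on a 4-point Likert scale. A minimal sketch of one generic approach appears below: two simple summary/source features feed a regression model. The features, toy data, and model choice are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch: predicting a 1-4 Likert summary score from two simple
# summary/source features (content overlap and length ratio). Features,
# toy data, and model are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import Ridge

def features(source: str, summary: str) -> list[float]:
    tfidf = TfidfVectorizer().fit([source, summary])
    sim = cosine_similarity(tfidf.transform([source]), tfidf.transform([summary]))[0, 0]
    length_ratio = len(summary.split()) / max(len(source.split()), 1)
    return [sim, length_ratio]

# Toy training data: (source text, student summary, human Likert score).
data = [
    ("The heart pumps blood through the body.", "The heart moves blood around.", 4),
    ("The heart pumps blood through the body.", "Hearts are red.", 1),
    ("Plants make food using sunlight.", "Plants use light to make food.", 4),
    ("Plants make food using sunlight.", "I like plants.", 2),
]
X = np.array([features(s, t) for s, t, _ in data])
y = np.array([score for _, _, score in data])

model = Ridge().fit(X, y)
print(model.predict([features("Plants make food using sunlight.",
                              "Sunlight lets plants make food.")]))
```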
Peer reviewed
PDF on ERIC Download full text
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2018
While hierarchical machine learning approaches have been used to classify texts into different content areas, this approach has, to our knowledge, not been used in the automated assessment of text difficulty. This study compared the accuracy of four classification machine learning approaches (flat, one-vs-one, one-vs-all, and hierarchical) using…
Descriptors: Artificial Intelligence, Classification, Comparative Analysis, Prediction
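The Balyan et al. (2018) entry above compares flat, one-vs-one, one-vs-all, and hierarchical classification. The sketch below sets up the first three with scikit-learn on synthetic data (a hierarchical variant would chain a coarse classifier with per-branch fine classifiers); the features and labels are placeholders, not the study's text-difficulty corpus.

```python
# Hedged sketch of three of the four classification setups named above
# (flat, one-vs-one, one-vs-all) on synthetic 4-class data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

setups = {
    "flat (multinomial)": LogisticRegression(max_iter=1000),
    "one-vs-one": OneVsOneClassifier(LogisticRegression(max_iter=1000)),
    "one-vs-all": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
}
for name, clf in setups.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold accuracy
    print(f"{name}: {acc:.3f}")
```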
Peer reviewed
Direct link
Banawan, Michelle P.; Shin, Jinnie; Arner, Tracy; Balyan, Renu; Leite, Walter L.; McNamara, Danielle S. – Grantee Submission, 2023
Academic discourse communities and learning circles are characterized by collaboration, sharing commonalities in terms of social interactions and language. The discourse of these communities is composed of jargon, common terminologies, and similarities in how they construe and communicate meaning. This study examines the extent to which discourse…
Descriptors: Algebra, Discourse Analysis, Semantics, Syntax
Peer reviewed
PDF on ERIC Download full text
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Educational Data Mining Society, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
Peer reviewed
PDF on ERIC Download full text
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2017
This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…
Descriptors: Artificial Intelligence, Natural Language Processing, Reading Comprehension, Literature
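The two Balyan et al. (2017) entries above compare multiple classification algorithms for predicting human ratings of student essays. The toy comparison below cross-validates a few common scikit-learn classifiers over TF-IDF features; the essays, ratings, and classifier list are illustrative assumptions rather than the study's setup.

```python
# Hedged sketch: comparing several classifiers on toy essay-rating data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

essays = ["The hero changes because ...", "This story is about a dog ...",
          "The author suggests that ...", "I did not like the book ..."] * 10
ratings = [2, 0, 2, 1] * 10  # toy human ratings on a 3-level scale

for clf in (LogisticRegression(max_iter=1000), LinearSVC(), RandomForestClassifier()):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    print(type(clf).__name__, cross_val_score(pipe, essays, ratings, cv=5).mean())
```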
Dascalu, Maria-Dorinela; Ruseti, Stefan; Dascalu, Mihai; McNamara, Danielle S.; Carabas, Mihai; Rebedea, Traian – Grantee Submission, 2021
The COVID-19 pandemic has changed the entire world, while the impact and usage of online learning environments have greatly increased. This paper presents a new version of the ReaderBench framework, grounded in Cohesion Network Analysis, which can be used to evaluate the online activity of students as a plug-in feature to Moodle. A Recurrent Neural…
Descriptors: COVID-19, Pandemics, Integrated Learning Systems, School Closing
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Peer reviewed
PDF on ERIC Download full text
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies
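The Kopp et al. (2017) entry above computes NLP indices for student-generated questions and uses machine learning to predict quality levels from a four-level taxonomy. The sketch below shows the general shape of such a pipeline with a handful of hand-crafted indices and a decision tree; the indices, labels, and data are assumptions, not the iSTART algorithm.

```python
# Hedged sketch: simple NLP indices for a student question plus a classifier
# over a four-level quality taxonomy. Indices and toy data are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

DEEP_STEMS = ("why", "how", "what if", "what would happen")

def indices(question: str) -> list[float]:
    q = question.lower()
    return [
        float(len(q.split())),                           # question length
        float(any(q.startswith(s) for s in DEEP_STEMS)), # deep question stem?
        float("because" in q or "cause" in q),           # causal language
    ]

questions = ["What is a cell?", "Why does the membrane let some molecules pass?",
             "Is this true?", "How would the cell change if the membrane failed?"]
levels = [1, 3, 0, 3]  # toy labels on a 0-3 (four-level) taxonomy

clf = DecisionTreeClassifier().fit(np.array([indices(q) for q in questions]), levels)
print(clf.predict([indices("Why do plants need sunlight?")]))
```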
Pages: 1 | 2