Showing 1 to 15 of 35 results
Peer reviewed
Corlatescu, Dragos; Watanabe, Micah; Ruseti, Stefan; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Peer reviewed
Fang, Ying; Li, Tong; Huynh, Linh; Christhilf, Katerina; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2023
Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has…
Descriptors: Literacy, Student Evaluation, Educational Games, Literacy Education
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Peer reviewed
Shin, Jinnie; Balyan, Renu; Banawan, Michelle P.; Arner, Tracy; Leite, Walter L.; McNamara, Danielle S. – Grantee Submission, 2023
Despite the proliferation of video-based instruction and its benefits--such as promoting student autonomy and self-paced learning--the complexities of online teaching remain a challenge. To be effective, educators require extensive training in digital teaching methodologies. As such, there's a pressing need to examine and comprehend the…
Descriptors: Algebra, Mathematics Instruction, Video Technology, Personal Autonomy
Peer reviewed
McNamara, Danielle S.; Arner, Tracy; Reilley, Elizabeth; Alvarado, Paul; Clark, Chani; Fikes, Thomas; Hale, Annie; Weigele, Betheny – Grantee Submission, 2022
Accounting for complex interactions between contextual variables and learners' individual differences in aptitudes and background requires building the means to connect and access learner data at large scales, across time, and in multiple contexts. This paper describes the ASU Learning@Scale (L@S) project to develop a digital learning network…
Descriptors: Electronic Learning, Educational Technology, Networks, Learning Analytics
Nicula, Bogdan; Dascalu, Mihai; Newton, Natalie N.; Orcutt, Ellen; McNamara, Danielle S. – Grantee Submission, 2021
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular…
Descriptors: Computational Linguistics, Feedback (Response), Classification, Learning Processes
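As an aside on the paraphrase-identification task named in the entry above: it is commonly framed as deciding whether two sentences express the same meaning. The sketch below is a minimal, generic baseline using TF-IDF cosine similarity with an arbitrary threshold; the example pairs, the threshold value, and the is_paraphrase helper are illustrative assumptions, not the authors' model.

```python
# Minimal paraphrase-identification sketch: TF-IDF cosine similarity with a
# threshold. Illustrative only; the threshold and example pairs are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def is_paraphrase(sentence_a: str, sentence_b: str, threshold: float = 0.5) -> bool:
    """Return True if the two sentences are lexically similar enough."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform([sentence_a, sentence_b])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return similarity >= threshold

pairs = [
    ("The cell produces energy in the mitochondria.",
     "Energy is generated by the mitochondria inside the cell."),
    ("The cell produces energy in the mitochondria.",
     "Plants absorb sunlight through their leaves."),
]
for a, b in pairs:
    print(is_paraphrase(a, b))
```

A production system would replace the lexical similarity with trained semantic models, but the input/output shape (sentence pair in, binary decision out) stays the same.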
Peer reviewed
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – International Journal of Artificial Intelligence in Education, 2020
For decades, educators have relied on readability metrics that tend to oversimplify dimensions of text difficulty. This study examines the potential of applying advanced artificial intelligence methods to the educational problem of assessing text difficulty. The combination of hierarchical machine learning and natural language processing (NLP) is…
Descriptors: Natural Language Processing, Artificial Intelligence, Man Machine Systems, Classification
Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2020
For decades, educators have relied on readability metrics that tend to oversimplify dimensions of text difficulty. This study examines the potential of applying advanced artificial intelligence methods to the educational problem of assessing text difficulty. The combination of hierarchical machine learning and natural language processing (NLP) is…
Descriptors: Natural Language Processing, Artificial Intelligence, Man Machine Systems, Classification
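The "hierarchical machine learning" named in the two entries above is often read as a coarse-to-fine cascade of classifiers. The sketch below illustrates that general idea under that assumption, with invented toy texts, labels, and TF-IDF features; it is not the authors' pipeline or feature set.

```python
# Sketch of a coarse-to-fine ("hierarchical") text-difficulty classifier:
# stage 1 predicts a broad band, stage 2 predicts a finer level within it.
# Toy texts, labels, and TF-IDF features are placeholders, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The cat sat on the mat.",                                          # easy band
    "The dog ran to the park and played.",                              # easy band
    "Photosynthesis converts light energy into chemical energy.",       # hard band
    "Mitochondria regulate cellular respiration and energy release.",   # hard band
]
coarse_labels = ["easy", "easy", "hard", "hard"]
fine_labels = ["grade_1", "grade_2", "grade_7", "grade_8"]

# Stage 1: coarse band classifier trained on all texts.
coarse_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
coarse_clf.fit(texts, coarse_labels)

# Stage 2: one fine-grained classifier per coarse band.
fine_clfs = {}
for band in set(coarse_labels):
    band_texts = [t for t, c in zip(texts, coarse_labels) if c == band]
    band_fine = [f for f, c in zip(fine_labels, coarse_labels) if c == band]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(band_texts, band_fine)
    fine_clfs[band] = clf

new_text = "Enzymes catalyze reactions that release energy in cells."
band = coarse_clf.predict([new_text])[0]
print(band, fine_clfs[band].predict([new_text])[0])
```

The cascade keeps each fine-grained classifier's decision space small, which is the usual motivation for hierarchical rather than flat classification of difficulty levels.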
Peer reviewed
Fang, Ying; Roscoe, Rod D.; McNamara, Danielle S. – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Jackson, G. Tanner; Boonthum-Denecke, Chutima; McNamara, Danielle S. – Grantee Submission, 2015
Intelligent Tutoring Systems (ITSs) are situated in a potential struggle between effective pedagogy and system enjoyment and engagement. iSTART, a reading strategy tutoring system in which students practice generating self-explanations and using reading strategies, employs two devices to engage the user. The first is natural language processing…
Descriptors: Natural Language Processing, Feedback (Response), Intelligent Tutoring Systems, Reading Strategies
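iSTART evaluates students' self-explanations with NLP and returns feedback. As a rough illustration only (not iSTART's actual algorithm), one simple way to score an explanation is by its similarity to the target sentence and the preceding text, with hand-picked thresholds mapping the score to a feedback band; every name, weight, and threshold below is an assumption.

```python
# Rough illustration of benchmark-similarity scoring of a self-explanation:
# compare the student's explanation to the target sentence and prior text with
# TF-IDF cosine similarity, then map the blended score to a feedback message.
# Weights, thresholds, and messages are invented; this is not iSTART's algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def feedback_level(explanation: str, target_sentence: str, prior_text: str) -> str:
    vectorizer = TfidfVectorizer().fit([explanation, target_sentence, prior_text])
    exp_vec = vectorizer.transform([explanation])
    target_sim = cosine_similarity(exp_vec, vectorizer.transform([target_sentence]))[0, 0]
    prior_sim = cosine_similarity(exp_vec, vectorizer.transform([prior_text]))[0, 0]
    score = 0.6 * target_sim + 0.4 * prior_sim
    if score < 0.1:
        return "irrelevant: try restating the sentence in your own words"
    if score < 0.3:
        return "minimal: add how this connects to earlier ideas in the text"
    return "good: explanation links the sentence to the passage"

print(feedback_level(
    "The cell needs the mitochondria to turn food into usable energy.",
    "Mitochondria convert nutrients into ATP.",
    "Cells require a constant supply of energy to function.",
))
```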
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Allen, Laura K.; Mills, Caitlin; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2019
This study examines the extent to which instructions to self-explain vs. "other"-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146)…
Descriptors: Language Processing, Science Instruction, Computational Linguistics, Teaching Methods
Peer reviewed Peer reviewed
Direct linkDirect link
Allen, Laura K.; Graesser, Arthur C.; McNamara, Danielle S. – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
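For the summary-scoring task described above (human ratings on a 4-point Likert scale against a reference text), a generic approach is to derive a few summary-versus-reference features and regress them onto the human score. The sketch below shows that pattern with invented toy data and a deliberately tiny feature set; it is not the study's corpus, features, or model.

```python
# Generic sketch of automated summary scoring: derive features from a
# (summary, reference text) pair and regress them onto a human 1-4 rating.
# Toy summaries, ratings, and the two-feature set are placeholders, not the
# corpus or model described in the study.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import Ridge

def features(summary: str, reference: str) -> list[float]:
    vectorizer = TfidfVectorizer().fit([summary, reference])
    sim = cosine_similarity(vectorizer.transform([summary]),
                            vectorizer.transform([reference]))[0, 0]
    length_ratio = len(summary.split()) / max(len(reference.split()), 1)
    return [sim, length_ratio]

reference = ("Mitochondria convert nutrients into ATP, the energy currency "
             "that powers most cellular processes.")
training = [
    ("Mitochondria turn nutrients into ATP to power the cell.", 4),
    ("Mitochondria make energy.", 3),
    ("Cells have many parts.", 2),
    ("I like biology class.", 1),
]
X = np.array([features(s, reference) for s, _ in training])
y = np.array([score for _, score in training])

model = Ridge().fit(X, y)
new_summary = "The mitochondria supply the ATP that cells use for energy."
print(round(float(model.predict([features(new_summary, reference)])[0]), 2))
```

Because the summary and reference differ in length, the similarity and length-ratio features stand in for the length-normalized comparisons that real scoring models must handle.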