Publication Date
  In 2024: 0
  Since 2023: 0
  Since 2020 (last 5 years): 0
  Since 2015 (last 10 years): 3
  Since 2005 (last 20 years): 3
Descriptor
  Feedback (Response): 3
  Intelligent Tutoring Systems: 3
  Reading Comprehension: 3
  Difficulty Level: 2
  Inferences: 2
  Metacognition: 2
  Outcomes of Education: 2
  Prompting: 2
  Scores: 2
  Self Evaluation (Individuals): 2
  Accuracy: 1
Author
  Johnson, Amy M.: 3
  McNamara, Danielle S.: 3
  Guerrero, Tricia A.: 2
  Likens, Aaron D.: 2
  McCarthy, Kathryn S.: 2
  Crossley, Scott A.: 1
  Kopp, Kristopher J.: 1
Publication Type
  Reports - Research: 3
  Journal Articles: 1
  Speeches/Meeting Papers: 1

McCarthy, Kathryn S.; Likens, Aaron D.; Johnson, Amy M.; Guerrero, Tricia A.; McNamara, Danielle S. – Grantee Submission, 2018
Research suggests that promoting metacognitive awareness can increase performance in, and learning from, intelligent tutoring systems (ITSs). The current work examines the effects of two metacognitive prompts within iSTART, a reading comprehension strategy ITS in which students practice writing quality self-explanations. In addition to comparing…
Descriptors: Metacognition, Difficulty Level, Prompting, Intelligent Tutoring Systems

McCarthy, Kathryn S.; Likens, Aaron D.; Johnson, Amy M.; Guerrero, Tricia A.; McNamara, Danielle S. – International Journal of Artificial Intelligence in Education, 2018
Research suggests that promoting metacognitive awareness can increase performance in, and learning from, intelligent tutoring systems (ITSs). The current work examines the effects of two metacognitive prompts within iSTART, a reading comprehension strategy ITS in which students practice writing quality self-explanations. In addition to comparing…
Descriptors: Metacognition, Difficulty Level, Prompting, Intelligent Tutoring Systems

Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies