Showing 16 to 30 of 40 results
Peer reviewed
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research
Renu Balyan; Danielle S. McNamara; Scott A. Crossley; William Brown; Andrew J. Karter; Dean Schillinger – Grantee Submission, 2022
Online patient portals that facilitate communication between patient and provider can improve patients' medication adherence and health outcomes. The effectiveness of such web-based communication measures can be influenced by the health literacy (HL) of a patient. In the context of diabetes, low HL is associated with severe hypoglycemia and high…
Descriptors: Computational Linguistics, Patients, Physicians, Information Security
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Grantee Submission, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Danielle S. McNamara; Panayiota Kendeou – Grantee Submission, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. e-AWE is grounded in the fundamental assumption that e-AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)
Kathryn S. McCarthy; Rod D. Roscoe; Laura K. Allen; Aaron D. Likens; Danielle S. McNamara – Grantee Submission, 2022
The benefits of writing strategy feedback are well established. This study examined the extent to which adding spelling and grammar checkers supports writing and revision in comparison to providing writing strategy feedback alone. High school students (n = 119) wrote and revised six persuasive essays in Writing Pal, an automated writing evaluation…
Descriptors: High School Students, Automation, Writing Evaluation, Computer Software
Kathryn S. McCarthy; Eleanor F. Yan; Laura K. Allen; Allison N. Sonia; Joseph P. Magliano; Danielle S. McNamara – Grantee Submission, 2022
Few studies have explored how general skills in both reading and writing influence performance on integrated, source-based writing. The goal of the present study was to consider the relative contributions of reading and writing ability to performance on multiple-document integrative reading and writing tasks. Students in the U.S. (n=94) completed two tasks in…
Descriptors: Individual Differences, Reading Skills, Writing Skills, Reading Strategies
Joseph P. Magliano; Lauren Flynn; Daniel P. Feller; Kathryn S. McCarthy; Danielle S. McNamara; Laura Allen – Grantee Submission, 2022
The goal of this study was to assess the relationships between computational approaches to analyzing constructed responses made during reading and individual differences in the foundational skills of reading in college readers. We also explored whether these relationships were consistent across texts and samples collected at different institutions and…
Descriptors: Semantics, Computational Linguistics, Individual Differences, Reading Materials
Maria-Dorinela Dascalu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara; Stefan Trausan-Matu – Grantee Submission, 2022
The use of technology as a facilitator in learning environments has become increasingly prevalent with the global pandemic caused by COVID-19. As such, computer-supported collaborative learning (CSCL) has gained wider adoption in contrast to traditional learning methods. At the same time, the need for automated tools capable of assessing and…
Descriptors: Computational Linguistics, Longitudinal Studies, Technology Uses in Education, Teaching Methods
Danielle S. McNamara; Tracy Arner; Reese Butterfuss; Debshila Basu Mallick; Andrew S. Lan; Rod D. Roscoe; Henry L. Roediger; Richard G. Baraniuk – Grantee Submission, 2022
The learning sciences inherently involve interdisciplinary research with the overarching objective of advancing theories of learning and informing the design and implementation of effective instructional methods and learning technologies. In these endeavors, the learning sciences encompass diverse constructs, measures, processes, and outcomes…
Descriptors: Artificial Intelligence, Learning Processes, Learning Motivation, Educational Research
Anna E. Mason; Jason L. G. Braasch; Daphne Greenberg; Erica D. Kessler; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
This study examined the extent to which prior beliefs and reading instructions impacted elements of a reader's mental representation of multiple texts. College students' beliefs about childhood vaccinations were assessed before reading two anti-vaccine and two pro-vaccine texts. Participants in the experimental condition read for the purpose of…
Descriptors: College Students, Beliefs, Immunization Programs, Vocabulary
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express oneself concisely and coherently is a crucial skill, both for academic purposes and for professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
Peer reviewed
Danielle S. McNamara; Tracy Arner; Elizabeth Reilley; Paul Alvarado; Chani Clark; Thomas Fikes; Annie Hale; Betheny Weigele – Grantee Submission, 2022
Accounting for complex interactions between contextual variables and learners' individual differences in aptitudes and background requires building the means to connect and access learner data at large scales, across time, and in multiple contexts. This paper describes the ASU Learning@Scale (L@S) project to develop a digital learning network…
Descriptors: Electronic Learning, Educational Technology, Networks, Learning Analytics
Kathryn S. McCarthy; Christian Soto; Cecilia Malbrán; Liliana Fonseca; Marian Simian; Danielle S. McNamara – Grantee Submission, 2018
Interactive Strategy Training for Active Reading and Thinking en Español, or iSTART-E, is a new intelligent tutoring system (ITS) that provides reading comprehension strategy training for Spanish speakers. This paper reports on studies evaluating the efficacy of iSTART-E in real-world classrooms in two different Spanish-speaking countries. In…
Descriptors: Reading Comprehension, Reading Instruction, Spanish Speaking, Intelligent Tutoring Systems
Peer reviewed
Mihai Dascalu; Scott A. Crossley; Danielle S. McNamara; Philippe Dessus; Stefan Trausan-Matu – Grantee Submission, 2018
A critical task for tutors is to provide learners with reading materials of suitable difficulty. The challenge of this endeavor is increased by students' individual variability and the multiple levels at which complexity can vary, arguing for the necessity of automated systems to support teachers. This chapter describes…
Descriptors: Reading Materials, Difficulty Level, Natural Language Processing, Artificial Intelligence
Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Renu Balyan; Kristopher J. Kopp; Danielle S. McNamara – Grantee Submission, 2018
This study assesses the extent to which machine learning techniques can be used to predict question quality. An algorithm based on textual complexity indices was previously developed to assess question quality and provide feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). In…
Descriptors: Questioning Techniques, Artificial Intelligence, Networks, Classification