Peer reviewed
ERIC Number: EJ1086120
Record Type: Journal
Publication Date: 2015-Dec
Pages: 13
Abstractor: As Provided
ISBN: N/A
ISSN: 1092-4388
EISSN: N/A
Integration of Partial Information within and across Modalities: Contributions to Spoken and Written Sentence Recognition
Smith, Kimberly G.; Fogerty, Daniel
Journal of Speech, Language, and Hearing Research, v58 n6 p1805-1817 Dec 2015
Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions.

Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions.

Results: Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. During simultaneous responses, participants received equal benefit from speech information when the text was moderately and significantly degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded.

Conclusions: The speech signal, compared with text, is robust against degradation, likely due to its continuous, versus discrete, features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals.
American Speech-Language-Hearing Association (ASHA). 10801 Rockville Pike, Rockville, MD 20852. Tel: 800-638-8255; Fax: 301-571-0457; e-mail: subscribe@asha.org; Web site: http://jslhr.asha.org
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: National Institute on Deafness and Other Communication Disorders (NIDCD)
Authoring Institution: N/A
Grant or Contract Numbers: R03DC012506