Peer reviewed
ERIC Number: EJ1067950
Record Type: Journal
Publication Date: 2015-Jun
Pages: 10
Abstractor: As Provided
Reference Count: N/A
ISBN: N/A
ISSN: 1092-4388
Effects of Context Type on Lipreading and Listening Performance and Implications for Sentence Processing
Spehar, Brent; Goebel, Stacey; Tye-Murray, Nancy
Journal of Speech, Language, and Hearing Research, v58 n3 p1093-1102 Jun 2015
Purpose: This study compared the use of 2 different types of contextual cues (sentence based and situation based) in 2 different modalities (visual only and auditory only).

Method: Twenty young adults were tested with the Illustrated Sentences Test (Tye-Murray, Hale, Spehar, Myerson, & Sommers, 2014) and the Speech Perception in Noise Test (Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984; Kalikow, Stevens, & Elliott, 1977) in the 2 modalities. The Illustrated Sentences Test presents sentences with no context and sentences accompanied by picture-based situational context cues. The Speech Perception in Noise Test presents sentences with low sentence-based context and sentences with high sentence-based context.

Results: Participants benefited from both types of context and received more benefit when testing occurred in the visual-only modality than when it occurred in the auditory-only modality. Participants' use of sentence-based context did not correlate with their use of situation-based context, and cue usage did not correlate between the 2 modalities.

Conclusions: The ability to use contextual cues appears to depend on the type of cue and the presentation modality of the target word(s). In a theoretical sense, the results suggest that models of word recognition and sentence processing should incorporate the influence of multiple sources of information and recognize that the 2 types of context have different influences on speech perception. In a clinical sense, the results suggest that aural rehabilitation programs might provide training to optimize the use of both kinds of contextual cues.
American Speech-Language-Hearing Association (ASHA). 10801 Rockville Pike, Rockville, MD 20852. Tel: 800-638-8255; Fax: 301-571-0457; e-mail: subscribe@asha.org; Web site: http://jslhr.asha.org
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: National Institutes of Health (DHHS)
Authoring Institution: N/A
IES Grant or Contract Numbers: AG018029