Publication Date
| In 2015 | 0 |
| Since 2014 | 0 |
| Since 2011 (last 5 years) | 0 |
| Since 2006 (last 10 years) | 16 |
| Since 1996 (last 20 years) | 18 |
Descriptor
| Eye Movements | 9 |
| Language Processing | 7 |
| Word Recognition | 7 |
| Cognitive Processes | 6 |
| Nouns | 4 |
| Semantics | 4 |
| Visual Stimuli | 4 |
| Cues | 3 |
| Dictionaries | 3 |
| Experimental Psychology | 3 |
Source
| Cognition | 11 |
| Journal of Experimental… | 7 |
Author
| Tanenhaus, Michael K. | 18 |
| Aslin, Richard N. | 6 |
| Arnold, Jennifer E. | 2 |
| Creel, Sarah C. | 2 |
| Dahan, Delphine | 2 |
| Magnuson, James S. | 2 |
| Revill, Kathleen Pirog | 2 |
| Salverda, Anne Pier | 2 |
| Bennetto, Loisa | 1 |
| Brown-Schmidt, Sarah | 1 |
Publication Type
| Journal Articles | 18 |
| Reports - Research | 13 |
| Reports - Evaluative | 4 |
| Reports - Descriptive | 1 |
Education Level
| Higher Education | 1 |
Showing 1 to 15 of 18 results
Silverman, Laura B.; Bennetto, Loisa; Campana, Ellen; Tanenhaus, Michael K. – Cognition, 2010
This study examined iconic gesture comprehension in autism, with the goal of assessing whether cross-modal processing difficulties impede speech-and-gesture integration. Participants were 19 adolescents with high functioning autism (HFA) and 20 typical controls matched on age, gender, verbal IQ, and socio-economic status (SES). Gesture…
Descriptors: Comparative Analysis, Eye Movements, Autism, Human Body
Grodner, Daniel J.; Klein, Natalie M.; Carbary, Kathleen M.; Tanenhaus, Michael K. – Cognition, 2010
Scalar inferences are commonly generated when a speaker uses a weaker expression rather than a stronger alternative, e.g., "John ate some of the apples" implies that he did not eat them all. This article describes a visual-world study investigating how and when perceivers compute these inferences. Participants followed spoken instructions…
Descriptors: Inferences, Context Effect, Nouns, Data Interpretation
Cook, Susan Wagner; Tanenhaus, Michael K. – Cognition, 2009
We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers' hand gestures, but not their speech, reflected properties of the particular…
Descriptors: Nonverbal Communication, Interpersonal Communication, Listening, Audiences
Kaiser, Elsi; Runner, Jeffrey T.; Sussman, Rachel S.; Tanenhaus, Michael K. – Cognition, 2009
We present four experiments on the interpretation of pronouns and reflexives in picture noun phrases with and without possessors (e.g. "Andrew's picture of him/himself, the picture of him/himself"). The experiments (two off-line studies and two visual-world eye-tracking experiments) investigate how syntactic and semantic factors guide the…
Descriptors: Language Patterns, Semantics, Nouns, Syntax
Watson, Duane G.; Arnold, Jennifer E.; Tanenhaus, Michael K. – Cognition, 2008
Importance and predictability each have been argued to contribute to acoustic prominence. To investigate whether these factors are independent or two aspects of the same phenomenon, naive participants played a verbal variant of Tic Tac Toe. Both importance and predictability contributed independently to the acoustic prominence of a word, but in…
Descriptors: Acoustics, Language Processing, Prediction, Games
Creel, Sarah C.; Aslin, Richard N.; Tanenhaus, Michael K. – Cognition, 2008
Two experiments used the head-mounted eye-tracking methodology to examine the time course of lexical activation in the face of a non-phonemic cue, talker variation. We found that lexical competition was attenuated by consistent talker differences between words that would otherwise be lexical competitors. In Experiment 1, some English cohort…
Descriptors: Vocabulary, Cues, Cognitive Processes, Eye Movements
Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N. – Cognition, 2008
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be…
Descriptors: Nouns, Word Recognition, Cognitive Processes, Form Classes (Languages)
Clayards, Meghan; Tanenhaus, Michael K.; Aslin, Richard N.; Jacobs, Robert A. – Cognition, 2008
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial…
Descriptors: Cues, Probability, Auditory Perception, Articulation (Speech)
Heller, Daphna; Grodner, Daniel; Tanenhaus, Michael K. – Cognition, 2008
We used the contrastive expectation associated with scalar adjectives to examine whether listeners are sensitive to the distinction between common and privileged information during real-time reference resolution. Our results show that listeners used this distinction to narrow the set of potential referents to objects with contrasts in common…
Descriptors: Language Processing, Listening Skills, Perspective Taking, Form Classes (Languages)
Brown-Schmidt, Sarah; Gunlogson, Christine; Tanenhaus, Michael K. – Cognition, 2008
Two experiments examined the role of common ground in the production and on-line interpretation of wh-questions such as "What's above the cow with shoes?" Experiment 1 examined unscripted conversation, and found that speakers consistently use wh-questions to inquire about information known only to the addressee. Addressees were sensitive to this…
Descriptors: Privacy, Discourse Analysis, Sentences, Interaction
Salverda, Anne Pier; Dahan, Delphine; Tanenhaus, Michael K.; Crosswhite, Katherine; Masharov, Mikhail; McDonough, Joyce – Cognition, 2007
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., "Put the cap next to the square") or utterance-final position (e.g., "Now click on the cap"). Displays consisted of the target picture (e.g., a cap), a…
Descriptors: Eye Movements, Word Recognition, Language Processing, Phonetics
Salverda, Anne Pier; Tanenhaus, Michael K. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2010
Two visual-world experiments evaluated the time course and use of orthographic information in spoken-word recognition using printed words as referents. Participants saw 4 words on a computer screen and listened to spoken sentences instructing them to click on one of the words (e.g., "Click on the word bead"). The printed words appeared 200 ms…
Descriptors: Sentences, Word Recognition, Universities, Undergraduate Students
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2009
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…
Descriptors: Semantics, Eye Movements, Competition, Word Recognition
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…
Descriptors: Semantics, Eye Movements, Competition, Word Recognition
Arnold, Jennifer E.; Kam, Carla L. Hudson; Tanenhaus, Michael K. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2007
Eye-tracking and gating experiments examined reference comprehension with fluent (Click on the red. . .) and disfluent (Click on [pause] thee uh red . . .) instructions while listeners viewed displays with 2 familiar (e.g., ice cream cones) and 2 unfamiliar objects (e.g., squiggly shapes). Disfluent instructions made unfamiliar objects more…
Descriptors: Inferences, Attribution Theory, Visual Stimuli, Instructional Effectiveness