Gestures play an important role both in communication and in individuals' cognitive processes. Yet theories disagree on the representational basis of gestures: whether they rest on amodal semantic representations or primarily on visual ones. To examine co-speech gesture production, studies commonly present events visually, for example by showing video clips or by having participants describe routes on a map. However, using only the visual modality to present events may create a modality-specific bias, leading speakers to rely heavily on visual representations as they conceptualize events in their minds.
This project investigates the influence of perceptual modality on motion event perception by comparing gesture production for the same motion events presented either visually or auditorily. It further examines how obligatory reliance on non-visual senses in motion event perception changes the way speakers gesture, by comparing sighted and blind individuals. The project aims to reveal previously unexamined interactions among modality of perception, mental imagery, and gesture, and to contribute to existing knowledge about multimodal utterances and their link to the interface between cognition and language.
Center for Language Studies (CLS) position, Radboud University Nijmegen (2017–2021)