Pragmatic and Multimodal Alignment in Social Interaction
Marlou Rasenberg
In social interaction, we try to share our ideas or thoughts with our conversation partner, or to coordinate our actions when working on a task together. We often succeed in doing so, though at times we fail to grasp the perspective of the other, which can lead to misunderstandings.
How can we account for such differences? What might seem to be ordinary, everyday dialogues are in fact sophisticated joint actions in which conversation partners take turns to interactively and incrementally build up mutual understanding. While turns have long been treated as speech-only constructs, in their most common realisation (face-to-face social interaction) they are multimodal, consisting of both verbal and non-verbal resources, such as eye gaze and gestures.
In this project we investigate whether certain characteristics of interactions can explain differences in the degree of mutual understanding that conversation partners achieve. The project focuses on interactive phenomena such as repair and backchanneling, as well as gestures. We also investigate the role of cross-speaker repetition of lexical phrases and gestures (i.e., lexical and gestural alignment).
To do so, we are using various behavioural methods. Furthermore, the project is linked to linguistic, neurobiological and theoretical investigations into the same research topic, as it is embedded within a larger team science project of the Language in Interaction consortium. More information about this project can be found on the Language in Interaction consortium website.