Ramscar, M. (Michael)
- Publications
Search Results
Now showing 1 - 1 of 1
- Quantifying the speech-gesture relation with massive multimodal datasets: informativity in time expressions (Public Library of Science, 2020) Pagan-Canovas, C. (Cristobal); Olza-Moreno, I. (Inés); Valenzuela, J. (Javier); Alcaraz-Carrión, D. (Daniel); Ramscar, M. (Michael)

  The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.