Full metadata record
DC Field | Value | Language |
---|---|---|
dc.creator | Pagán-Cánovas, C. (Cristóbal) | - |
dc.creator | Valenzuela, J. (Javier) | - |
dc.creator | Alcaraz-Carrión, D. (Daniel) | - |
dc.creator | Olza-Moreno, I. (Inés) | - |
dc.creator | Ramscar, M. (Michael) | - |
dc.date.accessioned | 2020-08-06T10:21:22Z | - |
dc.date.available | 2020-08-06T10:21:22Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Pagán Cánovas C, Valenzuela J, Alcaraz Carrión D, Olza I, Ramscar M (2020) Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions. PLoS ONE 15(6): e0233892 | es_ES |
dc.identifier.issn | 1932-6203 | - |
dc.identifier.uri | https://hdl.handle.net/10171/59144 | - |
dc.description.abstract | The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication. | es_ES |
dc.description.sponsorship | Funding support was provided by two I + D Knowledge Generation Grants from Spain’s Ministry of Science and Innovation and FEDER/UE funds, one to C.P.C. and J.V. (ref. PGC2018-097658-B-I00) and another to I.O. (ref. PGC2018-095703-B-I00); a EURIAS Fellowship from NetIAS and the Netherlands Institute for Advanced Study (C.P.C.); a Ramón y Cajal grant (C.P.C.); an Arts and Humanities Research Council doctoral scheme scholarship (D.A.C.); a fellowship from the SRUK On the move postdoctoral research program (D.A.C.). | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | Public Library of Science | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.subject | Human communication | es_ES |
dc.subject | Speech | es_ES |
dc.subject | Gesture | es_ES |
dc.subject | Visual communicative behavior | es_ES |
dc.title | Quantifying the speech-gesture relation with massive multimodal datasets: informativity in time expressions | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.editorial.note | This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. | es_ES |
dc.identifier.doi | https://doi.org/10.1371/journal.pone.0233892 | es_ES |