Full metadata record
dc.creator: Pagan-Canovas, C. (Cristobal)
dc.creator: Valenzuela, J. (Javier)
dc.creator: Alcaraz-Carrión, D. (Daniel)
dc.creator: Olza-Moreno, I. (Inés)
dc.creator: Ramscar, M. (Michael)
dc.date.accessioned: 2020-08-06T10:21:22Z
dc.date.available: 2020-08-06T10:21:22Z
dc.date.issued: 2020
dc.identifier.citation: Pagán Cánovas C, Valenzuela J, Alcaraz Carrión D, Olza I, Ramscar M (2020) Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions. PLoS ONE 15(6): e0233892
dc.identifier.issn: 1932-6203
dc.identifier.uri: https://hdl.handle.net/10171/59144
dc.description.abstract: The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.
dc.description.sponsorship: Funding support was provided by two I + D Knowledge Generation Grants from Spain's Ministry of Science and Innovation and FEDER/UE funds, one to C.P.C. and J.V. (ref. PGC2018-097658-B-I00) and another to I.O. (ref. PGC2018-095703-B-I00); a EURIAS Fellowship from NetIAS and the Netherlands Institute for Advanced Study (C.P.C.); a Ramón y Cajal grant (C.P.C.); an Arts and Humanities Research Council doctoral scheme scholarship (D.A.C.); and a fellowship from the SRUK On the move postdoctoral research program (D.A.C.).
dc.language.iso: eng
dc.publisher: Public Library of Science
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Human communication
dc.subject: Speech
dc.subject: Gesture
dc.subject: Visual communicative behavior
dc.title: Quantifying the speech-gesture relation with massive multimodal datasets: informativity in time expressions
dc.type: info:eu-repo/semantics/article
dc.editorial.note: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.identifier.doi: https://doi.org/10.1371/journal.pone.0233892

Files in This Item:
File: Olza_Plos_2020.pdf
Size: 1.29 MB
Format: Adobe PDF


Items in Dadun are protected by copyright, with all rights reserved, unless otherwise indicated.