Multimodal recognition of frustration during game-play with deep neural networks

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/127596
Full metadata record
DC Field | Value | Language
dc.contributor | Reconocimiento de Formas e Inteligencia Artificial | es_ES
dc.contributor.author | Fuente Torres, Carlos de la | -
dc.contributor.author | Castellanos, Francisco J. | -
dc.contributor.author | Valero-Mas, Jose J. | -
dc.contributor.author | Calvo-Zaragoza, Jorge | -
dc.contributor.other | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | es_ES
dc.date.accessioned | 2022-09-27T07:30:09Z | -
dc.date.available | 2022-09-27T07:30:09Z | -
dc.date.issued | 2022-09-27 | -
dc.identifier.citation | Multimedia Tools and Applications. 2023, 82: 13617-13636. https://doi.org/10.1007/s11042-022-13762-7 | es_ES
dc.identifier.issn | 1380-7501 (Print) | -
dc.identifier.issn | 1573-7721 (Online) | -
dc.identifier.uri | http://hdl.handle.net/10045/127596 | -
dc.description.abstract | Frustration, which is one aspect of the field of emotional recognition, is of particular interest to the video game industry as it provides information concerning each individual player's level of engagement. The use of non-invasive strategies to estimate this emotion is, therefore, a relevant line of research with a direct application to real-world scenarios. While several proposals regarding non-invasive frustration recognition can be found in the literature, they usually rely on hand-crafted features and rarely exploit the potential inherent to the combination of different sources of information. This work, therefore, presents a new approach that automatically extracts meaningful descriptors from individual audio and video sources of information using Deep Neural Networks (DNN) in order to then combine them, with the objective of detecting frustration in game-play scenarios. More precisely, two fusion modalities, namely decision-level and feature-level, are presented and compared with state-of-the-art methods, along with different DNN architectures optimized for each type of data. Experiments performed with a real-world audiovisual benchmarking corpus revealed that the multimodal proposals introduced herein are more suitable than those of a unimodal nature, and that their performance also surpasses that of other state-of-the-art approaches, with error rate improvements of between 40% and 90%. | es_ES
dc.description.sponsorship | Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The first author acknowledges the support from the Spanish "Ministerio de Educación y Formación Profesional" through grant 20CO1/000966. The second and third authors acknowledge support from the "Programa I+D+i de la Generalitat Valenciana" through grants ACIF/2019/042 and APOSTD/2020/256, respectively. | es_ES
dc.language | eng | es_ES
dc.publisher | Springer Nature | es_ES
dc.rights | © The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | es_ES
dc.subject | Multimodal | es_ES
dc.subject | Audiovisual | es_ES
dc.subject | Neural network | es_ES
dc.subject | Emotion | es_ES
dc.subject | Frustration | es_ES
dc.title | Multimodal recognition of frustration during game-play with deep neural networks | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.peerreviewed | si | es_ES
dc.identifier.doi | 10.1007/s11042-022-13762-7 | -
dc.relation.publisherversion | https://doi.org/10.1007/s11042-022-13762-7 | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
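
The abstract names two fusion modalities: feature-level fusion concatenates the DNN-extracted audio and video descriptors before a single classification stage, while decision-level fusion classifies each modality independently and merges the resulting predictions. The sketch below illustrates the general idea of both; the embedding sizes, layer widths, and two-class output are illustrative assumptions, not the architectures reported in the paper.

```python
# Minimal sketch of feature-level vs. decision-level fusion.
# All dimensions and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn

AUDIO_DIM, VIDEO_DIM, HIDDEN = 128, 256, 64  # assumed descriptor sizes


class FeatureLevelFusion(nn.Module):
    """Concatenate per-modality descriptors, then classify them jointly."""

    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(AUDIO_DIM + VIDEO_DIM, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, 2),  # frustrated / not frustrated
        )

    def forward(self, audio_emb, video_emb):
        # Fuse at the feature level: one joint representation per sample.
        return self.classifier(torch.cat([audio_emb, video_emb], dim=-1))


class DecisionLevelFusion(nn.Module):
    """Classify each modality separately, then combine the class scores."""

    def __init__(self):
        super().__init__()
        self.audio_head = nn.Linear(AUDIO_DIM, 2)
        self.video_head = nn.Linear(VIDEO_DIM, 2)

    def forward(self, audio_emb, video_emb):
        # Fuse at the decision level: average per-modality probabilities.
        audio_probs = self.audio_head(audio_emb).softmax(dim=-1)
        video_probs = self.video_head(video_emb).softmax(dim=-1)
        return (audio_probs + video_probs) / 2


# Usage with random stand-ins for the DNN-extracted descriptors:
audio, video = torch.randn(4, AUDIO_DIM), torch.randn(4, VIDEO_DIM)
print(FeatureLevelFusion()(audio, video).shape)   # torch.Size([4, 2])
print(DecisionLevelFusion()(audio, video).shape)  # torch.Size([4, 2])
```

Averaging the per-modality probabilities is only one simple decision-level combiner; weighted voting or a learned meta-classifier are common alternatives, and the paper's own combination scheme may differ.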
Appears in collections: INV - GRFIA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
de-la-Fuente_etal_2023_MultimedToolsAppl.pdf | | 1,14 MB | Adobe PDF


This item is licensed under a Creative Commons License.