Multimodal recognition of frustration during game-play with deep neural networks

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/127596
Item information
Title: Multimodal recognition of frustration during game-play with deep neural networks
Author(s): Fuente Torres, Carlos de la | Castellanos, Francisco J. | Valero-Mas, Jose J. | Calvo-Zaragoza, Jorge
Research group(s) or GITE: Reconocimiento de Formas e Inteligencia Artificial
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Multimodal | Audiovisual | Neural network | Emotion | Frustration
Publication date: 27-Sep-2022
Publisher: Springer Nature
Bibliographic citation: Multimedia Tools and Applications. 2023, 82: 13617-13636. https://doi.org/10.1007/s11042-022-13762-7
Abstract: Frustration, which is one aspect of the field of emotion recognition, is of particular interest to the video game industry as it provides information concerning each individual player’s level of engagement. The use of non-invasive strategies to estimate this emotion is, therefore, a relevant line of research with a direct application to real-world scenarios. While several proposals regarding non-invasive frustration recognition can be found in the literature, they usually rely on hand-crafted features and rarely exploit the potential inherent to the combination of different sources of information. This work, therefore, presents a new approach that automatically extracts meaningful descriptors from individual audio and video sources of information using Deep Neural Networks (DNN) in order to then combine them, with the objective of detecting frustration in Game-Play scenarios. More precisely, two fusion modalities, namely decision-level and feature-level, are presented and compared with state-of-the-art methods, along with different DNN architectures optimized for each type of data. Experiments performed with a real-world audiovisual benchmarking corpus revealed that the multimodal proposals introduced herein are more suitable than those of a unimodal nature, and that their performance also surpasses that of other state-of-the-art approaches, with error rate improvements of between 40% and 90%.
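The two fusion modalities compared in the abstract, feature-level (early) and decision-level (late) fusion, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes unimodal DNN encoders that already produce fixed-size audio and video embeddings, and all names and dimensions (FeatureLevelFusion, audio_dim=128, and so on) are illustrative.

    # Hypothetical sketch of feature-level vs. decision-level fusion,
    # assuming pre-computed audio and video embeddings; not the paper's code.
    import torch
    import torch.nn as nn

    class FeatureLevelFusion(nn.Module):
        """Concatenate audio and video embeddings, then classify jointly."""
        def __init__(self, audio_dim=128, video_dim=256, n_classes=2):
            super().__init__()
            self.classifier = nn.Sequential(
                nn.Linear(audio_dim + video_dim, 64),
                nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, audio_emb, video_emb):
            fused = torch.cat([audio_emb, video_emb], dim=-1)  # early fusion
            return self.classifier(fused)

    class DecisionLevelFusion(nn.Module):
        """Each modality predicts independently; class scores are averaged."""
        def __init__(self, audio_dim=128, video_dim=256, n_classes=2):
            super().__init__()
            self.audio_head = nn.Linear(audio_dim, n_classes)
            self.video_head = nn.Linear(video_dim, n_classes)

        def forward(self, audio_emb, video_emb):
            p_audio = self.audio_head(audio_emb).softmax(dim=-1)
            p_video = self.video_head(video_emb).softmax(dim=-1)
            return (p_audio + p_video) / 2  # late fusion by score averaging

    # Usage with dummy batches of 8 embeddings per modality
    audio_emb = torch.randn(8, 128)
    video_emb = torch.randn(8, 256)
    print(FeatureLevelFusion()(audio_emb, video_emb).shape)   # torch.Size([8, 2])
    print(DecisionLevelFusion()(audio_emb, video_emb).shape)  # torch.Size([8, 2])

The key design difference is where the modalities meet: feature-level fusion lets the classifier learn cross-modal interactions from the concatenated descriptors, while decision-level fusion keeps the per-modality predictors independent and only combines their output scores.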
Sponsor(s): Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The first author acknowledges the support from the Spanish “Ministerio de Educación y Formación Profesional” through grant 20CO1/000966. The second and third authors acknowledge support from the “Programa I+D+i de la Generalitat Valenciana” through grants ACIF/2019/042 and APOSTD/2020/256, respectively.
URI: http://hdl.handle.net/10045/127596
ISSN: 1380-7501 (Print) | 1573-7721 (Online)
DOI: 10.1007/s11042-022-13762-7
Language: eng
Type: info:eu-repo/semantics/article
Rights: © The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Peer reviewed: yes
Publisher's version: https://doi.org/10.1007/s11042-022-13762-7
Appears in collections: INV - GRFIA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
de-la-Fuente_etal_2023_MultimedToolsAppl.pdf | | 1,14 MB | Adobe PDF


This item is licensed under a Creative Commons License.