Weakly supervised human skin segmentation using guidance attention mechanisms

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/137170
Item information
Title: Weakly supervised human skin segmentation using guidance attention mechanisms
Author(s): Hashemifard, Kooshan | Climent-Pérez, Pau | Flórez-Revuelta, Francisco
Research group: Ambient Intelligence for Active and Healthy Ageing - Entornos Inteligentes para un Envejecimiento Activo y Saludable (AmI4AHA)
Centre, department or service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Skin segmentation | Attention mechanism | Weakly supervised training | Deep neural networks
Publication date: 13 Sep 2023
Publisher: Springer Nature
Bibliographic citation: Multimedia Tools and Applications. 2024, 83: 31177-31194. https://doi.org/10.1007/s11042-023-16590-5
Abstract: Human skin segmentation is a crucial task in computer vision and biometric systems, yet it poses several challenges such as variability in skin colour, pose, and illumination. This paper presents a robust data-driven skin segmentation method for a single image that addresses these challenges through the integration of contextual information and efficient network design. In addition to robustness and accuracy, the integration into real-time systems requires a careful balance between computational power, speed, and performance. The proposed method incorporates two attention modules, Body Attention and Skin Attention, which utilise contextual information to improve segmentation results. These modules draw attention to the desired areas, focusing on the body boundaries and skin pixels, respectively. Additionally, an efficient network architecture is employed in the encoder part to minimise computational power while retaining high performance. To handle the issue of noisy labels in skin datasets, the proposed method uses a weakly supervised training strategy relying on the Skin Attention module. The results of this study demonstrate that the proposed method is comparable to, or outperforms, state-of-the-art methods on benchmark datasets.
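The attention-gating idea the abstract describes can be sketched generically: a spatial attention map with values in (0, 1) is computed from a feature map and multiplied element-wise with it, so that later layers focus on the highlighted regions (body boundaries or skin pixels). The sketch below is purely illustrative and is not the authors' actual Body/Skin Attention architecture; the function name, the per-channel scoring weights, and the tensor shapes are all assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_gate(features, score_weights):
    """Generic spatial attention gate (illustrative, not the paper's module).

    features:      (C, H, W) feature map
    score_weights: (C,) weights used to score each spatial location
    Returns the gated feature map and the (H, W) attention mask.
    """
    # Score each spatial location by a weighted sum over channels,
    # then squash to (0, 1) to obtain a soft attention mask.
    scores = np.tensordot(score_weights, features, axes=([0], [0]))  # (H, W)
    mask = sigmoid(scores)
    # Broadcast the mask over the channel axis and gate the features.
    return features * mask[None, :, :], mask

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))   # toy (C, H, W) feature map
w = rng.standard_normal(4)               # toy per-channel scoring weights
gated, mask = spatial_attention_gate(feats, w)
```

In a weakly supervised setting such as the one the abstract describes, a mask of this kind could itself be supervised with coarse cues (e.g. body boundaries) rather than pixel-accurate labels, which is what makes noisy skin annotations tolerable.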
Sponsor(s): This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 861091.
URI: http://hdl.handle.net/10045/137170
ISSN: 1380-7501 (Print) | 1573-7721 (Online)
DOI: 10.1007/s11042-023-16590-5
Language: eng
Type: info:eu-repo/semantics/article
Rights: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Peer reviewed: yes
Publisher's version: https://doi.org/10.1007/s11042-023-16590-5
Appears in collections: Investigaciones financiadas por la UE
INV - AmI4AHA - Artículos de Revistas

Files in this item:
File: Hashemifard_etal_2024_MultimedToolsAppl.pdf | Size: 2.07 MB | Format: Adobe PDF


This item is licensed under a Creative Commons Licence.