Multi-modal Authentication Model for Occluded Faces in a Challenging Environment
Always use this identifier to cite or link this item
http://hdl.handle.net/10045/142589
Title: | Multi-modal Authentication Model for Occluded Faces in a Challenging Environment |
---|---|
Authors: | Jeong, Dahye | Choi, Eunbeen | Ahn, Hyeongjin | Martinez-Martin, Ester | Park, Eunil | Pobil, Angel P. del |
Research group or GITE: | Robótica y Visión Tridimensional (RoViT) |
Center, Department or Service: | Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial |
Keywords: | Human authentication | User identification | Multimodalities | Face | Voice | Demographic information |
Publication date: | 30-April-2024 |
Publisher: | IEEE |
Bibliographic citation: | IEEE Transactions on Emerging Topics in Computational Intelligence. 2024. https://doi.org/10.1109/TETCI.2024.3390058 |
Abstract: | Authentication systems are crucial in the digital era, providing reliable protection of personal information. Most authentication systems rely on a single modality, such as faces, fingerprints, or passwords. A system based on a single modality, however, degrades when the information from that modality is covered; in particular, face identification performs poorly when masks are worn, as during the COVID-19 pandemic. In this paper, we focus on a multi-modal approach to improve the performance of occluded face identification. Multi-modal authentication systems are key to building robust authentication because they can compensate for a missing or degraded modality in a uni-modal system. In this light, we propose DemoID, a multi-modal authentication system based on face and voice for human identification in a challenging environment. Moreover, we build a demographic module to efficiently handle the demographic information of individual faces. The experimental results showed an accuracy of 99% when using all modalities and an overall improvement of 5.41%–10.77% relative to uni-modal face models. Furthermore, our model demonstrated the highest performance compared to existing multi-modal models and also showed promising results on the real-world dataset constructed for this study. |
Sponsors: | This work was supported in part by Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education under Grant NRF-2022R1A6A3A13063417, in part by the Government of the Republic of Korea (MSIT), and in part by the National Research Foundation of Korea under Grant NRF-2023K2A9A1A01098773. |
URI: | http://hdl.handle.net/10045/142589 |
ISSN: | 2471-285X |
DOI: | 10.1109/TETCI.2024.3390058 |
Language: | eng |
Type: | info:eu-repo/semantics/article |
Rights: | © 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission |
Peer reviewed: | yes |
Publisher's version: | https://doi.org/10.1109/TETCI.2024.3390058 |
Appears in collection: | INV - RoViT - Artículos de Revistas |
Files in this item:
File | Description | Size | Format
---|---|---|---
Jeong_etal_2024_IEEE-TETCI_accepted.pdf | Accepted Manuscript (open access) | 1.32 MB | Adobe PDF
All documents deposited in RUA are protected by copyright. Some rights reserved.
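The abstract's core idea, that a second modality (voice) can compensate when the face is occluded, can be illustrated with a minimal score-level fusion sketch. This is a hypothetical toy example, not the paper's actual DemoID architecture: the embeddings, weights, and the `fuse_scores` helper are assumptions made for illustration only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fuse_scores(face_score, voice_score, face_weight=0.5):
    """Weighted score-level fusion.

    When the face is occluded (e.g. by a mask), lowering face_weight
    lets the clean voice modality dominate the final decision.
    """
    return face_weight * face_score + (1.0 - face_weight) * voice_score

# Toy 3-D embeddings: the occluded face matches poorly,
# while the voice sample matches the enrolled speaker well.
enrolled_face, probe_face = [1.0, 0.0, 0.0], [0.6, 0.8, 0.0]
enrolled_voice, probe_voice = [0.0, 1.0, 0.0], [0.0, 0.9, 0.1]

face_s = cosine(enrolled_face, probe_face)    # low: occlusion hurts the face match
voice_s = cosine(enrolled_voice, probe_voice)  # high: voice is unaffected
fused = fuse_scores(face_s, voice_s, face_weight=0.3)
print(round(face_s, 3), round(voice_s, 3), round(fused, 3))
```

Here the fused score ends up well above the face-only score, which is the qualitative effect the paper reports (a 5.41%–10.77% improvement over uni-modal face models); any real system would learn the fusion weights rather than fix them by hand.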