Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Please always use this identifier to cite or link to this item: http://hdl.handle.net/10045/139780
Full record
Dublin Core Field    Value    Language
dc.contributor    Advanced deveLopment and empIrical research on Software (ALISoft)    es_ES
dc.contributor.author    Villegas-Ch, William    -
dc.contributor.author    Jaramillo-Alcázar, Angel    -
dc.contributor.author    Luján-Mora, Sergio    -
dc.contributor.other    Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos    es_ES
dc.date.accessioned    2024-01-16T09:15:37Z    -
dc.date.available    2024-01-16T09:15:37Z    -
dc.date.issued    2024-01-16    -
dc.identifier.citation    Villegas-Ch W, Jaramillo-Alcázar A, Luján-Mora S. Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW. Big Data and Cognitive Computing. 2024; 8(1):8. https://doi.org/10.3390/bdcc8010008    es_ES
dc.identifier.issn    2504-2289    -
dc.identifier.uri    http://hdl.handle.net/10045/139780    -
dc.description.abstract    This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. The attacks were performed using the Fast Gradient Sign Method, the Projected Gradient Descent method, and the Carlini and Wagner attack to perturb the original images and analyze their impact on the model’s classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model’s vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. The article emphasizes the urgency of addressing the threat that adversarial examples pose to machine learning models, highlighting the relevance of effective countermeasures and image manipulation techniques in mitigating the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment of constantly evolving adversarial threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the Fast Gradient Sign Method and Projected Gradient Descent attacks, and an even larger 35% decrease with the Carlini and Wagner method.    es_ES
(An illustrative sketch of an FGSM attack and an image-manipulation defence follows this record.)
dc.language    eng    es_ES
dc.publisher    MDPI    es_ES
dc.rights    © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).    es_ES
dc.subject    Adversary examples    es_ES
dc.subject    Robustness of models    es_ES
dc.subject    Countermeasures    es_ES
dc.title    Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW    es_ES
dc.type    info:eu-repo/semantics/article    es_ES
dc.peerreviewed    si    es_ES
dc.identifier.doi    10.3390/bdcc8010008    -
dc.relation.publisherversion    https://doi.org/10.3390/bdcc8010008    es_ES
dc.rights.accessRights    info:eu-repo/semantics/openAccess    es_ES
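
As a rough illustration of the ideas summarised in the abstract above (and not the authors' own experimental code), the following sketch mounts a single-step FGSM attack on a pretrained ImageNet VGG16 and then applies Gaussian blurring as a simple image-manipulation defence. It assumes Python with PyTorch and torchvision >= 0.13 and a hypothetical input file example.jpg; the paper's actual datasets, epsilon values, and PGD/CW configurations are not reproduced here.

# Minimal FGSM sketch (illustrative only; not the authors' code).
# Assumptions: PyTorch, torchvision >= 0.13, hypothetical input file "example.jpg".
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import vgg16
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fold ImageNet normalization into the model so the attack works on [0, 1] pixels.
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
model = nn.Sequential(normalize, vgg16(weights="IMAGENET1K_V1")).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

def load_image(path):
    # Load an RGB image as a (1, 3, 224, 224) tensor with values in [0, 1].
    img = Image.open(path).convert("RGB")
    prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    return prep(img).unsqueeze(0).to(device)

def fgsm_attack(x, epsilon=0.03):
    # Fast Gradient Sign Method: one step of size epsilon along the sign of the
    # loss gradient with respect to the input pixels.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    label = logits.argmax(dim=1)                       # attack the model's own prediction
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()              # keep the result a valid image

def predict(x):
    with torch.no_grad():
        return model(x).argmax(dim=1).item()

if __name__ == "__main__":
    x = load_image("example.jpg")                      # hypothetical input file
    x_adv = fgsm_attack(x, epsilon=0.03)
    blur = T.GaussianBlur(kernel_size=5, sigma=1.0)    # simple image-manipulation defence
    print("clean prediction:      ", predict(x))
    print("adversarial prediction:", predict(x_adv))
    print("blurred adversarial:   ", predict(blur(x_adv)))

For the other attacks studied in the paper, PGD would iterate the same signed-gradient step several times, projecting back into an epsilon-ball around the original image after each step, while the Carlini and Wagner attack optimises the perturbation directly under a distance penalty; either could replace fgsm_attack above. Likewise, noise reduction or JPEG compression could stand in for the Gaussian blur as the preprocessing defence.
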
Appears in collection: INV - ALISoft - Artículos de Revistas

Files in this item:
File    Description    Size    Format
Villegas-Ch_etal_2024_BigDataCognComput.pdf        1.53 MB    Adobe PDF


All documents deposited in RUA are protected by copyright. Some rights reserved.