Álvarez, Enrique; Álvarez, Rafael; Cazorla, Miguel

Studying the Transferability of Non-Targeted Adversarial Attacks

Citation: E. Álvarez, R. Álvarez and M. Cazorla, "Studying the Transferability of Non-Targeted Adversarial Attacks," 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 2021, pp. 1-6, doi: 10.1109/IJCNN52387.2021.9534138.

URI: http://hdl.handle.net/10045/138459
DOI: 10.1109/IJCNN52387.2021.9534138
ISSN: 2161-4407
ISBN: 978-1-6654-3900-8

Abstract: There is no doubt that the use of machine learning is increasing every day. Its applications include self-driving cars, malware detection, recommendation systems and many other fields. Although the broad scope of this technology highlights the importance of its reliability, it has been shown that machine learning models can be vulnerable to adversarial attacks. In this paper, we study transferability, a property of these attacks, across different architectures and models, measuring how three adversarial attacks transfer based on the number of model parameters: Fast Gradient Sign Method, Projected Gradient Descent and HopSkipJumpAttack.

Keywords: Deep Learning; Adversarial Attacks; Convolutional Neural Networks

Publisher: IEEE
Type: info:eu-repo/semantics/conferenceObject