Exploiting the Relationship Between Visual and Textual Features in Social Networks for Image Classification with Zero-Shot Deep Learning

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/118582
Item information
Title: Exploiting the Relationship Between Visual and Textual Features in Social Networks for Image Classification with Zero-Shot Deep Learning
Authors: Lucas, Luis | Tomás, David | Garcia-Rodriguez, Jose
Research groups or GITE: Procesamiento del Lenguaje y Sistemas de Información (GPLSI) | Arquitecturas Inteligentes Aplicadas (AIA)
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: Multimodal classification | CLIP | Zero-shot classification | Unsupervised machine learning | Social media
Knowledge areas: Arquitectura y Tecnología de Computadores | Lenguajes y Sistemas Informáticos
Issue date: 23-Sep-2021
Publisher: Springer, Cham
Bibliographic citation: Lucas L., Tomás D., Garcia-Rodriguez J. (2022) Exploiting the Relationship Between Visual and Textual Features in Social Networks for Image Classification with Zero-Shot Deep Learning. In: Sanjurjo González H., Pastor López I., García Bringas P., Quintián H., Corchado E. (eds) 16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021). SOCO 2021. Advances in Intelligent Systems and Computing, vol 1401. Springer, Cham. https://doi.org/10.1007/978-3-030-87869-6_35
Abstract: One of the main issues related to unsupervised machine learning is the cost of processing and extracting useful information from large datasets. In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture in multimodal environments (image and text) from social media. For this purpose, we used the InstaNY100K dataset and proposed a validation approach based on sampling techniques. Our experiments, based on image classification tasks according to the labels of the Places dataset, are performed by first considering only the visual part, and then adding the associated texts as support. The results show that pre-trained neural networks such as CLIP can be successfully applied to image classification with little fine-tuning, and that taking into account the texts associated with the images can help to improve accuracy depending on the goal. Overall, the results point to a promising research direction.
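The zero-shot setup described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: it assumes the Hugging Face transformers implementation of CLIP, a commonly used public checkpoint, placeholder Places-style labels, a hypothetical image file and caption, and a simple averaging of image and caption scores as one possible way to combine the two modalities.

# Minimal sketch (assumptions noted above): zero-shot image classification with CLIP,
# optionally supported by the text associated with the social media post.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical subset of Places-style scene labels used as zero-shot classes.
labels = ["beach", "restaurant", "museum", "park", "skyscraper"]
prompts = [f"a photo of a {label}" for label in labels]

image = Image.open("instagram_post.jpg")  # hypothetical InstaNY100K image

# Visual branch: score the image against each label prompt.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
image_probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)

# Text branch: score the post's caption against the same label prompts in CLIP's
# text embedding space (one simple option; the paper's exact strategy may differ).
caption = "Sunset drinks by the sea"  # hypothetical associated text
text_inputs = processor(text=prompts + [caption], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
# The factor 100 approximates CLIP's learned logit scale before the softmax.
caption_probs = (100.0 * text_emb[-1] @ text_emb[:-1].T).softmax(dim=-1)

# Naive ensemble: average the two distributions and pick the best label.
combined = (image_probs + caption_probs) / 2
print("Predicted label:", labels[int(combined.argmax())])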
Sponsors: This work was funded by the University of Alicante UAPOSTCOVID19-10 grant for the project “Collecting and publishing open data for the revival of the tourism sector post-COVID-19”.
URI: http://hdl.handle.net/10045/118582
ISBN: 978-3-030-87868-9 | 978-3-030-87869-6
DOI: 10.1007/978-3-030-87869-6_35
Language: eng
Type: info:eu-repo/semantics/conferenceObject
Rights: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
Peer review: yes
Publisher's version: https://doi.org/10.1007/978-3-030-87869-6_35
Appears in collections: INV - GPLSI - Comunicaciones a Congresos, Conferencias, etc.
INV - AIA - Comunicaciones a Congresos, Conferencias, etc.

Files in this item:
File | Description | Size | Format
Lucas_etal_2021_SOCO_final.pdf | Final version (restricted access) | 698.19 kB | Adobe PDF
Lucas_etal_2021_SOCO_preprint.pdf | Preprint (open access) | 774.4 kB | Adobe PDF


All documents deposited in RUA are protected by copyright. Some rights reserved.