Explainability techniques applied to road traffic forecasting using Graph Neural Network models

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/135268
Full metadata record
DC Field | Value | Language
dc.contributor | Análisis y Visualización de Datos en Redes (ANVIDA) | es_ES
dc.contributor | Grupo de Investigación en Tecnologías Inteligentes para el Aprendizaje (Smart Learning) | es_ES
dc.contributor.author | García-Sigüenza, Javier | -
dc.contributor.author | Llorens Largo, Faraón | -
dc.contributor.author | Tortosa, Leandro | -
dc.contributor.author | Vicent, Jose F. | -
dc.contributor.other | Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial | es_ES
dc.date.accessioned | 2023-06-19T10:23:56Z | -
dc.date.available | 2023-06-19T10:23:56Z | -
dc.date.issued | 2023-06-16 | -
dc.identifier.citation | Information Sciences. 2023, 645: 119320. https://doi.org/10.1016/j.ins.2023.119320 | es_ES
dc.identifier.issn | 0020-0255 (Print) | -
dc.identifier.issn | 1872-6291 (Online) | -
dc.identifier.uri | http://hdl.handle.net/10045/135268 | -
dc.description.abstract | In recent years, several new Artificial Intelligence methods have been developed to make models more explainable and interpretable. These techniques essentially aim to bring transparency and traceability to black-box machine learning methods. "Black box" refers to the inability to explain why a model turns its input into its output, which may be problematic in some fields. To overcome this problem, our approach provides a comprehensive combination of predictive and explainability techniques. Firstly, we compared statistical regression, classic machine learning and deep learning models, reaching the conclusion that models based on deep learning exhibit greater accuracy. Of the great variety of deep learning models, the best predictive model on spatio-temporal traffic datasets was found to be the Adaptive Graph Convolutional Recurrent Network. Regarding the explainability technique, GraphMask shows a notably higher fidelity metric than other methods. The integration of both techniques was tested by means of experimental results, concluding that our approach improves deep learning model accuracy, making such models more transparent and interpretable. It allows us to discard up to 95% of the nodes used, facilitating an analysis of the model's behavior and thus improving our understanding of it. | es_ES
dc.language | eng | es_ES
dc.publisher | Elsevier | es_ES
dc.rights | © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | es_ES
dc.subject | Graph neural networks | es_ES
dc.subject | Deep learning | es_ES
dc.subject | Data analysis | es_ES
dc.subject | Explainability | es_ES
dc.subject | Traffic flow | es_ES
dc.title | Explainability techniques applied to road traffic forecasting using Graph Neural Network models | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.peerreviewed | si | es_ES
dc.identifier.doi | 10.1016/j.ins.2023.119320 | -
dc.relation.publisherversion | https://doi.org/10.1016/j.ins.2023.119320 | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
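The abstract mentions masking out graph connections (GraphMask-style explainability) and a fidelity metric measuring how closely predictions on the sparsified graph match those on the full graph. As a hypothetical illustration only, and not the paper's implementation, the idea can be sketched with NumPy using a toy one-step neighborhood-averaging layer in place of a trained GNN; the 30% keep-rate and the fidelity formula below are illustrative assumptions:

```python
import numpy as np

def normalize(adj):
    """Row-normalize an adjacency matrix after adding self-loops."""
    a = adj + np.eye(adj.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def propagate(adj, x):
    """One round of neighborhood averaging: a minimal stand-in for a GNN layer."""
    return normalize(adj) @ x

rng = np.random.default_rng(0)
n = 6
adj = (rng.random((n, n)) < 0.5).astype(float)  # random toy road graph
np.fill_diagonal(adj, 0)
x = rng.random((n, 1))                          # toy node features (e.g. flow)

full = propagate(adj, x)                        # predictions on the full graph

# Randomly keep ~30% of the edges, mimicking a learned edge mask that
# discards connections deemed unimportant for the prediction.
mask = adj.copy()
mask[adj > 0] = rng.random(int(adj.sum())) < 0.3

masked = propagate(mask, x)                     # predictions on the masked graph

# Fidelity-style score: 1 means the sparsified graph reproduces the
# full-graph predictions exactly; lower means information was lost.
fidelity = 1.0 - np.abs(full - masked).mean()
print(float(fidelity))
```

In the actual GraphMask technique the mask is learned per edge so as to maximize agreement with the original model, rather than sampled at random as here.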
Appears in collections: INV - Smart Learning - Artículos de Revistas
INV - ANVIDA - Artículos de Revistas

Files in this item:
File | Description | Size | Format
Garcia-Siguenza_etal_2023_InformatSci.pdf | | 6,25 MB | Adobe PDF


This item is licensed under a Creative Commons License.