Automatic annotation of protected attributes to support fairness optimization

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/140588
Item information
Title: Automatic annotation of protected attributes to support fairness optimization
Authors: Consuegra-Ayala, Juan Pablo | Gutiérrez, Yoan | Almeida-Cruz, Yudivian | Palomar, Manuel
Research Group/s: Procesamiento del Lenguaje y Sistemas de Información (GPLSI)
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | Universidad de Alicante. Instituto Universitario de Investigación Informática
Keywords: Fairness | Gender Bias | Natural Language Processing | Corpus | Optimization
Issue Date: 5-Feb-2024
Publisher: Elsevier
Citation: Information Sciences. 2024, 663: 120188. https://doi.org/10.1016/j.ins.2024.120188
Abstract: Recent research has shown that automating high-risk decision-making tasks without awareness of potential bias can result in unfair decisions. The most common approaches to this problem adopt definitions of fairness based on protected attributes. Precise annotation of protected attributes enables bias mitigation techniques to be applied to kinds of data that are commonly unlabeled (e.g., images, text, etc.). This paper proposes a framework to automatically annotate protected attributes in data collections. The framework provides a single interface for annotating protected attributes of different types (e.g., gender, race, etc.) and from different kinds of data. Internally, the framework coordinates multiple sensors to produce the final annotation. Several sensors for textual data are proposed, and an optimization search technique is designed to tune the framework to specific domains. Additionally, a small dataset of movie reviews, annotated with gender and sentiment, was created. Evaluation on text datasets from diverse domains shows the quality of the annotations and their effectiveness as a proxy for estimating fairness in datasets and machine learning models. The source code is available online for the research community.
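
The sensor-coordination design described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation (which is available in their published source code); all names here (Sensor, Annotator, pronoun_sensor, weights, threshold) are hypothetical, and the weighted vote stands in for whatever combination strategy the framework actually uses.

    # A minimal sketch of coordinating multiple sensors to annotate a
    # protected attribute. All names are hypothetical; see the authors'
    # released source code for the real interface.
    from dataclasses import dataclass
    from typing import Callable, Optional

    # A sensor maps a text to a score in [0, 1] for one protected-attribute
    # value (here, "female"), or None when it cannot decide.
    Sensor = Callable[[str], Optional[float]]

    def pronoun_sensor(text: str) -> Optional[float]:
        """Toy lexical sensor: fraction of gendered pronouns that are feminine."""
        tokens = text.lower().split()
        fem = sum(t in {"she", "her", "hers"} for t in tokens)
        masc = sum(t in {"he", "him", "his"} for t in tokens)
        total = fem + masc
        return fem / total if total else None

    @dataclass
    class Annotator:
        """Combines several sensors into one annotation via a weighted vote.

        The weights and threshold are the tunable parameters that a
        domain-specific optimization search, as the abstract describes,
        could fit on a small labeled sample.
        """
        sensors: list[Sensor]
        weights: list[float]
        threshold: float = 0.5

        def annotate(self, text: str) -> Optional[str]:
            scored = [(s(text), w) for s, w in zip(self.sensors, self.weights)]
            scored = [(v, w) for v, w in scored if v is not None]
            if not scored:
                return None  # no sensor fired; leave the item unlabeled
            agg = sum(v * w for v, w in scored) / sum(w for _, w in scored)
            return "female" if agg >= self.threshold else "male"

    annotator = Annotator(sensors=[pronoun_sensor], weights=[1.0])
    print(annotator.annotate("She said her review was fair."))  # -> female

Tuning the weights and threshold against a small annotated sample (such as the gender- and sentiment-labeled movie-review dataset mentioned above) is one natural place where the optimization search would operate.
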
Sponsor: This research has been partially funded by the University of Alicante and the University of Havana, the Spanish Ministry of Science and Innovation, the Generalitat Valenciana, and the European Regional Development Fund (ERDF) through the following funding. At the national level, the following projects were granted: TRIVIAL (PID2021-122263OB-C22); CORTEX (PID2021-123956OB-I00); CLEARTEXT (TED2021-130707B-I00); and SOCIALTRUST (PDC2022-133146-C22), funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by the ERDF “A way of making Europe”, by the European Union, or by the European Union NextGenerationEU/PRTR; funding was also received from the VIVES project “Pla de Tecnologies de la Llengua per al valencià” (2022/TL22/00215334) under the Projecte Estratègic per a la Recuperació i Transformació Econòmica (PERTE). At the regional level, the Generalitat Valenciana (Conselleria d'Educacio, Investigacio, Cultura i Esport) granted funding for NL4DISMIS (CIPROM/2021/21). Moreover, this work was supported by two COST Actions: CA19134 “Distributed Knowledge Graphs” and CA19142 “Leading Platform for European Citizens, Industries, Academia, and Policymakers in Media Accessibility”.
URI: http://hdl.handle.net/10045/140588
ISSN: 0020-0255 (Print) | 1872-6291 (Online)
DOI: 10.1016/j.ins.2024.120188
Language: eng
Type: info:eu-repo/semantics/article
Rights: © 2024 Elsevier Inc.
Peer Review: Yes
Publisher version: https://doi.org/10.1016/j.ins.2024.120188
Appears in Collections: INV - GPLSI - Artículos de Revistas

Files in This Item:
File | Description | Size | Format
Consuegra-Ayala_etal_2024_InfoSci_accepted.pdf | 24-month embargo (open access: 6 Feb 2026) | 1.53 MB | Adobe PDF
Consuegra-Ayala_etal_2024_InfoSci_final.pdf | Final version (restricted access) | 1.1 MB | Adobe PDF


Items in RUA are protected by copyright, with all rights reserved, unless otherwise indicated.