Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/121939
Item information
Title: Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach
Author(s): Sánchez-Cartagena, Víctor M. | Esplà-Gomis, Miquel | Pérez-Ortiz, Juan Antonio | Sánchez-Martínez, Felipe
Research group(s) or GITE: Transducens
Center, Department or Service: Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos
Keywords: Neural machine translation | Data augmentation | Multi-task learning approach
Field(s) of knowledge: Lenguajes y Sistemas Informáticos
Publication date: November 2021
Publisher: Association for Computational Linguistics
Bibliographic citation: Víctor M. Sánchez-Cartagena, Miquel Esplà-Gomis, Juan Antonio Pérez-Ortiz, and Felipe Sánchez-Martínez. 2021. Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8502–8516, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.669
Abstract: In the context of neural machine translation, data augmentation (DA) techniques may be used for generating additional training samples when the available parallel data are scarce. Many DA approaches aim at expanding the support of the empirical data distribution by generating new sentence pairs that contain infrequent words, thus making it closer to the true data distribution of parallel sentences. In this paper, we propose to follow a completely different approach and present a multi-task DA approach in which we generate new sentence pairs with transformations, such as reversing the order of the target sentence, which produce unfluent target sentences. During training, these augmented sentences are used as auxiliary tasks in a multi-task framework with the aim of providing new contexts where the target prefix is not informative enough to predict the next word. This strengthens the encoder and forces the decoder to pay more attention to the source representations of the encoder. Experiments carried out on six low-resource translation tasks show consistent improvements over the baseline and over DA methods aiming at extending the support of the empirical data distribution. The systems trained with our approach rely more on the source tokens, are more robust against domain shift and suffer fewer hallucinations.
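As a rough illustration of the kind of transformation described in the abstract, the sketch below builds auxiliary training pairs by reversing the target-side tokens so that the target prefix becomes uninformative, and marks them so they can be mixed with the original data in a multi-task setup. This is a minimal sketch under assumptions: the helper name make_reversed_target_samples and the <rev> task tag are illustrative and do not reflect the authors' actual implementation or data format.

```python
# Minimal illustration (not the authors' code) of creating auxiliary samples
# with an unfluent, reversed target side, as described in the abstract.
from typing import List, Tuple

Pair = Tuple[List[str], List[str]]  # (source tokens, target tokens)

def make_reversed_target_samples(pairs: List[Pair], tag: str = "<rev>") -> List[Pair]:
    """For each parallel pair, build an auxiliary sample whose target tokens
    are reversed. A task tag prepended to the source (an assumed convention)
    lets the model distinguish the auxiliary task from the main one."""
    augmented = []
    for src, tgt in pairs:
        augmented.append(([tag] + src, list(reversed(tgt))))
    return augmented

# Usage: concatenate original and auxiliary pairs and train a standard
# encoder-decoder on the combined data (multi-task training).
original = [(["la", "casa", "azul"], ["the", "blue", "house"])]
training_data = original + make_reversed_target_samples(original)
```

Because the reversed target offers no useful left-to-right prefix, predicting the next token in these auxiliary samples pushes the decoder to rely on the encoder's source representations, which is the intuition the abstract describes.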
Sponsor(s): Work funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement number 825299, project Global Under-Resourced Media Translation (GoURMET); and by Generalitat Valenciana through project GV/2021/064. The computational resources used for the experiments were funded by the European Regional Development Fund through project IDIFEDER/2020/003.
URI: http://hdl.handle.net/10045/121939
ISBN: 978-1-955917-09-4
DOI: 10.18653/v1/2021.emnlp-main.669
Language: eng
Type: info:eu-repo/semantics/conferenceObject
Rights: © 2021 Association for Computational Linguistics. Licensed under a Creative Commons Attribution 4.0 International License.
Peer reviewed: yes
Publisher's version: https://doi.org/10.18653/v1/2021.emnlp-main.669
Appears in collections: INV - TRANSDUCENS - Communications to Congresses, Conferences, etc.
Research funded by the EU

Files in this item:
File: Sanchez-Cartagena_etal_2021_ProceedingsEMNLP.pdf (1.29 MB, Adobe PDF)


This item is licensed under a Creative Commons License.