The challenging task of summary evaluation: an overview

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/71549
Full metadata record
DC Field | Value | Language
dc.contributor | Procesamiento del Lenguaje y Sistemas de Información (GPLSI) | es_ES
dc.contributor.author | Lloret, Elena | -
dc.contributor.author | Plaza Morales, Laura | -
dc.contributor.author | Aker, Ahmet | -
dc.contributor.other | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | es_ES
dc.date.accessioned | 2017-11-29T07:27:17Z | -
dc.date.available | 2017-11-29T07:27:17Z | -
dc.date.issued | 2017-09-02 | -
dc.identifier.citation | Language Resources & Evaluation. 2017. doi:10.1007/s10579-017-9399-2 | es_ES
dc.identifier.issn | 1574-020X (Print) | -
dc.identifier.issn | 1574-0218 (Online) | -
dc.identifier.uri | http://hdl.handle.net/10045/71549 | -
dc.description.abstract | Evaluation is crucial in the research and development of automatic summarization applications, in order to determine the appropriateness of a summary based on different criteria, such as the content it contains and the way it is presented. Performing an adequate evaluation is essential to ensure that automatic summaries are useful for the context and/or application they are generated for. To this end, researchers must be aware of the evaluation metrics, approaches, and datasets that are available, so that they can decide which are the most suitable to use, or propose new ones that overcome the limitations of existing methods. In this article, a critical and historical analysis of evaluation metrics, methods, and datasets for automatic summarization systems is presented; the strengths and weaknesses of evaluation efforts are discussed and the major challenges still to be solved are identified. The result is a clear, up-to-date overview of the evolution and progress of summarization evaluation, giving the reader useful insights into the past, present, and latest trends in the automatic evaluation of summaries. | es_ES
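For illustration, a minimal sketch of the kind of content-based evaluation metric the article surveys: a ROUGE-1-style unigram recall between a candidate summary and a reference summary. The function name and implementation are illustrative only; they are not taken from the paper or from any particular ROUGE toolkit.

```python
# Minimal ROUGE-1-style unigram recall sketch (illustrative, not the
# paper's method): what fraction of the reference summary's words does
# the candidate summary cover?
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Unigram recall of the reference, with clipped word counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each reference word counts at most as often as it appears in the candidate.
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

print(rouge1_recall("the cat sat", "the cat sat on the mat"))  # → 0.5
```

Real content-evaluation toolkits add stemming, stopword handling, n-grams beyond unigrams, and precision/F-measure variants, but the core comparison is this kind of clipped lexical overlap.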
dc.description.sponsorship | This research is partially funded by the European Commission under the Seventh Framework Programme for Research and Technological Development (FP7, 2007-2013) through the SAM project (FP7-611312); by the Spanish Government through the projects VoxPopuli (TIN2013-47090-C3-1-P) and Vemodalen (TIN2015-71785-R); by the Generalitat Valenciana through the project DIIM2.0 (PROMETEOII/2014/001); and by the Universidad Nacional de Educación a Distancia through the project “Modelado y síntesis automática de opiniones de usuario en redes sociales” (2014-001-UNED-PROY). | es_ES
dc.language | eng | es_ES
dc.publisher | Springer Science+Business Media B.V. | es_ES
dc.rights | © Springer Science+Business Media B.V. 2017 | es_ES
dc.subject | Text summarization | es_ES
dc.subject | Evaluation | es_ES
dc.subject | Content evaluation | es_ES
dc.subject | Readability | es_ES
dc.subject | Task-based evaluation | es_ES
dc.subject.other | Lenguajes y Sistemas Informáticos | es_ES
dc.title | The challenging task of summary evaluation: an overview | es_ES
dc.type | info:eu-repo/semantics/article | es_ES
dc.peerreviewed | yes | es_ES
dc.identifier.doi | 10.1007/s10579-017-9399-2 | -
dc.relation.publisherversion | http://dx.doi.org/10.1007/s10579-017-9399-2 | es_ES
dc.identifier.cvID | A9190760 | -
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/EC/FP7/611312 | es_ES
Appears in collections:
Investigaciones financiadas por la UE
INV - GPLSI - Artículos de Revistas

Files in this item:
File | Description | Size | Format
2017_Lloret_etal_LangResources&Evaluation_final.pdf | Final version (restricted access) | 837.79 kB | Adobe PDF
2017_Lloret_etal_LangResources&Evaluation_preprint.pdf | Preprint (open access) | 1.93 MB | Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.