Retrieving Music Semantics from Optical Music Recognition by Machine Translation

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/109930

Full item record

Dublin Core field | Value | Language
dc.contributor | Reconocimiento de Formas e Inteligencia Artificial | es_ES
dc.contributor.author | Thomae, Martha E. | -
dc.contributor.author | Ríos-Vila, Antonio | -
dc.contributor.author | Calvo-Zaragoza, Jorge | -
dc.contributor.author | Rizo, David | -
dc.contributor.author | Iñesta, José M. | -
dc.contributor.other | Universidad de Alicante. Departamento de Lenguajes y Sistemas Informáticos | es_ES
dc.date.accessioned | 2020-10-26T11:44:41Z | -
dc.date.available | 2020-10-26T11:44:41Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Thomae, Martha E., et al. “Retrieving Music Semantics from Optical Music Recognition by Machine Translation”. In: De Luca, Elsa; Flanders, Julia (Eds.). Music Encoding Conference Proceedings 26-29 May, 2020, Tufts University, Boston (USA), pp. 19-24. https://doi.org/10.17613/605z-nt78 | es_ES
dc.identifier.uri | http://hdl.handle.net/10045/109930 | -
dc.description.abstract | In this paper, we apply machine translation techniques to solve one of the central problems in the field of optical music recognition: extracting the semantics of a sequence of music characters. So far, this problem has been approached through heuristics and grammars, which are not generalizable solutions. We borrowed the seq2seq model and the attention mechanism from machine translation to address this issue. Given its example-based learning, the model proposed is meant to apply to different notations provided there is enough training data. The model was tested on the PrIMuS dataset of common Western music notation incipits. Its performance was satisfactory for the vast majority of examples, flawlessly extracting the musical meaning of 85% of the incipits in the test set—mapping correctly series of accidentals into key signatures, pairs of digits into time signatures, combinations of digits and rests into multi-measure rests, detecting implicit accidentals, etc. | es_ES
dc.description.sponsorship | This work is supported by the Spanish Ministry HISPAMUS project TIN2017-86576-R, partially funded by the EU, and by CIRMMT’s Inter-Centre Research Exchange Funding and McGill’s Graduate Mobility Award. | es_ES
dc.language | eng | es_ES
dc.publisher | Tufts University | es_ES
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives License | es_ES
dc.subject | Music semantics | es_ES
dc.subject | Optical music recognition | es_ES
dc.subject | Machine translation | es_ES
dc.subject.other | Lenguajes y Sistemas Informáticos | es_ES
dc.title | Retrieving Music Semantics from Optical Music Recognition by Machine Translation | es_ES
dc.type | info:eu-repo/semantics/conferenceObject | es_ES
dc.peerreviewed | si | es_ES
dc.identifier.doi | 10.17613/605z-nt78 | -
dc.relation.publisherversion | https://doi.org/10.17613/605z-nt78 | es_ES
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES
dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/TIN2017-86576-R | -
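The abstract above describes borrowing the seq2seq model and the attention mechanism from machine translation to map sequences of music characters to their semantics. As an illustration only (not the authors' implementation, which is described in the cited paper), the following is a minimal pure-Python sketch of the dot-product attention step at the core of such a decoder: each encoder state is scored against the current decoder state, the scores are softmax-normalized, and a weighted context vector is produced. All names and values here are hypothetical toy data.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, normalize the scores with softmax, and
    return the weighted sum (context vector) plus the weights."""
    scores = [sum(d * e for d, e in zip(decoder_state, h))
              for h in encoder_states]
    weights = softmax(scores)
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: three encoder states (e.g. for three input music
# symbols) and one decoder state; all vectors are made up.
encoder_states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
decoder_state = [1.0, 0.0]
context, weights = attention_context(decoder_state, encoder_states)
```

In a full seq2seq decoder, the context vector would be combined with the decoder state to predict the next output token (e.g. a semantic symbol such as a key signature), and the attention weights show which input characters the model attended to at that step.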
Appears in collection: INV - GRFIA - Comunicaciones a Congresos, Conferencias, etc.

Files in this item:

File | Description | Size | Format
Thomae_etal_2020_Music_encoding_conference_proceedings.pdf | | 545,63 kB | Adobe PDF


This item is licensed under a Creative Commons License.