Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors

Please use this identifier to cite or link to this item: http://hdl.handle.net/10045/108843
Full metadata record
DC Field	Value	Language
dc.contributor	Robótica y Visión Tridimensional (RoViT)	es_ES
dc.contributor.author	Bauer, Zuria	-
dc.contributor.author	Dominguez, Alejandro	-
dc.contributor.author	Cruz, Edmanuel	-
dc.contributor.author	Gomez-Donoso, Francisco	-
dc.contributor.author	Orts-Escolano, Sergio	-
dc.contributor.author	Cazorla, Miguel	-
dc.contributor.other	Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial	es_ES
dc.contributor.other	Universidad de Alicante. Instituto Universitario de Investigación Informática	es_ES
dc.date.accessioned	2020-09-04T10:32:34Z	-
dc.date.available	2020-09-04T10:32:34Z	-
dc.date.issued	2020-09	-
dc.identifier.citation	Pattern Recognition Letters. 2020, 137: 27-36. https://doi.org/10.1016/j.patrec.2019.03.008	es_ES
dc.identifier.issn	0167-8655 (Print)	-
dc.identifier.issn	1872-7344 (Online)	-
dc.identifier.uri	http://hdl.handle.net/10045/108843	-
dc.description.abstract	As estimated by the World Health Organization, millions of people live with some form of vision impairment. As a consequence, some of them face mobility problems in outdoor environments. With the aim of helping them, we propose in this work a system capable of delivering the position of potential obstacles in outdoor scenarios. Our approach is based on non-intrusive wearable devices and also focuses on being low-cost. First, a depth map of the scene is estimated from a color image, which provides 3D information about the environment. Then, an urban object detector is in charge of detecting the semantics of the objects in the scene. Finally, the three-dimensional and semantic data are summarized into a simpler representation of the potential obstacles the users have in front of them. This information is transmitted to the user through spoken or haptic feedback. Our system runs at about 3.8 fps and achieved an 87.99% mean accuracy in obstacle presence detection. Finally, we deployed our system in a pilot test involving an actual person with vision impairment, who validated the effectiveness of our proposal for improving their navigation capabilities outdoors.	es_ES
dc.description.sponsorship	This work has been supported by the Spanish Government grant TIN2016-76515-R, supported with FEDER funds, by the University of Alicante project GRE16-19, and by the Valencian Government project GV/2018/022. Edmanuel Cruz is funded by a Panamanian grant for PhD studies, IFARHU & SENACYT 270-2016-207. This work has also been supported by a Spanish grant for PhD studies, ACIF/2017/243. Thanks also to Nvidia for the generous donation of a Titan Xp and a Quadro P6000.	es_ES
dc.language	eng	es_ES
dc.publisher	Elsevier	es_ES
dc.rights	© 2019 Elsevier B.V.	es_ES
dc.subject	Visual impaired assistant	es_ES
dc.subject	Deep learning	es_ES
dc.subject	Outdoors	es_ES
dc.subject	Depth from monocular frames	es_ES
dc.subject.other	Ciencia de la Computación e Inteligencia Artificial	es_ES
dc.title	Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors	es_ES
dc.type	info:eu-repo/semantics/article	es_ES
dc.peerreviewed	si	es_ES
dc.identifier.doi	10.1016/j.patrec.2019.03.008	-
dc.relation.publisherversion	https://doi.org/10.1016/j.patrec.2019.03.008	es_ES
dc.rights.accessRights	info:eu-repo/semantics/openAccess	es_ES
dc.relation.projectID	info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/TIN2016-76515-R	-
Appears in collections:	INV - RoViT - Artículos de Revistas

Files in this item:
File	Description	Size	Format
Bauer_etal_2020_PatternRecognLett_final.pdf	Final version (restricted access)	2.6 MB	Adobe PDF
Bauer_etal_2020_PatternRecognLett_revised.pdf	Revised version (open access)	12.53 MB	Adobe PDF


All documents in RUA are protected by copyright. Some rights reserved.