Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors
Please use this identifier to cite or link to this item:
http://hdl.handle.net/10045/108843
Title: | Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors |
---|---|
Authors: | Bauer, Zuria | Dominguez, Alejandro | Cruz, Edmanuel | Gomez-Donoso, Francisco | Orts-Escolano, Sergio | Cazorla, Miguel |
Research group or GITE: | Robótica y Visión Tridimensional (RoViT) |
Center, Department or Service: | Universidad de Alicante. Departamento de Ciencia de la Computación e Inteligencia Artificial | Universidad de Alicante. Instituto Universitario de Investigación Informática |
Keywords: | Visual impaired assistant | Deep learning | Outdoors | Depth from monocular frames |
Knowledge areas: | Ciencia de la Computación e Inteligencia Artificial |
Issue date: | September 2020 |
Publisher: | Elsevier |
Bibliographic citation: | Pattern Recognition Letters. 2020, 137: 27-36. https://doi.org/10.1016/j.patrec.2019.03.008 |
Abstract: | As estimated by the World Health Organization, millions of people live with some form of vision impairment, and many of them face mobility problems in outdoor environments. To help them, we propose in this work a system capable of delivering the position of potential obstacles in outdoor scenarios. Our approach is based on non-intrusive wearable devices and also focuses on being low-cost. First, a depth map of the scene is estimated from a color image, which provides 3D information about the environment. Then, an urban object detector is in charge of detecting the semantics of the objects in the scene. Finally, the three-dimensional and semantic data are summarized into a simpler representation of the potential obstacles in front of the user. This information is transmitted to the user through spoken or haptic feedback. Our system runs at about 3.8 fps and achieved an 87.99% mean accuracy in obstacle presence detection. Finally, we deployed our system in a pilot test involving an actual person with vision impairment, who validated the effectiveness of our proposal for improving their navigation capabilities outdoors. |
Sponsors: | This work has been supported by the Spanish Government TIN2016-76515R Grant, supported with FEDER funds, the University of Alicante project GRE16-19, and by the Valencian Government project GV/2018/022. Edmanuel Cruz is funded by a Panamanian grant for PhD studies, IFARHU & SENACYT 270-2016-207. This work has also been supported by a Spanish grant for PhD studies, ACIF/2017/243. Thanks also to Nvidia for the generous donation of a Titan Xp and a Quadro P6000. |
URI: | http://hdl.handle.net/10045/108843 |
ISSN: | 0167-8655 (Print) | 1872-7344 (Online) |
DOI: | 10.1016/j.patrec.2019.03.008 |
Language: | eng |
Type: | info:eu-repo/semantics/article |
Rights: | © 2019 Elsevier B.V. |
Peer reviewed: | yes |
Publisher's version: | https://doi.org/10.1016/j.patrec.2019.03.008 |
Appears in collection: | INV - RoViT - Artículos de Revistas |
Files in this item:
File | Description | Size | Format |
---|---|---|---|
Bauer_etal_2020_PatternRecognLett_final.pdf | Final version (restricted access) | 2.6 MB | Adobe PDF |
Bauer_etal_2020_PatternRecognLett_revised.pdf | Revised version (open access) | 12.53 MB | Adobe PDF |
All documents deposited in RUA are protected by copyright. Some rights reserved.
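The abstract describes a three-stage pipeline: monocular depth estimation from a color image, urban object detection, and a summarization step that reduces both to a list of nearby obstacles. A minimal sketch of that flow is shown below; every function body, the per-row depth model, and the 4 m warning range are invented placeholders for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage pipeline from the abstract:
# monocular depth estimation -> urban object detection -> obstacle summary.
# All names and values here are illustrative stand-ins, not the paper's models.

def estimate_depth(rgb_frame):
    """Stand-in for a monocular depth network: depth in meters per pixel.
    Here depth simply grows with the row index."""
    h, w = len(rgb_frame), len(rgb_frame[0])
    return [[1.0 + 0.5 * r for _ in range(w)] for r in range(h)]

def detect_objects(rgb_frame):
    """Stand-in for an urban object detector: (label, (r0, c0, r1, c1)) boxes."""
    return [("car", (2, 2, 5, 5)), ("tree", (0, 6, 3, 8))]

def summarize_obstacles(depth, detections, max_range_m=4.0):
    """Keep only detections whose median depth is within the warning range;
    this list would then drive the spoken or haptic feedback."""
    warnings = []
    for label, (r0, c0, r1, c1) in detections:
        region = sorted(depth[r][c] for r in range(r0, r1) for c in range(c0, c1))
        median = region[len(region) // 2]
        if median <= max_range_m:
            warnings.append((label, median))
    return warnings

frame = [[(128, 128, 128)] * 10 for _ in range(10)]  # dummy 10x10 RGB image
obstacles = summarize_obstacles(estimate_depth(frame), detect_objects(frame))
print(obstacles)  # -> [('car', 2.5), ('tree', 1.5)]
```

In the real system each stand-in would be a deep network; the point of the sketch is only the data flow, where geometry (depth) and semantics (labels) are fused into a compact obstacle list before being voiced to the user.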