UnrealROX: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation

Please always use this identifier to cite or link to this item: http://hdl.handle.net/10045/113501
Item information
Title: UnrealROX: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation
Authors: Martínez González, Pablo | Oprea, Sergiu | Garcia-Garcia, Alberto | Jover-Álvarez, Álvaro | Orts-Escolano, Sergio | Garcia-Rodriguez, Jose
Rights holder: Universidad de Alicante
Research groups: Arquitecturas Inteligentes Aplicadas (AIA) | Robótica y Visión Tridimensional (RoViT)
Center, Department or Service: Universidad de Alicante. Departamento de Tecnología Informática y Computación
Keywords: Data Generator | Photorealism | Deep Learning | Robotics
Knowledge areas: Arquitectura y Tecnología de Computadores
Publication date: 2019
Publisher: Springer-Verlag
Abstract: Data-driven algorithms have surpassed traditional techniques in almost every aspect of robotic vision. Such algorithms need vast amounts of quality data to work properly after training. Gathering and annotating that sheer amount of data in the real world is a time-consuming and error-prone task, which limits both scale and quality. Synthetic data generation has therefore become increasingly popular, since it is faster to generate and can be annotated automatically. However, most current datasets and environments lack realism, interactions, and real-world detail. UnrealROX is an environment built on Unreal Engine 4 that aims to reduce that reality gap by leveraging hyperrealistic indoor scenes explored by robot agents that also interact with objects in a visually realistic manner in the simulated world. Photorealistic scenes and robots are rendered by Unreal Engine into a virtual reality headset that captures gaze, so a human operator can move the robot and use controllers for the robotic hands; scene information is dumped on a per-frame basis so it can be reproduced offline to generate raw data and ground-truth annotations. This virtual reality environment enables robotic vision researchers to generate realistic and visually plausible data with full ground truth for a wide variety of problems, such as class and instance semantic segmentation, object detection, depth estimation, visual grasping, and navigation.
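The per-frame record-and-replay workflow described in the abstract can be sketched as follows. This is a minimal illustration only, with hypothetical names (FrameRecord, dump_frame, replay are not part of the actual UnrealROX API, which records scene state inside Unreal Engine 4): the operator's session is serialized one file per frame, then read back in order for offline reproduction.

```python
import json
import os
from dataclasses import dataclass, asdict, field

@dataclass
class FrameRecord:
    """Hypothetical per-frame scene state: camera pose plus 6-DoF object poses."""
    frame_id: int
    camera_pose: list                 # [x, y, z, roll, pitch, yaw]
    object_poses: dict = field(default_factory=dict)  # object name -> 6-DoF pose

def dump_frame(record: FrameRecord, out_dir: str) -> str:
    """Write one frame's scene state to its own JSON file during the VR session."""
    path = os.path.join(out_dir, f"frame_{record.frame_id:06d}.json")
    with open(path, "w") as f:
        json.dump(asdict(record), f)
    return path

def replay(out_dir: str):
    """Yield recorded frames in order, for offline rendering and annotation."""
    for name in sorted(os.listdir(out_dir)):
        with open(os.path.join(out_dir, name)) as f:
            yield FrameRecord(**json.load(f))
```

Decoupling the lightweight per-frame dump from the expensive offline rendering pass is what lets the same recorded trajectory be reproduced repeatedly to generate different ground-truth modalities (segmentation masks, depth, etc.).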
Sponsors: Instituto Universitario de Investigación en Informática. This work has also been funded by the Spanish Government Grant TIN2016-76515-R for the COMBAHO project, supported with FEDER funds. It has also been supported by three Spanish national grants for Ph.D. studies (FPU15/04516, FPU17/00166, and ACIF/2018/197), by University of Alicante Project GRE16-19, and by Valencian Government Project GV/2018/022. Experiments were made possible by a generous hardware donation from NVIDIA.
URI: https://github.com/3dperceptionlab/unrealrox | http://hdl.handle.net/10045/113501
DOI: 10.1007/s10055-019-00399-5
Language: eng
Type: software
Rights: © Universitat d'Alacant / Universidad de Alicante. Distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license
Peer reviewed: no
Publisher version: https://doi.org/10.1007/s10055-019-00399-5
Appears in collection: Registro de Programas de Ordenador y Bases de Datos

Files in this item:
File: GitHub-3dperceptionlab_unrealrox.pdf | Size: 185,72 kB | Format: Adobe PDF


This item is licensed under a Creative Commons license.