Learning in real robots from environment interaction

Title: Learning in real robots from environment interaction
Authors: Quintía Vidal, Pablo | Iglesias Rodríguez, Roberto | Rodríguez González, Miguel Ángel | Vázquez Regueiro, Carlos | Valdés Villarrubia, Fernando
Keywords: Continuous robot learning | Robot adaptation | Learning from environment interaction | Reinforcement learning
Knowledge Area: Computer Science and Artificial Intelligence
Issue Date: Jan-2012
Publisher: Red de Agentes Físicos
Citation: QUINTÍA, P., et al. “Learning in real robots from environment interaction”. Journal of Physical Agents. Vol. 6, No. 1 (Jan. 2012). ISSN 1888-0258, pp. 43-51
Abstract: This article describes a proposal to achieve fast robot learning from interaction with the environment. The proposal is suitable for continuous learning procedures, as it tries to limit the instability that appears every time the robot encounters a new situation it has not seen before. Moreover, the user does not have to set a degree of exploration (as is usual in reinforcement learning), which would otherwise hinder continual learning. The proposal uses an ensemble of learners that combine dynamic programming and reinforcement learning to predict when the robot will make a mistake. This information is used to dynamically evolve a set of control policies that determine the robot's actions.
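
The abstract describes an ensemble of learners that predict when the robot will make a mistake and uses those predictions to evolve the control policy. The paper itself details how dynamic programming and reinforcement learning are combined; the snippet below is only a minimal sketch of the general idea, under assumed simplifications (tabular risk estimates, a toy discrete state space, and a simple weight-decay rule for mispredicting learners), not the authors' implementation.

```python
# Hypothetical sketch (not the paper's algorithm): each ensemble member
# estimates the probability that a (state, action) pair leads to a
# "mistake" (e.g. a collision). The robot picks the least risky action,
# and learners that mispredict are down-weighted, so the effective
# control policy evolves with experience.
import random


class MistakePredictor:
    """One ensemble member: tabular estimate of P(mistake | state, action)."""

    def __init__(self, learning_rate=0.1):
        self.risk = {}              # (state, action) -> estimated P(mistake)
        self.lr = learning_rate

    def predict(self, state, action):
        return self.risk.get((state, action), 0.5)   # 0.5 = unknown situation

    def update(self, state, action, mistake):
        old = self.predict(state, action)
        self.risk[(state, action)] = old + self.lr * (float(mistake) - old)


class EnsemblePolicy:
    """Weighted ensemble: score actions by weighted average predicted risk."""

    def __init__(self, n_learners, actions):
        self.learners = [MistakePredictor(random.uniform(0.05, 0.3))
                         for _ in range(n_learners)]
        self.weights = [1.0] * n_learners
        self.actions = actions

    def choose_action(self, state):
        def score(action):
            total = sum(self.weights)
            return sum(w * l.predict(state, action)
                       for w, l in zip(self.weights, self.learners)) / total
        return min(self.actions, key=score)

    def learn(self, state, action, mistake):
        for i, learner in enumerate(self.learners):
            predicted_mistake = learner.predict(state, action) > 0.5
            if predicted_mistake != mistake:
                self.weights[i] *= 0.9    # penalize learners that were wrong
            learner.update(state, action, mistake)


if __name__ == "__main__":
    # Toy interaction loop: driving "forward" with an obstacle ahead is a mistake.
    policy = EnsemblePolicy(n_learners=5, actions=["left", "forward", "right"])
    for _ in range(200):
        state = random.choice(["clear", "obstacle_ahead"])
        action = policy.choose_action(state)
        mistake = (state == "obstacle_ahead" and action == "forward")
        policy.learn(state, action, mistake)
    print(policy.choose_action("obstacle_ahead"))    # should avoid "forward"
```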
Sponsor: This work was supported by the research grants TIN2009-07737 and INCITE08PXIB262202PR.
URI: http://hdl.handle.net/10045/21361 | http://dx.doi.org/10.14198/JoPha.2012.6.1.06
ISSN: 1888-0258
DOI: 10.14198/JoPha.2012.6.1.06
Language: eng
Type: info:eu-repo/semantics/article
Peer Review: Yes
Appears in Collections: Journal of Physical Agents - 2012, Vol. 6, No. 1

Files in This Item:
File: JoPha_6_1_06.pdf | Size: 3,35 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.