A. Fachantidis, A. Di Nuovo, A. Cangelosi, and I. Vlahavas, "Model-based reinforcement learning for humanoids: A study on forming rewards with the iCub platform," in Proc. IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), IEEE SSCI, Singapore, 2013, pp. 87-93.
Technological advances in robotics and cognitive science are driving the development of the field of cognitive robotics. Modern robotic platforms can learn and reason about complex tasks and pursue behavioural goals in complex environments. Nevertheless, many challenges remain. One major challenge is to equip these robots with cognitive systems that allow them to operate in less constrained settings than the highly structured scenarios of industrial robotics. In this work we explore the application of the Reinforcement Learning (RL) paradigm to the autonomous development of robot controllers without a priori supervised learning. We discuss a model-based RL architecture and the cognitive implications of applying RL to humanoid robots. To this end, we present a developmental framework for RL in robotics together with its implementation and evaluation on the iCub robotic platform in two novel experimental scenarios. In particular, we focus on iCub simulation experiments that compare internal, perception-based reward signals with external ones, contrasting the learning performance of a robot guided by its own perception of action outcomes with that of a robot whose actions are evaluated externally.
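To make the comparison concrete, the following is a minimal sketch of a Dyna-style model-based RL loop in which the reward can come either from an internal, perception-based signal (the agent rewards itself for perceiving progress toward the goal) or from an external evaluator (reward is given only on task completion). The toy one-dimensional "reaching" task, the function names, and the reward definitions are illustrative assumptions, not the implementation used in the paper.

```python
# Illustrative sketch only: Dyna-style model-based RL with a switchable reward
# source (internal perception-based vs. external evaluation). All task details
# are hypothetical and chosen for brevity.
import random
from collections import defaultdict

GOAL = 8            # target hand position on a 1-D line (hypothetical task)
ACTIONS = (-1, +1)  # move hand left / right

def internal_reward(state, next_state):
    # Internal signal: the agent rewards itself when it perceives that the
    # action brought its hand closer to the target.
    return 1.0 if abs(GOAL - next_state) < abs(GOAL - state) else -0.1

def external_reward(state, next_state):
    # External evaluation: a "teacher" rewards only reaching the goal.
    return 10.0 if next_state == GOAL else 0.0

def dyna_q(reward_fn, episodes=200, alpha=0.5, gamma=0.95,
           eps=0.1, planning_steps=10):
    Q = defaultdict(float)   # Q[(state, action)]
    model = {}               # learned model: (s, a) -> (r, s')
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # epsilon-greedy action selection
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda a_: Q[(s, a_)])
            s2 = min(max(s + a, 0), 10)   # real environment step
            r = reward_fn(s, s2)          # reward source under comparison
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
                                  - Q[(s, a)])
            model[(s, a)] = (r, s2)       # update the learned model
            for _ in range(planning_steps):  # Dyna planning with the model
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, a2)] for a2 in ACTIONS)
                                        - Q[(ps, pa)])
            s = s2
            if s == GOAL:
                break
    return Q

if __name__ == "__main__":
    random.seed(0)
    q_internal = dyna_q(internal_reward)
    q_external = dyna_q(external_reward)
    print("greedy first action (internal reward):",
          max(ACTIONS, key=lambda a: q_internal[(0, a)]))
    print("greedy first action (external reward):",
          max(ACTIONS, key=lambda a: q_external[(0, a)]))
```

The denser internal signal shapes the value function at every step, whereas the sparse external signal must be propagated back from the goal through planning; comparing the two learning curves is the kind of experiment the abstract describes, here reduced to a toy setting.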