Ago ergo sum! (I act, therefore I am.)

According to the embodied approach to cognition, the development of intelligent behaviour depends strongly on active exploration and sensorimotor interaction with the world. There is indeed strong evidence that sensorimotor knowledge of possible action outcomes improves adaptability and the prediction of behaviour under uncertainty, from simple actions such as moving or grasping an object to more complex social behaviours such as empathizing with other agents or coordinating the behaviour of multiple agents. However, it is not clear how multimodal sensorimotor temporal and spatial associations could be integrated into a single representation of the self, or into an individual sense of agency or ownership. What is a sensorimotor self? How can sensorimotor learning ground the development of a self? And what could be the benefits of an integrated bodily self-model for supporting effective action in an uncertain world (e.g., through prediction)?

This interdisciplinary workshop brings together roboticists, psychologists and cognitive scientists to address these questions and to discuss the challenges and possibilities of building real-life robots that are able to develop their own bodily self-schema and use it for interaction. The background assumption is that by modelling and assessing the development of a sensorimotor self-schema in a synthetic setting, i.e. in a robot, we will clarify and gain new insights into the underlying mechanisms and processes, as well as their interrelations.

Understanding these mechanisms could also pave the way for improving robot interaction capabilities. Recent advances in sensorimotor learning, combined with new multimodal sensing devices, now make it possible for robots to acquire their own perceptual models. However, the utility of building such models for real interaction in unstructured environments remains controversial, owing to their limited efficiency and scalability compared to non-developmental approaches. This workshop will therefore also examine current computational models for the active construction of the self-perceptual schema and their reusability for incremental interaction.