We explore the problem of visual attention learning in unconstrained environments, where many visual regions may attract the robot's attention. In our approach, the overall visual field is treated as a state space in which all regions initially have equal probability of being attended. When the caregiver grabs or points to a particular object region, region properties such as proximity are associated with a reward signal, and attention learning is achieved via reinforcement learning. To speed up the learning process, we apply a state-space reduction technique in which the robot uses a bottom-up attention model, based on 3D depth saliency and 2D feature saliency, to construct a reduced state space that enables attention learning in real time.
Christian I. Penaloza, Yasushi Mae, Kenichi Ohara, and Tatsuo Arai: "Robot Attention Learning in Unconstrained Environments", GCOE Cognitive Neuroscience Robotics Workshop (Poster), Nagoya, Japan, January 19-20, 2013.
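A minimal sketch of the learning loop described in the abstract above: the visual field is reduced to the regions flagged by the bottom-up saliency model, and a tabular Q-update associates the caregiver's pointing cue, weighted by proximity, with a reward. All names, values, and the reward scheme are illustrative assumptions, not details taken from the paper.

```python
import random

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

def update_q(q, state, action, reward, next_state, actions):
    """Standard tabular Q-learning update."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def caregiver_reward(region, pointed_region):
    # Reward is tied to region properties such as proximity: a pointed-at,
    # nearby region earns more than a distant one (illustrative scheme).
    if region["id"] != pointed_region:
        return 0.0
    return 1.0 / (1.0 + region["depth_m"])  # closer object -> larger reward

# Reduced state space: only regions flagged salient by the bottom-up model.
regions = [
    {"id": "cup",  "depth_m": 0.5, "salient": True},
    {"id": "wall", "depth_m": 3.0, "salient": False},
    {"id": "toy",  "depth_m": 0.8, "salient": True},
]
salient = [r for r in regions if r["salient"]]
actions = [r["id"] for r in salient]

random.seed(0)
q = {}
for _ in range(200):
    r = random.choice(salient)                # robot attends a candidate region
    reward = caregiver_reward(r, "cup")       # caregiver points at the cup
    update_q(q, "tutoring", r["id"], reward, "tutoring", actions)

# After learning, the robot attends the region with the highest Q-value.
best = max(actions, key=lambda a: q.get(("tutoring", a), 0.0))
```

Restricting the action set to the salient regions is what makes the tabular update tractable in real time; over the full visual field the state space would be far too large.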
We explore the problem of attention models for robot tutoring as it relates to the cognitive development of infants. We discuss the factors that strongly influence infant attention and how these factors can be taken into account to develop robot attention models that simulate infants' cognitive stimuli. In particular, we focus on the attention given to objects that an adult brings closer to the infant. Using object distance as a key factor for increasing visual attention, our model combines depth information with the well-known bottom-up visual attention model based on saliency in order to increase attention accuracy even when objects with non-salient features are shown to the robot or when tutoring takes place in cluttered environments. Our model also considers the presence or absence of a human tutor to decide whether a tutoring activity is likely taking place. Experimental results suggest that depth information is a key factor in emulating infants' effective attention.
Christian I. Penaloza, Yasushi Mae, Kenichi Ohara, and Tatsuo Arai: "Using Depth to Increase Robot Visual Attention Accuracy during Tutoring", IEEE International Conference on Humanoid Robots, Workshop on Developmental Robotics, Osaka, Japan, November 29 - December 1, 2012.
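The depth-augmented attention described above can be sketched as a weighted combination of a 2D feature-saliency map with a depth-saliency map that favors closer objects. The inverse-depth mapping and the weights are assumptions for illustration, not values reported in the paper.

```python
import numpy as np

def depth_saliency(depth_m):
    """Map metric depth to [0, 1]: nearer pixels are more salient."""
    inv = 1.0 / np.maximum(depth_m, 1e-6)
    return inv / inv.max()

def combined_saliency(feature_map, depth_m, w_feat=0.4, w_depth=0.6):
    """Blend normalized 2D feature saliency with depth saliency
    (weights are illustrative assumptions)."""
    feat = feature_map / max(feature_map.max(), 1e-6)
    return w_feat * feat + w_depth * depth_saliency(depth_m)

# Toy 2x2 scene: the top-left pixel is feature-salient but far away; the
# top-right pixel has weak features but is close. Depth information shifts
# attention toward the near object, as the abstract argues.
features = np.array([[0.9, 0.1],
                     [0.2, 0.1]])
depth = np.array([[3.0, 0.4],
                  [2.5, 0.5]])
sal = combined_saliency(features, depth)
focus = np.unravel_index(np.argmax(sal), sal.shape)  # attended pixel
```

With feature saliency alone the far, high-contrast pixel would win; adding the depth term moves the attention focus to the nearby object even though its 2D features are weak, which is the behavior the experiments attribute to depth information.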