Document details

Vision based context categorization for all-terrain robot

Author(s): Chaínho, David Alexandre Calado Pereira

Date: 2010

Persistent ID: http://hdl.handle.net/10362/5677

Origin: Repositório Institucional da UNL


Description
This dissertation presents a model that allows an autonomous robot to incrementally learn associations between the global context in which it is immersed and the behaviours most frequently used by the robot in that specific context. In a way, the robot learns what opportunities a given environment can provide in terms of behaviour (e.g., obstacle avoidance, trail following). The proposed model aims at helping the robot prioritise its perceptual resources, and consequently contributes to improving its visual capabilities or skills. In order to capture the global context, a gist mechanism is used to obtain a global descriptor of the scene. The focus on affordances, rather than on objects, i.e., associating context with behaviour instead of with the objects that activate the behaviours, enables a self-supervised learning mechanism without assuming the existence of symbolic object representations, thus facilitating the integration of the model into a developmental framework. The focus on affordances also contributes to our understanding of the role of sensorimotor coordination in the organisation of adaptive behaviour. Positive results are obtained with a physical experiment in a natural environment, where a handheld camera was transported as if it were being carried by an actual robot with a set of predefined behaviours, such as obstacle avoidance, trail following, and wandering. Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the Master degree in Electrical and Computer Engineering.
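To illustrate the idea of a gist descriptor, the sketch below computes a coarse global signature of a grayscale scene by averaging brightness and gradient magnitude over a fixed grid of cells. This is a minimal, illustrative stand-in: the function name, grid size, and feature choices are assumptions of this example, not the dissertation's exact gist mechanism.

```python
import numpy as np

def gist_descriptor(image, grid=4):
    """Coarse global 'gist' of a grayscale image.

    For each cell of a grid x grid partition, record the mean intensity
    and the mean gradient magnitude, yielding a 2 * grid**2 vector.
    Illustrative sketch only; not the author's exact method.
    """
    h, w = image.shape
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)  # per-pixel edge strength
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            feats.append(image[cell].mean())  # average brightness
            feats.append(grad[cell].mean())   # average edge strength
    return np.array(feats)
```

A context–behaviour association could then be learned in a self-supervised way by pairing each such descriptor with the behaviour that was active when the frame was captured, without any symbolic object labels.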
Document Type: Master Thesis
Language: English
Advisor(s): Oliveira, José

Related documents

No related documents