Imitation learning through self-exploration : from body-babbling to visuomotor association / Farhan Dawood
Main Author: Farhan Dawood
Format: Thesis
Published: 2015
Institution: Universiti Malaya
Online Access: http://studentsrepo.um.edu.my/5918/1/FarhanDawood_WHA110032.pdf ; http://studentsrepo.um.edu.my/5918/
Summary: Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes its own actions has received noteworthy empirical support. Imitation learning through self-exploration is essential to the development of sensorimotor skills in infants. Self-exploration is regarded as a process by which infants become perceptually attentive to their own bodies and engage in perceptual communication with themselves. It is assumed that a crude sense of self is a prerequisite for social interaction rather than an outcome of it. However, the role of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this thesis, I present a computational model for the development of a mirror neuron system, based on the hypothesis that infants acquire the mirror neuron system through sensorimotor associative learning during self-exploration, which enables the system to understand a perceived action by taking into account the view-dependency of neurons as a probable outcome of their associative connectivity.
In our mirror experiment, a humanoid robot stands in front of a glass mirror in order to learn the associative relationship between its own motor-generated actions and its own visual body-image. First, the continuous flow of motion patterns is segmented into motion primitives by identifying action boundaries through Incremental Kernel Slow Feature Analysis. The segmentation model operates directly on the images acquired from the robot's vision sensor (camera), without requiring any kinematic model of the demonstrator. After segmentation, the spatio-temporal motion sequences are learned incrementally through a Topological Gaussian Adaptive Resonance Hidden Markov Model.
Later, a visuo-motor association is developed through a novel Topological Gaussian Adaptive Resonance Associative Memory. The learning model dynamically generates the topological structure in a self-stabilizing manner. Finally, after learning, a partner robot performs a similar action in front of the robot, and the robot recalls the corresponding motor command from memory. In the learning process, the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated in simulation experiments performed on the DARwIn-OP humanoid robot. The results show that the imitation learning algorithm is able to incrementally learn and associate the observed motion patterns based on the segmentation of motion primitives.
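The store-and-recall behaviour of the visuo-motor association can be sketched as a toy heteroassociative memory. The class below is a hedged illustration (the class name, interface, and nearest-prototype recall rule are assumptions): the thesis's Topological Gaussian Adaptive Resonance Associative Memory instead grows Gaussian nodes incrementally under an ART-style vigilance test, but the basic interface of associating self-observed visual patterns with motor commands and recalling a command from a newly observed pattern is analogous.

```python
import numpy as np

class VisuoMotorMemory:
    """Toy heteroassociative memory (illustrative only): stores
    (visual, motor) pattern pairs gathered during self-exploration and
    recalls the motor command whose stored visual prototype is nearest
    to a newly observed visual pattern."""

    def __init__(self):
        self.visual = []   # visual prototypes (self-observed body images)
        self.motor = []    # associated motor commands

    def associate(self, visual_pattern, motor_command):
        # Store one visuo-motor pair observed during self-exploration.
        self.visual.append(np.asarray(visual_pattern, dtype=float))
        self.motor.append(np.asarray(motor_command, dtype=float))

    def recall(self, observed_visual):
        # Return the motor command of the closest stored visual prototype.
        v = np.asarray(observed_visual, dtype=float)
        dists = [np.linalg.norm(v - p) for p in self.visual]
        return self.motor[int(np.argmin(dists))]
```

For example, after associating two visual prototypes with two motor commands, presenting a slightly perturbed version of the first prototype recalls the first command. Extending recall across viewpoints, as the thesis does, would require storing prototypes from multiple visual perspectives against the same motor representation.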