Tactile classification with supervised autoencoder and joint learning


Bibliographic Details
Main Author: Gao, Ruihan
Other Authors: Lin Zhiping
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access: https://hdl.handle.net/10356/138711
Physical Description
Summary: Tactile sensing, or sense of touch, is one of the essential perception modalities for human beings. It provides abundant information about the environment upon contact, such as force, vibration, temperature, and so on. However, unlike standard RGB images in the computer vision field, abstruse data formats and variations in sensor design pose obstacles to intelligent tactile learning on a large scale. In this report, we propose a recurrent autoencoder unit with a distinct header network to compress the raw input data into a latent-space embedding that represents spatial and temporal information in a compact form. In addition, we propose a joint training framework that takes advantage of different sensors, which prove to complement each other. The results demonstrate improvements in both classification accuracy and learning efficiency compared to state-of-the-art baseline methods. The work was written as a conference paper submitted to the International Conference on Intelligent Robots and Systems (IROS) 2020. The experimental data have also been prepared for exploratory research collaboration in the area of neuromorphic computing, which also contributed to another submission to IROS 2020.
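To make the architecture described in the summary concrete, the sketch below shows the general shape of a recurrent autoencoder with a separate classification head: a step-by-step recurrent encoder compresses a tactile time series into a compact latent embedding, from which one branch reconstructs the input and another predicts a class. This is an illustrative sketch only, not the thesis implementation; all layer dimensions, class counts, and weight initializations are arbitrary assumptions, and training (reconstruction plus classification loss) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentAutoencoderClassifier:
    """Sketch: RNN encoder -> latent embedding -> (reconstruction, class logits).

    All dimensions are illustrative, not taken from the thesis.
    """

    def __init__(self, input_dim, hidden_dim, latent_dim, n_classes):
        w = lambda *shape: rng.normal(0.0, 0.1, shape)  # random init (untrained)
        self.W_xh = w(input_dim, hidden_dim)    # input  -> hidden
        self.W_hh = w(hidden_dim, hidden_dim)   # hidden -> hidden (recurrence)
        self.W_hz = w(hidden_dim, latent_dim)   # hidden -> latent embedding
        self.W_zx = w(latent_dim, input_dim)    # latent -> reconstruction
        self.W_zy = w(latent_dim, n_classes)    # latent -> class logits (head)

    def forward(self, x):
        """x: (T, input_dim), one tactile sequence of T time steps."""
        h = np.zeros(self.W_hh.shape[0])
        for t in range(x.shape[0]):             # encode temporal structure
            h = np.tanh(x[t] @ self.W_xh + h @ self.W_hh)
        z = np.tanh(h @ self.W_hz)              # compact spatio-temporal code
        recon = z @ self.W_zx                   # decoder (single-step sketch)
        logits = z @ self.W_zy                  # classification header
        return z, recon, logits

# Hypothetical sensor: 39-dim readings over 75 time steps, 20 object classes.
model = RecurrentAutoencoderClassifier(input_dim=39, hidden_dim=32,
                                       latent_dim=8, n_classes=20)
z, recon, logits = model.forward(rng.normal(size=(75, 39)))
print(z.shape, recon.shape, logits.shape)  # (8,) (39,) (20,)
```

In a joint-training setting of the kind the summary mentions, one such encoder per sensor could be trained with a shared classification loss, so that complementary sensors shape a common label space while each keeps its own reconstruction objective.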