Incremental learning framework for indoor scene recognition
Main Authors:
Format: Conference Proceeding
Published: 2018
Online Access:
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84893409703&origin=inward
http://cmuir.cmu.ac.th/jspui/handle/6653943832/47418
Institution: Chiang Mai University
Summary: This paper presents a novel framework for online incremental place recognition in indoor environments. The framework addresses the scenario in which scene images are gradually obtained during long-term operation in a real-world indoor environment. Multiple users may interact with the classification system and confirm either current or past prediction results; the system then immediately updates itself to improve classification. The framework is based on the proposed n-value self-organizing and incremental neural network (n-SOINN), derived by modifying the original SOINN to suit scene recognition. Evaluation on the standard MIT 67-category indoor scene dataset shows that the proposed framework matches the accuracy of the state-of-the-art offline method while requiring significantly less computation time and supporting fully incremental updates. Additionally, a small extra set of training samples is given incrementally to the system to simulate the incremental learning situation; the results show that the proposed framework can leverage these additional samples to achieve a state-of-the-art result. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
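As a rough illustration of the online update loop described in the summary, the following Python sketch shows a nearest-prototype classifier that updates immediately when a user confirms a label. It is an assumption-laden sketch, not the paper's n-SOINN algorithm: the class name, distance threshold, and update rule are all hypothetical stand-ins.

```python
import numpy as np


class IncrementalPrototypeClassifier:
    """Nearest-prototype classifier with SOINN-flavoured incremental updates.

    Illustrative sketch only: the parameters and update rule below are
    assumptions, not the n-SOINN method from the paper.
    """

    def __init__(self, insert_threshold=2.0, learning_rate=0.1):
        self.insert_threshold = insert_threshold  # distance beyond which a new prototype is inserted
        self.learning_rate = learning_rate        # step size when pulling a prototype toward a sample
        self.prototypes = []                      # list of (feature_vector, label) pairs

    def predict(self, x):
        """Return the label of the nearest prototype, or None if nothing has been learned yet."""
        if not self.prototypes:
            return None
        x = np.asarray(x, dtype=float)
        distances = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        return self.prototypes[int(np.argmin(distances))][1]

    def confirm(self, x, label):
        """Immediate incremental update once a user confirms the true label of a sample."""
        x = np.asarray(x, dtype=float)
        same_class = [(i, np.linalg.norm(x - p))
                      for i, (p, y) in enumerate(self.prototypes) if y == label]
        if same_class:
            i, dist = min(same_class, key=lambda t: t[1])
            if dist <= self.insert_threshold:
                # Close enough: move the existing prototype toward the confirmed sample.
                p, y = self.prototypes[i]
                self.prototypes[i] = (p + self.learning_rate * (x - p), y)
                return
        # Far from all prototypes of this class (or class unseen): insert a new node.
        self.prototypes.append((x, label))


# Simulate gradually arriving scene features confirmed by users.
clf = IncrementalPrototypeClassifier()
clf.confirm([0.0, 0.0], "kitchen")
clf.confirm([5.0, 5.0], "library")
print(clf.predict([0.4, -0.3]))  # kitchen
print(clf.predict([4.9, 5.2]))   # library
```

A full SOINN-family implementation would additionally track node ages and local density to prune noisy nodes; the fixed distance threshold here merely stands in for that machinery.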