2016
Online Joint Learning of Object Concepts and Language Model using Multimodal Hierarchical Dirichlet Process
2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016)
- First page
- 2636
- Last page
- 2642
- Language
- English
- Publication type
- Research paper (international conference proceedings)
- DOI
- 10.1109/IROS.2016.7759410
- Publisher
- IEEE
One of the biggest challenges in intelligent robotics is to build robots that can learn to use language. To this end, we regard a practical, long-term, online concept/word learning algorithm for robots as a key issue to be addressed. In this paper, we develop an unsupervised online learning algorithm that uses Bayesian nonparametrics to categorize multimodal sensory signals, such as audio, visual, and haptic information, for robots. The robot uses its physical body to grasp and observe an object from various viewpoints, and it listens to the sound during the observation. The most important property of the proposed framework is that it learns multimodal concepts and a language model simultaneously. This mutual learning of concepts and language significantly improves both speech recognition and multimodal categorization performance. We conducted a long-term experiment in which a human subject interacted with a real robot for over 100 hours using 499 objects. Some interesting results of the experiment are discussed in this paper.
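The core idea of the abstract, that the number of object categories is not fixed in advance but grows as new multimodal observations arrive, can be loosely illustrated with a toy sketch. The following is not the paper's multimodal hierarchical Dirichlet process; it is a much simpler DP-means-style stand-in in which a new category is created whenever no existing category explains an observation well. The feature vectors, threshold value, and modality layout are invented for illustration.

```python
import math

def dp_means(points, lam):
    """Toy online categorization in the spirit of Bayesian nonparametrics.

    Each point is assigned to the nearest existing category mean; if the
    nearest mean is farther than the threshold `lam`, a new category is
    created (DP-means; a simplified stand-in for the paper's multimodal HDP).
    """
    centers, counts, labels = [], [], []
    for x in points:
        if centers:
            dists = [math.dist(x, c) for c in centers]
            k = min(range(len(dists)), key=dists.__getitem__)
        if not centers or dists[k] > lam:
            # No existing category fits well: open a new one.
            centers.append(list(x))
            counts.append(1)
            labels.append(len(centers) - 1)
        else:
            # Assign to category k and update its mean incrementally.
            counts[k] += 1
            centers[k] = [c + (xi - c) / counts[k] for c, xi in zip(centers[k], x)]
            labels.append(k)
    return labels, centers

# Hypothetical multimodal features: concatenated (visual, audio, haptic) values.
objects = [(0.0, 0.1, 0.0), (0.1, 0.0, 0.1),   # one kind of object
           (5.0, 5.1, 4.9), (5.1, 5.0, 5.0)]   # a clearly different kind
labels, centers = dp_means(objects, lam=2.0)
print(labels)  # → [0, 0, 1, 1]
```

Because the model is processed one observation at a time, this style of update fits the long-term online setting of the experiment, where objects arrive sequentially over many sessions; the actual paper additionally couples categorization with language-model learning, which this sketch omits.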
- Link information
- ID information
- DOI : 10.1109/IROS.2016.7759410
- Web of Science ID : WOS:000391921702120