Paper

Peer-reviewed
2012

Online Object Categorization Using Multimodal Information Autonomously Acquired by a Mobile Robot

ADVANCED ROBOTICS
  • Takaya Araki
  • Tomoaki Nakamura
  • Takayuki Nagai
  • Kotaro Funakoshi
  • Mikio Nakano
  • Naoto Iwahashi

Volume
26
Issue
17
Start page
1995
End page
2020
Language
English
Publication type
Research paper (academic journal)
DOI
10.1080/01691864.2012.728693
Publisher
TAYLOR & FRANCIS LTD

In this paper, we propose a robot that acquires multimodal information, i.e., visual, auditory, and haptic information, fully autonomously using its embodiment. We also propose batch and online algorithms for multimodal categorization based on the acquired multimodal information and partial words given by human users. To obtain multimodal information, the robot detects an object on a flat surface. Then, the robot grasps and shakes it to obtain haptic and auditory information. To obtain visual information, the robot uses a small hand-held observation table with an XBee wireless controller to control the viewpoints from which the object is observed. In this paper, for multimodal concept formation, multimodal latent Dirichlet allocation using Gibbs sampling is extended to an online version. This framework makes it possible for the robot to learn object concepts naturally during everyday operation, in conjunction with a small amount of linguistic information from human users. The proposed algorithms are implemented on a real robot and tested using real everyday objects to demonstrate the validity of the proposed system. (C) 2012 Taylor & Francis and The Robotics Society of Japan
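The abstract's core idea, multimodal latent Dirichlet allocation with Gibbs sampling extended to an online version, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `online_mlda_step`, the simplified "absorb each object's counts into global statistics, then discard per-object state" update, and all parameter names are illustrative assumptions. The key multimodal coupling is that one topic-proportion vector per object is shared by the visual, auditory, and haptic observations.

```python
import numpy as np

def online_mlda_step(obj_words, n_topics, global_counts,
                     alpha=1.0, beta=0.1, iters=50, rng=None):
    """One online update: Gibbs-sample topic assignments for a single
    object's multimodal observations, then fold its counts into the
    persistent global topic-word statistics.

    obj_words:     dict modality -> list of word ids observed for this object
    global_counts: dict modality -> (n_topics, vocab) array of topic-word
                   counts accumulated from previously processed objects
    Returns the object's inferred topic proportions (its "concept").
    """
    rng = rng or np.random.default_rng(0)
    # Per-object topic counts, shared across modalities (multimodal coupling).
    n_obj = np.zeros(n_topics)
    assign = {m: np.full(len(w), -1) for m, w in obj_words.items()}
    for _ in range(iters):
        for m, words in obj_words.items():
            nw = global_counts[m]            # counts from past objects only
            V = nw.shape[1]                  # (a common online approximation)
            for i, w in enumerate(words):
                z = assign[m][i]
                if z >= 0:
                    n_obj[z] -= 1            # remove current assignment
                # Collapsed Gibbs conditional:
                #   p(z) ∝ (n_obj[z] + alpha) * (nw[z,w] + beta) / (nw[z,:].sum() + V*beta)
                p = (n_obj + alpha) * (nw[:, w] + beta) / (nw.sum(axis=1) + V * beta)
                z = rng.choice(n_topics, p=p / p.sum())
                assign[m][i] = z
                n_obj[z] += 1
    # Online step: absorb this object's statistics into the global model,
    # so per-object assignments need not be stored between objects.
    for m, words in obj_words.items():
        for i, w in enumerate(words):
            global_counts[m][assign[m][i], w] += 1
    return (n_obj + alpha) / (n_obj.sum() + n_topics * alpha)
```

In this sketch the robot would call `online_mlda_step` once per newly grasped object, passing quantized visual, auditory, and haptic features (and any partial words from the user as one more modality) as the word lists; the growing `global_counts` arrays play the role of the learned object concepts.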

Link information
DOI
https://doi.org/10.1080/01691864.2012.728693
Web of Science
https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=JSTA_CEL&SrcApp=J_Gate_JST&DestLinkType=FullRecord&KeyUT=WOS:000310611500004&DestApp=WOS_CPL
ID information
  • DOI : 10.1080/01691864.2012.728693
  • ISSN : 0169-1864
  • eISSN : 1568-5535
  • Web of Science ID : WOS:000310611500004

Export
BibTeX RIS