Paper

Peer-reviewed
2006

Fast and stable learning of Quasi-Passive Dynamic Walking by an unstable biped robot based on off-policy natural actor-critic

IEEE International Conference on Intelligent Robots and Systems
Tsuyoshi Ueno, Yutaka Nakamura, Takashi Takuma, Tomohiro Shibata, Koh Hosoda, Shin Ishii

First page
5226
Last page
5231
Language
English
Publication type
Research paper (international conference proceedings)
DOI
10.1109/IROS.2006.281663

Recently, many researchers in humanoid robotics have become interested in Quasi-Passive-Dynamic Walking (Quasi-PDW), which resembles human walking. It is desirable that the control parameters in Quasi-PDW be adjusted automatically, because robots often suffer from changes in their physical parameters and in the surrounding environment. Reinforcement learning (RL) can be a key technology for this adaptability, and it has been shown in a simulation study that RL can realize Quasi-PDW. Applying the existing method to real robots, however, requires faster learning; otherwise the robots will break down before acquiring appropriate controllers. To accelerate learning, this study employs the off-policy natural actor-critic (off-NAC) and applies it to the acquisition of Quasi-PDW. The most important feature of the off-NAC is that it reuses samples that have already been obtained by previous controllers. This study also presents an adaptive method for the learning rate. Simulations as well as real-robot experiments demonstrate that fast and stable learning of Quasi-PDW by an unstable biped robot can be realized by our modified off-NAC. © 2006 IEEE.
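The key feature highlighted in the abstract, reusing samples collected under previous controllers, relies on importance sampling to correct for the mismatch between the current policy and the older behavior policy. The following is a minimal illustrative sketch of that idea for a 1-D Gaussian policy with a toy reward; the policy parameters, reward function, and step size are assumptions for illustration only, not the paper's actual robot controller or the full natural-gradient (off-NAC) machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D Gaussian policies: pi (current, mean mu_pi) and
# b (older behavior policy, mean mu_b) that generated the stored samples.
mu_pi, mu_b, sigma = 0.2, 0.0, 1.0

def logpdf(a, mu):
    # Log-density of a Gaussian with mean mu and std sigma.
    return -0.5 * ((a - mu) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)

# Samples drawn earlier under the behavior policy b, with a toy return
# that peaks at action a = 0.5 (purely illustrative).
a_old = rng.normal(mu_b, sigma, size=5000)
returns = -(a_old - 0.5) ** 2

# Importance weights pi(a)/b(a) correct for the policy mismatch,
# so the old samples still yield an unbiased gradient estimate for pi.
w = np.exp(logpdf(a_old, mu_pi) - logpdf(a_old, mu_b))

# Score function of the Gaussian mean parameter: d log pi(a) / d mu.
score = (a_old - mu_pi) / sigma**2

# Off-policy ("vanilla", not natural) policy-gradient estimate,
# followed by one actor update step with a fixed learning rate.
grad = np.mean(w * returns * score)
mu_new = mu_pi + 0.1 * grad
```

Since the toy return is maximized at a = 0.5 and the current mean is 0.2, the estimated gradient is positive and the update moves the policy mean toward the optimum even though all samples came from the older policy. The off-NAC additionally preconditions this gradient with the inverse Fisher information matrix (the natural gradient), which is omitted here for brevity.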

Links
DOI
https://doi.org/10.1109/IROS.2006.281663
URL
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.98.9675
IDs
  • DOI : 10.1109/IROS.2006.281663
  • SCOPUS ID : 34250613580
