Paper

December 2018

Accelerating Deep Q Network by weighting experiences

Lecture Notes in Computer Science (Neural Information Processing. ICONIP 2018)
  • Kazuhiro Murakami
  • Koichi Moriyama
  • Atsuko Mutoh
  • Tohgoroh Matsui
  • Nobuhiro Inuzuka

Volume
11301
First page
204
Last page
213
Language
English
Publication type
Research paper (international conference proceedings)
Publisher
Springer

Deep Q Network (DQN) is a reinforcement learning methodology that uses deep neural networks to approximate the Q-function. The literature shows that DQN can select better actions than humans in some tasks. However, DQN requires a long time to learn appropriate actions from tuples of state, action, reward, and next state, called "experiences", sampled from its memory. DQN samples experiences uniformly at random, but their distribution is skewed, which slows learning: frequent experiences are sampled redundantly while infrequent ones are rarely sampled. This work mitigates the problem by weighting experiences according to their frequency and adjusting their sampling probability. In a video game environment, the proposed method learned appropriate actions faster than DQN.
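The core idea described in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes experiences are grouped into discrete buckets by a user-supplied `key_fn` (e.g. a state discretization), and gives each experience a sampling weight inversely proportional to its bucket's frequency, so rare experiences are drawn more often than under uniform sampling.

```python
import random
from collections import deque


class FrequencyWeightedReplayBuffer:
    """Hypothetical replay buffer that oversamples infrequent experiences.

    Each experience (state, action, reward, next_state) is bucketed by
    key_fn; an experience's sampling weight is 1 / (frequency of its
    bucket), so every bucket gets roughly equal probability mass.
    """

    def __init__(self, capacity, key_fn):
        self.buffer = deque(maxlen=capacity)
        self.key_fn = key_fn  # maps an experience to a discrete bucket key

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Count how often each bucket occurs in the buffer.
        counts = {}
        for exp in self.buffer:
            k = self.key_fn(exp)
            counts[k] = counts.get(k, 0) + 1
        # Weight each experience by the inverse of its bucket frequency,
        # then sample with replacement using those weights.
        weights = [1.0 / counts[self.key_fn(exp)] for exp in self.buffer]
        return random.choices(list(self.buffer), weights=weights, k=batch_size)


# Usage: with 90 "common" and 10 "rare" experiences, uniform sampling
# would draw "rare" only ~10% of the time; inverse-frequency weighting
# draws each bucket ~50% of the time.
buf = FrequencyWeightedReplayBuffer(200, key_fn=lambda e: e[0])
for _ in range(90):
    buf.add("common", 0, 0.0, "common")
for _ in range(10):
    buf.add("rare", 1, 1.0, "rare")
batch = buf.sample(1000)
```

In a real DQN pipeline the bucketing would be replaced by whatever frequency measure the paper defines over continuous states; the sketch only shows the sampling-probability manipulation itself.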
