Paper

Peer-reviewed
2018
Parallelizing and optimizing neural Encoder–Decoder models without padding on multi-core architecture

Future Generation Computer Systems
  • Yuchen Qiao
  • Kazuma Hashimoto
  • Akiko Eriguchi
  • Haixia Wang
  • Dongsheng Wang
  • Yoshimasa Tsuruoka
  • Kenjiro Taura

Language
English
Publication type
Research paper (academic journal)
DOI
10.1016/j.future.2018.04.070
Publisher
Elsevier B.V.

Scaling up Artificial Intelligence (AI) algorithms to massive datasets is becoming crucial for improving their performance. In Machine Translation (MT), one of the most important research fields of AI, models based on Recurrent Neural Networks (RNNs) have shown state-of-the-art performance in recent years, and many researchers keep improving RNN-based models to achieve better accuracy on translation tasks. Most implementations of Neural Machine Translation (NMT) models employ a padding strategy when processing a mini-batch, so that all sentences in the mini-batch have the same length. This enables efficient utilization of caches and GPU/SIMD parallelism, but wastes computation time on the padded positions. In this paper, we implement and parallelize batch learning for a Sequence-to-Sequence (Seq2Seq) model, the most basic NMT model, without using a padding strategy. More specifically, when processing one sentence, our approach stacks the vectors representing the input words, as well as the neural network's states at different time steps, into matrices; as a result, it makes better use of the cache and optimizes the weight and bias updates during the back-propagation phase. Our experimental evaluation shows that our implementation achieves better scalability on multi-core CPUs. We also discuss our approach's potential to be applied to other implementations of RNN-based models.
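To make the trade-off described in the abstract concrete, here is a minimal NumPy sketch (not the paper's code; all variable names and sizes are illustrative assumptions). It counts the time-step computations a padded mini-batch performs versus a padding-free scheme, and shows how one sentence's per-time-step word vectors can be stacked into a matrix so each step remains a cache-friendly matrix operation:

```python
import numpy as np

# Hypothetical mini-batch: tokens per sentence (illustrative values).
sentence_lengths = [3, 7, 12, 5]

# Padding strategy: every sentence is processed for max(length) steps,
# so short sentences burn cycles on padded positions.
padded_steps = max(sentence_lengths) * len(sentence_lengths)

# Padding-free strategy (the idea the paper describes): each sentence is
# processed only for its own length.
padding_free_steps = sum(sentence_lengths)

print(padded_steps, padding_free_steps,
      padded_steps - padding_free_steps)  # → 48 27 21 wasted steps

# For one sentence of length L, stacking its word vectors into an
# (L, d) matrix turns L vector-matrix products into one matrix product.
d = 4                                     # assumed embedding/hidden size
L = sentence_lengths[0]
word_vectors = np.random.rand(L, d)       # rows = time steps
W = np.random.rand(d, d)                  # an assumed weight matrix
projected = word_vectors @ W              # shape (L, d), one GEMM call
```

The counting illustrates why padding is wasteful on variable-length batches, while the final matrix product shows the per-sentence batching that the approach exploits for cache locality.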

Links
DOI
https://doi.org/10.1016/j.future.2018.04.070
