September 12, 2017
Accelerating matrix multiplication in deep learning by using low-rank approximation
Proceedings - 2017 International Conference on High Performance Computing and Simulation, HPCS 2017
- Start page
- 186
- End page
- 192
- Language
- English
- Publication type
- Research paper (international conference proceedings)
- DOI
- 10.1109/HPCS.2017.37
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
Open-source deep learning frameworks such as TensorFlow, Caffe, and Torch are widely used around the world, so accelerating them is of great practical value. In these frameworks, much of the computation time is spent on convolution, and highly tuned libraries such as cuDNN play an important role in accelerating it. These libraries, however, compute the convolution with dense matrices, without approximation. In this research, we propose a method that introduces low-rank approximation, widely used in scientific and technical computing, into the convolution computation. Investigating the effect on the recognition accuracy of existing models, we find that the rank of the data matrices can be reduced by up to about 90% while keeping recognition accuracy within 2% of the baseline.
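The core idea can be sketched as follows: when convolution is lowered to a dense matrix product (e.g. via im2col), replacing one factor with a rank-r truncated SVD turns a single m×n by n×k product into two thin products, reducing the multiply cost from O(mnk) to O((m+n)rk). The sketch below is illustrative only, not the paper's implementation; the matrix sizes and the exactly rank-r test matrix are assumptions chosen to make the approximation exact.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, r = 64, 64, 32, 6

# A stands in for the dense (e.g. im2col) data matrix; here it is built to
# have rank exactly r so the rank-r approximation loses nothing.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
B = rng.standard_normal((n, k))

# Rank-r truncated SVD: A ≈ U_r @ diag(s_r) @ Vt_r
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]

# Two thin matmuls replace one dense matmul.
C_approx = U_r @ ((s_r[:, None] * Vt_r) @ B)
C_exact = A @ B

rel_err = np.linalg.norm(C_exact - C_approx) / np.linalg.norm(C_exact)
```

On real data matrices the singular values decay gradually rather than vanishing, so truncation trades a small `rel_err` (and hence a small accuracy drop) for the reduced cost, which is the trade-off the paper quantifies.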
- Links
- DOI
- https://doi.org/10.1109/HPCS.2017.37
- DBLP
- https://dblp.uni-trier.de/rec/conf/ieeehpcs/OsawaSNY17
- URL
- http://dblp.uni-trier.de/db/conf/ieeehpcs/ieeehpcs2017.html#conf/ieeehpcs/OsawaSNY17
- Scopus
- https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85032375697&origin=inward
- Scopus Citedby
- https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85032375697&origin=inward
- IDs
- DOI : 10.1109/HPCS.2017.37
- ISBN : 9781538632499
- ISBN : 9781538632505
- DBLP ID : conf/ieeehpcs/OsawaSNY17
- SCOPUS ID : 85032375697