MISC

May 19, 2018

The global optimum of shallow neural network is attained by ridgelet transform

Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata

Language
Publication type
Institutional technical report, technical report, preprint, etc.

We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform. A series of computational experiments shows an interesting similarity between the scatter plot of hidden parameters in a shallow neural network after BP training and the spectrum of the ridgelet transform. By introducing a continuous model of neural networks, we reduce the training problem to a convex optimization in an infinite-dimensional Hilbert space and obtain an explicit expression for the global optimizer via the ridgelet transform.
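
For reference, the ridgelet transform mentioned in the abstract can be sketched in the standard notation of the literature. This is a minimal sketch assuming Sonoda–Murata-style integral-representation conventions; the symbols f, psi, eta, gamma, and mu below are assumptions for illustration, not quoted from this paper:

% Ridgelet transform of f : R^m -> R with respect to a ridgelet psi;
% (a, b) ranges over the hidden-parameter space R^m x R.
\[
  \mathcal{R}_\psi f(a,b) = \int_{\mathbb{R}^m} f(x)\,\overline{\psi(a \cdot x - b)}\,\mathrm{d}x
\]
% Continuous (integral-representation) shallow network with activation
% eta and coefficient function gamma over hidden parameters; training
% is then convex in gamma, and the paper expresses the global optimizer
% of this continuous problem via the ridgelet transform of the target.
\[
  S[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a,b)\,\eta(a \cdot x - b)\,\mathrm{d}\mu(a,b)
\]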

Link information
arXiv
http://arxiv.org/abs/arXiv:1805.07517
URL
http://arxiv.org/abs/1805.07517v3
ID information
  • arXiv ID : arXiv:1805.07517
