May 19, 2018
The global optimum of shallow neural network is attained by ridgelet transform
- Language
- Publication type
- Institutional technical report / technical report / preprint
We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform. A series of computational experiments shows an interesting similarity between the scatter plot of hidden parameters in a shallow neural network after BP training and the spectrum of the ridgelet transform. By introducing a continuous model of neural networks, we reduce the training problem to a convex optimization in an infinite-dimensional Hilbert space and obtain an explicit expression for the global optimizer via the ridgelet transform.
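For reference, a sketch of the ridgelet transform in the formulation standard in this line of work; the exact normalization, the symbols $\psi$, $\eta$, $\mathcal{R}$, $\mathcal{R}^{*}$, and the pairing condition below are my notation and may differ from the paper's. It maps a function $f$ on $\mathbb{R}^{m}$ to a function of the hidden parameters $(a,b)$ of a shallow network:

\[
\mathcal{R}f(a,b) = \int_{\mathbb{R}^{m}} f(x)\,\overline{\psi(a \cdot x - b)}\,\mathrm{d}x,
\qquad (a,b) \in \mathbb{R}^{m} \times \mathbb{R},
\]

where $\psi$ is a ridgelet function paired with the activation $\eta$ through an admissibility condition. The dual transform

\[
\mathcal{R}^{*}T(x) = \int_{\mathbb{R}^{m} \times \mathbb{R}} T(a,b)\,\eta(a \cdot x - b)\,\mathrm{d}a\,\mathrm{d}b
\]

has exactly the form of a continuous (infinitely wide) shallow network with activation $\eta$, which is how the output weight assigned to each hidden parameter $(a,b)$ in the global optimizer of the continuous model can be expressed through the ridgelet spectrum $\mathcal{R}f(a,b)$.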
- ID information
- arXiv ID: arXiv:1805.07517