Paper

Peer-reviewed · Last author · Corresponding author · International journal
February 2023

On the Compressive Power of Boolean Threshold Autoencoders.

IEEE Trans. Neural Networks Learn. Syst.
Avraham A. Melkman, Sini Guo, Wai-Ki Ching, Pengyu Liu 0002, Tatsuya Akutsu

Volume: 34
Issue: 2
First page: 921
Last page: 931
Language: English
Publication type: Research paper (academic journal)
DOI
10.1109/TNNLS.2021.3104646

An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this article, we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers that are required to ensure that each vector in a given set of distinct input binary vectors is transformed back to its original. We show that for any set of n distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., the size of the middle layer is logarithmic in n), but that there is a set of n vectors for which there is no three-layer autoencoder with a middle layer of logarithmic size. In addition, we present a kind of tradeoff: if the compression ratio is allowed to be considerably larger than the optimal, then there is a five-layer autoencoder. We also study the numbers of nodes and layers required only for encoding, and the results suggest that the decoding part is the bottleneck of autoencoding. For example, there is always a three-layer Boolean threshold encoder that compresses n vectors into a dimension that is twice the logarithm of n.
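As a rough illustration of the objects the abstract discusses (a sketch only, not the paper's layered threshold-network construction): a Boolean threshold unit fires exactly when a weighted sum of its binary inputs reaches its threshold, and the autoencoding requirement is that each of n distinct binary vectors passes through a middle layer of about log2(n) bits and is recovered exactly. The `encode`/`decode` index-coding helpers below are hypothetical names introduced here to show why a logarithmic middle layer is information-theoretically sufficient.

```python
from math import ceil, log2

def threshold_gate(weights, threshold, x):
    """Boolean threshold unit: outputs 1 iff the weighted sum of inputs
    reaches the threshold, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

def encode(vectors):
    """Assign each of the n distinct binary vectors a code of
    ceil(log2 n) bits -- the optimal middle-layer width."""
    width = max(1, ceil(log2(len(vectors))))
    return {v: tuple((i >> b) & 1 for b in range(width))
            for i, v in enumerate(vectors)}

def decode(codes):
    """Invert the code assignment, recovering each original vector."""
    return {c: v for v, c in codes.items()}

# n = 4 distinct 3-bit vectors compress to a 2-bit middle layer
# (ceil(log2 4) = 2) and decode back to themselves.
vectors = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
codes = encode(vectors)
assert all(decode(codes)[codes[v]] == v for v in vectors)
```

The paper's contribution is showing how many *threshold layers* are needed to realize such a lossless map and its inverse; the sketch above only demonstrates the counting argument behind the logarithmic compression ratio.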

Links
DOI
https://doi.org/10.1109/TNNLS.2021.3104646
DBLP
https://dblp.uni-trier.de/rec/journals/tnn/MelkmanGCLA23
PubMed
https://www.ncbi.nlm.nih.gov/pubmed/34428155
URL
https://dblp.uni-trier.de/db/journals/tnn/tnn34.html#MelkmanGCLA23
ID information
  • DOI : 10.1109/TNNLS.2021.3104646
  • DBLP ID : journals/tnn/MelkmanGCLA23
  • PubMed ID : 34428155

Export
BibTeX RIS