February 2017
Generalized Sparse Learning of Linear Models Over the Complete Subgraph Feature Set
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
- Volume
- 39
- Issue
- 3
- First page
- 617
- Last page
- 624
- Language
- English
- Publication type
- Research paper (academic journal)
- DOI
- 10.1109/TPAMI.2016.2567399
- Publisher
- IEEE COMPUTER SOC
Supervised learning over graphs is an intrinsically difficult problem: it requires simultaneously learning relevant features from the complete subgraph feature set, for which enumerating all subgraph features occurring in the given graphs is practically intractable due to combinatorial explosion. We show that 1) existing graph supervised learning studies, such as Adaboost, LPBoost, and LARS/LASSO, can be viewed as variations of a branch-and-bound algorithm with simple bounds, which we call Morishita-Kudo bounds; 2) we present a direct sparse optimization algorithm for generalized problems with arbitrary twice-differentiable loss functions, to which the Morishita-Kudo bounds cannot be directly applied; and 3) we experimentally show that i) our direct optimization method improves the convergence rate and stability, ii) L1-penalized logistic regression (L1-LogReg) by our method identifies a smaller subgraph set while maintaining competitive predictive performance, and iii) the subgraphs learned by L1-LogReg are more size-balanced than those of competing methods, which are biased toward small subgraphs.
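To make the L1-LogReg objective mentioned in the abstract concrete, the following is a minimal sketch, not the paper's branch-and-bound procedure over the complete subgraph feature set: it solves L1-penalized logistic regression by proximal gradient descent (ISTA) over an ordinary dense feature matrix, where binary indicator columns stand in for precomputed subgraph-occurrence features. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def l1_logreg_ista(X, y, lam=0.1, step=0.1, iters=1000):
    """L1-penalized logistic regression via proximal gradient (ISTA).

    X    : (n, d) feature matrix; think of each column as a binary
           subgraph-occurrence indicator (an illustrative stand-in).
    y    : (n,) labels in {0, 1}.
    lam  : L1 penalty weight; larger values yield sparser weights.
    step : gradient step size (must be small enough for descent).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad = X.T @ (p - y) / n               # gradient of the mean logistic loss
        z = w - step * grad                    # plain gradient step
        # soft-thresholding: the proximal operator of the L1 penalty;
        # this is what drives many coordinates to exactly zero
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w
```

The soft-thresholding step is why the fitted model selects a small feature (subgraph) set: any coordinate whose gradient signal is weaker than `step * lam` is zeroed exactly, rather than merely shrunk.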
- Link information
- ID information
- DOI : 10.1109/TPAMI.2016.2567399
- ISSN : 0162-8828
- eISSN : 1939-3539
- Web of Science ID : WOS:000395555100015