2015
Extraction of Key Segments from Day-Long Sound Data
HCI International 2015 - Posters’ Extended Abstracts (Part I), CCIS 528
- Volume: 528
- First page: 620
- Last page: 626
- Language: English
- Publication type: Paper in proceedings (book)
- DOI: 10.1007/978-3-319-21380-4_105
- Publisher: SPRINGER-VERLAG BERLIN
We propose a method for extracting particular sound segments from sound recorded over the course of a day, in order to provide segments that can be used to facilitate memory. To extract the important parts of the sound data, the proposed method exploits human behavior captured through a multisensing approach. To evaluate its performance, we conducted experiments using sound, acceleration, and global positioning system data collected by five participants over approximately two weeks. The experimental results are summarized as follows: (1) diverse sounds can be extracted by dividing a day into scenes using the acceleration data; (2) sound recorded in unusual places is preferable to sound recorded in usual places; and (3) speech is preferable to nonspeech sound.
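The scene-division step mentioned in result (1) can be illustrated with a minimal sketch: split the day wherever the windowed activity level of the acceleration signal changes state. The window size, the standard-deviation activity measure, and the threshold below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def segment_scenes(accel, window=50, threshold=0.5):
    """Split a 3-axis acceleration stream into 'scenes': contiguous
    runs whose windowed activity level stays on the same side of a
    threshold (hypothetical parameters, for illustration only)."""
    mag = np.linalg.norm(accel, axis=1)            # per-sample magnitude
    n = len(mag) // window
    # activity level = standard deviation of magnitude per window
    activity = mag[: n * window].reshape(n, window).std(axis=1)
    active = activity > threshold                  # moving vs. still
    # scene boundaries wherever the activity state flips
    bounds = [0] + [i for i in range(1, n) if active[i] != active[i - 1]] + [n]
    return [(bounds[i] * window, bounds[i + 1] * window)
            for i in range(len(bounds) - 1)]

# toy example: 200 nearly still samples followed by 200 active samples
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.05, (200, 3))
moving = rng.normal(0.0, 2.0, (200, 3))
scenes = segment_scenes(np.vstack([still, moving]))
print(scenes)  # two scenes, split at sample 200
```

Each returned scene would then index the sound recording, so that candidate segments can be drawn from every scene rather than from the loudest parts of the day alone.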
- Links
- DOI: https://doi.org/10.1007/978-3-319-21380-4_105
- DBLP: https://dblp.uni-trier.de/rec/conf/hci/KasaiHA15
- Web of Science: https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=JSTA_CEL&SrcApp=J_Gate_JST&DestLinkType=FullRecord&KeyUT=WOS:000377404100105&DestApp=WOS_CPL
- URL: http://dblp.uni-trier.de/db/conf/hci/hci2015-27.html#conf/hci/KasaiHA15
- Identifiers
- DOI: 10.1007/978-3-319-21380-4_105
- ISSN: 1865-0929
- eISSN: 1865-0937
- DBLP ID: conf/hci/KasaiHA15
- Web of Science ID: WOS:000377404100105