
Peer-reviewed International journal
2015

Extraction of Key Segments from Day-Long Sound Data

HCI International 2015 - Posters’ Extended Abstracts (Part I), CCIS 528
  • Akinori Kasai
  • Sunao Hara
  • Masanobu Abe

Volume
528
First page
620
Last page
626
Language
English
Publishing type
Part of collection (book)
DOI
10.1007/978-3-319-21380-4_105
Publisher
Springer-Verlag Berlin

We propose a method to extract particular sound segments from sound recorded over the course of a day, in order to provide sound segments that can be used to facilitate memory recall. To extract the important parts of the sound data, the proposed method exploits human behavior estimated through a multi-sensing approach. To evaluate the performance of the proposed method, we conducted experiments using sound, acceleration, and Global Positioning System (GPS) data collected by five participants over approximately two weeks. The experimental results are summarized as follows: (1) a variety of sounds can be extracted by dividing a day into scenes using the acceleration data; (2) sound recorded in unusual places is preferable to sound recorded in usual places; and (3) speech is preferable to non-speech sound.
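The abstract's point (1), dividing a day into scenes using acceleration data, could be approximated by detecting changes in activity level. The paper does not specify its segmentation algorithm; the sketch below is a hypothetical illustration that starts a new scene whenever the mean acceleration magnitude of a window shifts markedly from the previous window (the function name, window size, and threshold are all assumptions, not taken from the paper).

```python
import statistics

def segment_scenes(accel_magnitudes, window=60, threshold=0.5):
    """Split a day-long accelerometer magnitude trace into scenes.

    A new scene starts whenever the mean magnitude of the current
    window differs from the previous window's mean by more than
    `threshold` (i.e., an activity change such as walking -> sitting).
    Returns the list of sample indices where scenes begin.
    """
    boundaries = [0]
    prev_mean = None
    for start in range(0, len(accel_magnitudes), window):
        chunk = accel_magnitudes[start:start + window]
        if not chunk:
            break
        mean = statistics.fmean(chunk)
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            boundaries.append(start)
        prev_mean = mean
    return boundaries

# Example: quiet -> active -> quiet yields three scenes.
trace = [0.1] * 120 + [1.5] * 120 + [0.1] * 120
print(segment_scenes(trace))  # -> [0, 120, 240]
```

Sound segments could then be cut at these scene boundaries before ranking them by the place- and speech-based preferences reported in results (2) and (3).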

Link information
DOI
https://doi.org/10.1007/978-3-319-21380-4_105
DBLP
https://dblp.uni-trier.de/rec/conf/hci/KasaiHA15
Web of Science
https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=JSTA_CEL&SrcApp=J_Gate_JST&DestLinkType=FullRecord&KeyUT=WOS:000377404100105&DestApp=WOS_CPL
URL
http://dblp.uni-trier.de/db/conf/hci/hci2015-27.html#conf/hci/KasaiHA15
ID information
  • DOI : 10.1007/978-3-319-21380-4_105
  • ISSN : 1865-0929
  • eISSN : 1865-0937
  • DBLP ID : conf/hci/KasaiHA15
  • Web of Science ID : WOS:000377404100105
