Papers

Peer-reviewed
2015

Learning curves for automating content analysis: How much human annotation is needed?

2015 IIAI 4TH INTERNATIONAL CONGRESS ON ADVANCED APPLIED INFORMATICS (IIAI-AAI)
  • Emi Ishita
  • Douglas W. Oard
  • Kenneth R. Fleischmann
  • Yoichi Tomiura
  • Yasuhiro Takayama
  • An-Shou Cheng

First page
171
Last page
176
Language
English
Publication type
Research paper (international conference proceedings)
DOI
10.1109/IIAI-AAI.2015.295
Publisher
IEEE

In this paper, we explore the potential for reducing human effort when coding text segments for use in content analysis. The key idea is to do some coding by hand, to use the results of that initial effort as training data, and then to code the remainder of the content automatically. The test collection includes 102 written prepared statements about Net neutrality from public hearings held by the U.S. Congress and the U.S. Federal Communications Commission (FCC). Six categories were used in this analysis: wealth, social order, justice, freedom, innovation, and honor. A support vector machine (SVM) classifier and a Naive Bayes (NB) classifier were trained on manually annotated sentences from between one and 51 documents and tested on a held-out set of 51 documents. The results show that the inflection point for a standard measure of classifier accuracy (F1) occurs early: with only 30 training documents, the SVM classifier reaches at least 85% of its best achievable result, and the NB classifier at least 88%. With the exception of honor, the results suggest that machine classification could reasonably be scaled up to larger collections of similar documents without additional human annotation effort.
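For readers who want to reproduce the shape of such a learning curve, the following is a minimal sketch assuming scikit-learn. The document/sentence input format, the TF-IDF features, and the LinearSVC/MultinomialNB model choices are illustrative assumptions for this sketch, not the authors' exact pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics import f1_score

    def learning_curve(train_docs, train_labels, test_docs, test_labels, sizes):
        """For each n in sizes, train on sentences from the first n annotated
        documents and score macro-F1 on the held-out test documents.
        train_docs/test_docs: lists of documents, each a list of sentences;
        train_labels/test_labels: matching lists of per-sentence labels."""
        # Flatten the held-out test set once; it stays fixed across curve points.
        test_sents = [s for doc in test_docs for s in doc]
        test_y = [lab for doc in test_labels for lab in doc]
        scores = {}
        for n in sizes:
            # Sentences and labels from the first n training documents.
            sents = [s for doc in train_docs[:n] for s in doc]
            y = [lab for doc in train_labels[:n] for lab in doc]
            vec = TfidfVectorizer()  # bag-of-words features (an assumption)
            X_train = vec.fit_transform(sents)
            X_test = vec.transform(test_sents)
            for name, clf in (("SVM", LinearSVC()), ("NB", MultinomialNB())):
                clf.fit(X_train, y)
                scores[(name, n)] = f1_score(
                    test_y, clf.predict(X_test), average="macro")
        return scores

    # e.g., mirroring the paper's setup of 1 to 51 training documents:
    # scores = learning_curve(train_docs, train_labels,
    #                         test_docs, test_labels, sizes=range(1, 52))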

Link information
DOI
https://doi.org/10.1109/IIAI-AAI.2015.295
Web of Science
https://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=JSTA_CEL&SrcApp=J_Gate_JST&DestLinkType=FullRecord&KeyUT=WOS:000380532300031&DestApp=WOS_CPL
ID information
  • DOI : 10.1109/IIAI-AAI.2015.295
  • Web of Science ID : WOS:000380532300031
