MISC

January 1, 2012

Detecting visual text

NAACL HLT 2012 - 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
  • Jesse Dodge
  • Amit Goyal
  • Xufeng Han
  • Alyssa Mensch
  • Margaret Mitchell
  • Karl Stratos
  • Kota Yamaguchi
  • Yejin Choi
  • Hal Daumé
  • Alexander C. Berg
  • Tamara L. Berg

Start page
762
End page
772

© 2012 Association for Computational Linguistics. When people describe a scene, they often include information that is not visually apparent, sometimes based on background knowledge, sometimes to tell a story. We aim to separate visual text - descriptions of what is being seen - from non-visual text in natural images and their descriptions. To do so, we first concretely define what it means to be visual, annotate visual text, and then develop algorithms to automatically classify noun phrases as visual or non-visual. We find that using text alone, we are able to achieve high accuracy on this task, and that incorporating features derived from computer vision algorithms improves performance. Finally, we show that we can reliably mine visual nouns and adjectives from large corpora and that we can use these effectively in the classification task.

Link information
URL
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84901455535&origin=inward
ID information
  • SCOPUS ID : 84901455535
