Paper

November 2021

Semantically Congruent Bimodal Presentation with Divided-Modality Attention Accelerates Unisensory Working Memory Retrieval

Perception
Hongtao Yu*, Aijun Wang*, Qingqing Li, Yulong Liu, Jiajia Yang, Satoshi Takahashi, Yoshimichi Ejima, Ming Zhang, Jinglong Wu

Volume
50
Issue
11
Start page
917
End page
932
Language
English
Publication type
Research paper (academic journal)
DOI
10.1177/03010066211052943
Publisher
SAGE Publications

Although previous studies have shown that semantic multisensory integration can be differentially modulated by attentional focus, it remains unclear whether attentionally mediated multisensory perceptual facilitation can affect subsequent cognitive performance. Using a delayed matching-to-sample paradigm, the present study investigated the effect of semantically congruent bimodal presentation on subsequent unisensory working memory (WM) performance while manipulating attentional focus. The results showed that unisensory WM retrieval was faster in the semantically congruent condition than in the semantically incongruent multisensory encoding condition; however, this effect was found only in the divided-modality attention condition. This result indicates that a robust multisensory representation was constructed during semantically congruent multisensory encoding with divided-modality attention, and that this representation then accelerated unisensory WM performance, particularly auditory WM retrieval. In addition, unisensory WM retrieval was faster overall under the modality-specific selective attention condition than under the divided-modality condition, indicating that dividing attention across two modalities demanded more central executive resources to encode and integrate crossmodal information and to maintain the constructed multisensory representation, leaving few resources for WM retrieval. Finally, the present findings may support the amodal view that WM has an amodal central storage component used to maintain modality-based, attention-optimized multisensory representations.

Link information
DOI
https://doi.org/10.1177/03010066211052943
URL
http://journals.sagepub.com/doi/pdf/10.1177/03010066211052943
URL
http://journals.sagepub.com/doi/full-xml/10.1177/03010066211052943
ID information
  • DOI : 10.1177/03010066211052943
  • ISSN : 0301-0066
  • eISSN : 1468-4233
