TY - JOUR
T1 - The unique contribution of uncertainty reduction during naturalistic language comprehension
AU - Song, Ming
AU - Wang, Jing
AU - Cai, Qing
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/12
Y1 - 2024/12
N2 - Language comprehension is an incremental process that involves prediction. Delineating the various mental states during such a process is critical to understanding the relationship between human cognition and the properties of language. Entropy reduction, which indicates the dynamic decrease of uncertainty as language input unfolds, has been recognized as effective in predicting neural responses during comprehension. According to the entropy reduction hypothesis (Hale, 2006), entropy reduction is related to the processing difficulty of a word, an effect that may overlap with other well-documented information-theoretical metrics such as surprisal or next-word entropy. However, processing difficulty has often been conflated with the information conveyed by a word, particularly owing to a lack of neural differentiation. We propose that entropy reduction represents the cognitive neural process of information gain, which can be dissociated from processing difficulty. This study characterized various information-theoretical metrics using GPT-2 and identified the unique effects of entropy reduction in predicting fMRI time series acquired during language comprehension. Beyond the effects of surprisal and entropy, entropy reduction was associated with activations in the left inferior frontal gyrus, bilateral ventromedial prefrontal cortex, insula, thalamus, basal ganglia, and middle cingulate cortex. The reduction of uncertainty, rather than its fluctuation, proved to be an effective factor in modeling neural responses. The neural substrates underlying the reduction of uncertainty may reflect the brain's desire for information regardless of processing difficulty.
AB - Language comprehension is an incremental process that involves prediction. Delineating the various mental states during such a process is critical to understanding the relationship between human cognition and the properties of language. Entropy reduction, which indicates the dynamic decrease of uncertainty as language input unfolds, has been recognized as effective in predicting neural responses during comprehension. According to the entropy reduction hypothesis (Hale, 2006), entropy reduction is related to the processing difficulty of a word, an effect that may overlap with other well-documented information-theoretical metrics such as surprisal or next-word entropy. However, processing difficulty has often been conflated with the information conveyed by a word, particularly owing to a lack of neural differentiation. We propose that entropy reduction represents the cognitive neural process of information gain, which can be dissociated from processing difficulty. This study characterized various information-theoretical metrics using GPT-2 and identified the unique effects of entropy reduction in predicting fMRI time series acquired during language comprehension. Beyond the effects of surprisal and entropy, entropy reduction was associated with activations in the left inferior frontal gyrus, bilateral ventromedial prefrontal cortex, insula, thalamus, basal ganglia, and middle cingulate cortex. The reduction of uncertainty, rather than its fluctuation, proved to be an effective factor in modeling neural responses. The neural substrates underlying the reduction of uncertainty may reflect the brain's desire for information regardless of processing difficulty.
KW - Entropy reduction
KW - Language comprehension
KW - Naturalistic stimuli
KW - Prediction
KW - Surprisal
KW - fMRI
UR - https://www.scopus.com/pages/publications/85207105523
U2 - 10.1016/j.cortex.2024.09.007
DO - 10.1016/j.cortex.2024.09.007
M3 - Article
C2 - 39447486
AN - SCOPUS:85207105523
SN - 0010-9452
VL - 181
SP - 12
EP - 25
JO - Cortex
JF - Cortex
ER -