
Conference Paper Published

Research

Not Every Metric is Equal: Cognitive Models for Predicting N400 and P600 Components During Reading Comprehension

Salicchi, L., & Hsu, Y. Y. (2025). Not Every Metric is Equal: Cognitive Models for Predicting N400 and P600 Components During Reading Comprehension. In Proceedings of the 31st International Conference on Computational Linguistics (pp. 3648–3654).
 
URL: https://aclanthology.org/2025.coling-main.246/

 

Abstract

In recent years, numerous studies have sought to understand the cognitive dynamics underlying language processing by modeling reading times and ERP amplitudes with computational metrics such as surprisal. In the present paper, we examine the predictive power of surprisal, entropy, and a novel metric based on semantic similarity for the N400 and P600 components. Our experiments, conducted with Mandarin Chinese materials, revealed three key findings: 1) expectancy plays a primary role for the N400; 2) the P600 also reflects the cognitive effort required to evaluate linguistic input semantically; and 3) during the time window of interest, information uncertainty influences language processing the most. Our findings show how computational metrics that capture distinct cognitive dimensions can effectively address psycholinguistic questions.
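For readers unfamiliar with the two information-theoretic metrics named above, the sketch below shows their standard definitions: a word's surprisal is the negative log-probability a language model assigns it in context, and entropy measures the uncertainty of the model's next-word distribution. This is a minimal illustration with a toy, made-up distribution, not the paper's actual models or materials.

```python
import math

def surprisal(p: float) -> float:
    """Surprisal (in bits) of a word with contextual probability p: -log2(p)."""
    return -math.log2(p)

def entropy(probs) -> float:
    """Shannon entropy (in bits) of a next-word probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy next-word distribution (hypothetical values, for illustration only)
dist = {"cat": 0.5, "dog": 0.25, "fish": 0.25}

print(surprisal(dist["cat"]))    # 1.0 bit: an expected word carries low surprisal
print(entropy(dist.values()))    # 1.5 bits: uncertainty over the continuation
```

Intuitively, surprisal tracks how unexpected the word that actually appeared is, while entropy tracks how uncertain the comprehender is before seeing it, which is why the two can dissociate in their relation to ERP components.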
