
Book Chapter Published


Linguistics in the Age of Large Language Models: Is A Reconciliation Possible?

Chersoni, E., & Huang, C.-R. (2025). Linguistics in the Age of Large Language Models: Is A Reconciliation Possible? In D. Bradley, K. Dziubalska-Kołaczyk, C. Hamans, I.-H. Lee, & F. Steurs (Eds.), Contemporary Linguistics: Integrating Languages, Communities, and Technologies (pp. 201-205). Brill.
 
DOI: https://doi.org/10.1163/9789004715608_017

 

Abstract

The outlook of quantitative and computational linguistics changed drastically with the emergence of Large Language Models (LLMs). These models — computational programs trained on massive amounts of textual data for next-word prediction — have achieved impressive performance on a large number of multilingual natural language understanding benchmarks. Under the more familiar name of Generative AI (GenAI), LLMs have also received unprecedented media attention around the world and made Artificial General Intelligence (AGI) a household term. This is not surprising, as language is the quintessential feature of human intelligence and provides the most tangible empirical evidence of human activities. The capacity of LLM-based systems such as ChatGPT to interact with humans in natural conversational settings allows lay users all around the world to chat with them as if they were sentient beings. Looking ahead as linguists, we would like to know what Large Language Models mean for linguistics. In particular, can LLMs provide plausible models of human linguistic behaviour? Can they model the neural mechanisms of language processing? Can we map their representations onto human-interpretable features?
