Workshop ‘Bilingual language processing: Bridging human and artificial cognition’ (BiLaP 2026)
Background
The study of how the human brain manages and uses multiple languages has been a central topic in cognitive science, psychology, and psycholinguistics for decades. How bilinguals activate and control their languages, which cognitive skills are enhanced by this mental juggling, and what limits constrain language switching and mixing are just a few of the questions at the forefront of this research (Blanco-Elorrieta & Pylkkänen 2017, Parafita Couto et al. 2021). With the recent advances in Large Language Models (LLMs), there has been renewed interest in these processes and in how their manifestations may differ between humans and models. In BiLaP 2026, we underscore the need to study bilingual language processing by testing not only diverse languages but also closely related ones, in order to understand whether language family proxies exist in Artificial Intelligence (e.g., transfer from a larger language to smaller languages within the same family or branch).
Some of the guiding questions of this workshop are: What are the limits of code-switching in humans, and what are they in LLMs? Are there language-specific neurons or ‘storage rooms’ in LLMs, such that languages are kept separate (Cao et al. 2024, Tang et al. 2024)? How does this picture from models compare to the bilingual human brain? How do bilingual humans differ from bilingual LLMs on linguistic tasks? How well do LLMs perform linguistically in languages other than English? Do they have a default language for thought (Etxaniz et al. 2024)?
These are some of the topics that will be covered at BiLaP. Research on regional, non-standard, and/or minority languages, pertaining either to human cognition or to artificial language processing, is especially welcome.
References
- Blanco-Elorrieta, E. & Pylkkänen, L. (2017). Bilingual language switching in the lab vs. in the wild: The spatio-temporal dynamics of adaptive language control. The Journal of Neuroscience, 37, 9022–9036.
- Cao, P. et al. (2024). One mind, many tongues: A deep dive into language-agnostic knowledge neurons in Large Language Models. arXiv:2411.1740.
- Etxaniz, J. et al. (2024). Do multilingual Language Models think better in English? Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
- Parafita Couto, M. C., Romeli, M. G. & Bellamy, K. (2021). Code-switching at the interface between language, culture, and cognition. Lapurdum: Basque Studies Review.
- Tang, T. et al. (2024). Language-specific neurons: The key to multilingual capabilities in Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
 
How to participate
Please submit your abstract through EasyChair.
Important dates
Deadline for submission: 9 January 2026
Notification: 23 January 2026
Invited Speakers
Esti Blanco-Elorrieta
Jon Andoni Duñabeitia
Carmen Parafita-Couto
Early Career Invited Speakers
Julen Etxaniz
Camilla Masullo
Organizing Committee
Urtzi Etxeberria, CNRS-IKER
Evelina Leivada, UAB/ICREA
Tamara Serrano, UAB