Raquel Montero (UAB): ‘Scalar Implicatures in Multilingual Large Language Models’
CLT Seminar
Friday, 6 March 2026
Time: 15:30
Room 202, Facultat de Filosofia i Lletres
Teams link
Abstract:
Quantification has proven to be a particularly difficult linguistic phenomenon for Large Language Models (LLMs) to master (Qiu et al. 2023, Testoni et al. 2024, Enyan et al. 2024). However, given that quantification interfaces with the logical, pragmatic, and numerical domains, the exact reasons for the models' subpar performance remain unclear.
This talk focuses on the role of Scalar Implicatures (SIs) and the mechanisms and cues deployed by humans and LLMs to arrive at the pragmatic interpretation of quantifiers. Previous research has suggested that models trained via reinforcement learning can achieve human-like performance in SI derivation (Ruis et al. 2023), but an in-depth investigation comparing the effects of different fine-tuning techniques across a wide set of languages and models has not been carried out. Moreover, the impact of word frequency in the tested stimuli remains underexplored. This work fills these gaps in the literature and presents the results of an online experiment testing SI derivation in 5 LLMs across 6 languages.
The results pave the way for addressing the nature of multilingual LLMs (MLLMs) as semantic and pragmatic agents, and help elucidate the mechanisms underlying SI derivation in artificial agents.


