Tamara Serrano (UAB): ‘When Machines Suggest: Cognitive Responses to Gendered Language in Autocomplete Systems’
CLT Seminar
Friday, 27 March 2026
Time: 15:30
Room 202, Facultat de Filosofia i Lletres
Teams link
Abstract:
As language technologies such as autocomplete and generative AI become part of everyday communicative practice, they invite us to rethink what it means to communicate with non-human interlocutors. These systems do not simply deliver information; they produce linguistic outputs shaped by patterns in large text corpora and human usage, raising questions about how such outputs participate in human language processing.
In this study, we examine cognitive responses to statements about men and women generated through an AI-mediated autocomplete system. Rather than treating these systems as passive conduits of information, we consider them active communicative partners whose suggestions may influence how language is perceived and understood. In a timed experimental task, participants evaluated statements drawn from autocomplete suggestions while their reaction times were recorded. This setup allows us to explore whether repeated contact with AI-mediated content alters processing dynamics in ways that are sensitive to semantic, lexical, and social aspects of the stimuli.
By focusing on reaction times alongside overt evaluations, we aim to uncover subtle traces of how AI systems may scaffold or interfere with human judgment. From a gender-oriented perspective, this approach also speaks to broader concerns about the circulation of socially loaded representations through language technologies.
Situating our work within emerging research on human-machine communication, this project treats autocomplete not as an auxiliary tool but as a non-human participant in linguistic practice. Understanding how such technologies shape cognitive patterns contributes to ongoing debates on the roles we assign to machines in communicative ecosystems, including their potential to normalize, reinforce, or disrupt established language use.


