8 June 2022

Authors:
Tsiakmakis, E. & M.T. Espinal
Title:
Expletiveness in grammar and beyond
Publisher: Glossa: a journal of general linguistics, 7(1)
Date of publication: May 2022
Full text
This paper sets out to find the defining characteristics of so-called expletive categories and the consequences that the existence of such categories has for Universal Grammar. Looking into different instantiations of expletive subjects and impersonal pronouns, definite articles, negative markers, and plural markers in various natural languages, we reach the following generalizations: (i) expletive categories are deficient functional elements interpreted as introducing an identity function at the level of semantic representation; (ii) they can be divided into syntactic expletives, which occur to satisfy some syntactic relationship with another item in the clause, and semantic expletives, which stand in a semantic dependency with some c-commanding category; and (iii) expletive categories tend to develop additional meaning components that are computed beyond core grammar, at the level where speech act-related information is encoded. Our discussion reveals that all categories traditionally considered expletive in the linguistic literature are interpretable in grammar or beyond and thus do not violate Chomsky's Full Interpretation Principle. We conclude that there are no expletive elements in natural languages and that expletiveness is not a grammatically relevant concept.
3 September 2024

Authors:
Anna Gavarró & Alejandra Keidel
Title:
Subject-verb agreement: Three experiments on Catalan
Publisher: First Language (Sage Journals)
Date of publication: August 2024
Pages: 22
Full text
This study investigates the syntactic parsing abilities of children and infants exposed to Catalan as their first language. Focusing first on ages 3 to 6, we conducted two sentence-picture matching tasks. In Experiment 1, 3- to 4-year-old children failed to identify singular third-person subjects in null-subject sentences, although they performed above chance in all other scenarios, including plural third-person subjects and sentences with overt full DP subjects. This is reminiscent of the results of Pérez-Leroux for Spanish. In Experiment 2, with the same design but involving numeral distractors, children's performance was above chance across all conditions from ages 3 to 4. In Experiment 3, we moved to a younger age range with the help of eye-tracking techniques. The findings revealed that infants at 22 months were able to parse subject–verb agreement in sentences with third-person null subjects, and at 19 months there was evidence of parsing of third-person plural null subjects. These findings are inconsistent with the view that children struggle with syntactic agreement computation. We argue that the instances of underperformance in subject–verb agreement parsing reported in the literature often stem from task-related and pragmatic issues rather than from a core syntactic delay. If so, the putative asymmetry between early production of verbal inflection and late comprehension disappears; rather, the results suggest early establishment of matching operations and mastery of language-specific agreement properties before production starts.
26 August 2024

Authors:
Evelina Leivada, Fritz Günther & Vittoria Dentella
Title:
Reply to Hu et al.: Applying different evaluation standards to humans vs. Large Language Models overestimates AI performance
Publisher: PNAS 121(36), e2406752121 (National Academy of Sciences)
Date of publication: 26 August 2024
Full text
Dentella et al. (DGL) argued that three Large Language Models (LLMs) perform almost at chance in grammaticality judgment tasks, while revealing an absence of response stability (1). Hu et al.'s (HEA) "re-evaluation" led to different conclusions (2). HEA argue that (i) "LLMs align with human judgments on key grammatical constructions," (ii) LLMs show "human-like grammatical generalization capabilities," and (iii) grammaticality judgments (GJs) are not the best evaluation method because they "systematically underestimate" these capabilities. While HEA's aim to elucidate the abilities of LLMs is laudable, their claims are fraught with interpretative difficulties.
19 April 2024

Authors:
Evelina Leivada, Vittoria Dentella & Fritz Günther
Title:
Evaluating the language abilities of Large Language Models vs. humans: Three caveats
Publisher: Biolinguistics, vol. 18 (PsychOpen)
Date of publication: 19 April 2024
Full text
We identify and analyze three caveats that may arise when analyzing the linguistic abilities of Large Language Models. The problem of unlicensed generalizations refers to the danger of interpreting performance in one task as predictive of a model's overall capabilities, on the assumption that, because a specific task performance is indicative of certain underlying capabilities in humans, the same association holds for models. The human-like paradox refers to the problem of lacking human comparisons while at the same time attributing human-like abilities to the models. Lastly, the problem of double standards refers to the use of tasks and methodologies that either cannot be applied to humans or are evaluated differently for models vs. humans. While we recognize the impressive linguistic abilities of LLMs, we conclude that specific claims about the models' human-likeness in the grammatical domain are premature.