Tsiakmakis & Espinal (2022). Expletiveness in grammar and beyond

Authors:

Tsiakmakis, E. & M.T. Espinal

Title:

Expletiveness in grammar and beyond

Publisher: Glossa: a journal of general linguistics, 7(1)
Publication date: May 2022

Full text

This paper sets out to find the defining characteristics of so-called expletive categories and the consequences that the existence of such categories has for Universal Grammar. Looking into different instantiations of expletive subjects and impersonal pronouns, definite articles, negative markers and plural markers in various natural languages, we reach the following generalizations: (i) expletive categories are deficient functional elements interpreted as introducing an identity function at the level of semantic representation, (ii) they can be divided into syntactic expletives, which occur to satisfy some syntactic relationship with another item in the clause, and semantic expletives, which stand in a semantic dependency with some c-commanding category, and (iii) expletive categories tend to develop additional meaning components that are computed beyond core grammar, at the level where speech-act-related information is encoded. Our discussion reveals that all categories that have traditionally been considered expletive in the linguistic literature are interpretable in grammar or beyond and, thus, do not violate Chomsky’s Full Interpretation Principle. We conclude that there are no expletive elements in natural languages and that expletiveness is not a grammatically relevant concept.

Recasens (2024). The Effect of Manner of Articulation and Syllable Affiliation on Tongue Configuration for Catalan Stop-Liquid and Liquid-Stop Sequences: An Ultrasound Study

Authors:

Daniel Recasens

Title:

The Effect of Manner of Articulation and Syllable Affiliation on Tongue Configuration for Catalan Stop-Liquid and Liquid-Stop Sequences: An Ultrasound Study

Publisher: Languages (MDPI)
Publication date: 27 June 2024

Full text

The present study reports tongue configuration data recorded with ultrasound for two sets of consonant sequences uttered by five native Catalan speakers. Articulatory data for the onset cluster pairs [kl]-[ɣl] and [kɾ]-[ɣɾ], and also for [l#k]-[l#ɣ] and [r#k]-[r#ɣ], analyzed in the first part of the investigation revealed that, as a general rule, the (shorter) velar approximant is less constricted than the (longer) voiceless velar stop at the velar and palatal zones while exhibiting a more retracted tongue body at the pharynx. These manner-of-articulation-dependent differences may extend into the preceding liquid. Data for [k#l]-[kl] and [k#r]-[kɾ], dealt with in the second part of the study, show that the velar is articulated with more tongue body retraction for [k#l] vs. [kl] and for [k#r] vs. [kɾ], and with a higher tongue dorsum for [k#l] vs. [kl] and the reverse for [k#r] vs. [kɾ]. Therefore, clusters are produced with a more extreme lingual configuration across a word boundary than in syllable-onset position, which may be predicted, at least in part, by segmental factors for the [k#r]-[kɾ] pair. These articulatory data are compared with duration data for all sequence pairs.

Gavarró & Keidel (2024). Subject-verb agreement: Three experiments on Catalan

Authors:

Anna Gavarró & Alejandra Keidel

Title:

Subject-verb agreement: Three experiments on Catalan

Publisher: First Language (Sage Journals)
Publication date: August 2024
Pages: 22

Full text

This study delves into the syntactic parsing abilities of children and infants exposed to Catalan as their first language. Focusing first on ages 3 to 6, we conducted two sentence-picture matching tasks. In experiment 1, 3- to 4-year-old children failed to identify singular third-person subjects within null-subject sentences, although they performed above chance in all other scenarios, including plural third-person subjects and sentences with overt full DP subjects. This is reminiscent of the results of Pérez-Leroux for Spanish. In experiment 2, with the same design but involving numeral distractors, children’s performance was above chance across all conditions from age 3 to 4. Then, in experiment 3, we moved to a younger age range with the help of eye-tracking techniques. The findings revealed that infants at 22 months had the ability to parse subject–verb agreement in sentences with third-person null subjects, and at 19 months there was evidence of parsing for third-person plural null subjects. These findings are inconsistent with the view that children struggle with syntactic agreement computation. We argue that instances of underperformance in subject–verb agreement parsing identified in the literature often stem from task-related and pragmatic issues rather than a core syntactic delay. If so, the putative asymmetry between early production of verbal inflection and late comprehension disappears; rather, the results suggest early establishment of matching operations and mastery of language-specific agreement properties before production starts.

Leivada, Günther & Dentella (2024). Reply to Hu et al.: Applying different evaluation standards to humans vs. Large Language Models overestimates AI performance

Authors:

Evelina Leivada, Fritz Günther & Vittoria Dentella

Title:

Reply to Hu et al.: Applying different evaluation standards to humans vs. Large Language Models overestimates AI performance

Publisher: PNAS 121(36), e2406752121 (National Academy of Sciences)
Publication date: 26 August 2024

Full text

Dentella et al. (DGL) argued that three Large Language Models (LLMs) perform almost at chance in grammaticality judgment tasks, while revealing an absence of response stability (1). Hu et al.’s (HEA) “re-evaluation” led to different conclusions (2). HEA argue that (i) “LLMs align with human judgments on key grammatical constructions,” (ii) LLMs show “human-like grammatical generalization capabilities,” while (iii) grammaticality judgments (GJs) are not the best evaluation method because they “systematically underestimate” these capabilities. While HEA’s aim to elucidate the abilities of LLMs is laudable, their claims are fraught with interpretative difficulties.

Leivada, Dentella & Günther (2024). Evaluating the language abilities of humans vs. Large Language Models: Three caveats

Authors:

Evelina Leivada, Vittoria Dentella & Fritz Günther

Title:

Evaluating the language abilities of humans vs. Large Language Models: Three caveats

Publisher: Biolinguistics, vol. 18 (PsychOpen)
Publication date: 19 April 2024

Full text

We identify and analyze three caveats that may arise when analyzing the linguistic abilities of Large Language Models. The problem of unlicensed generalizations refers to the danger of interpreting performance in one task as predictive of the models’ overall capabilities, based on the assumption that because performance on a specific task is indicative of certain underlying capabilities in humans, the same association holds for models. The human-like paradox refers to the problem of lacking human comparisons, while at the same time attributing human-like abilities to the models. Last, the problem of double standards refers to the use of tasks and methodologies that either cannot be applied to humans or are evaluated differently in models vs. humans. While we recognize the impressive linguistic abilities of LLMs, we conclude that specific claims about the models’ human-likeness in the grammatical domain are premature.