Large language models and linguistic intentionality

Text (Open Access) - Published Version
Available under License Creative Commons Attribution.
Please see our End User Agreement before downloading.

Text - Accepted Version
Restricted to Repository staff only. The copyright of this document has not yet been checked; this may affect its availability.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

Grindrod, J. ORCID: https://orcid.org/0000-0001-8684-974X (2024) Large language models and linguistic intentionality. Synthese, 204 (71). ISSN 1573-0964. doi: 10.1007/s11229-024-04723-8

Abstract/Summary

Do large language models like ChatGPT or Claude meaningfully use the words they produce? Or are they merely clever prediction machines, simulating language use by producing statistically plausible text? There have already been some initial attempts to answer this question by showing that these models meet the criteria for entering meaningful states according to metasemantic theories of mental content. In this paper, I will argue for a different approach – that we should instead consider whether language models meet the criteria given by our best metasemantic theories of linguistic content. In that vein, I will illustrate how this can be done by applying two such theories to the case of language models: Gareth Evans’ (1982) account of naming practices and Ruth Millikan’s (1984, 2004, 2005) teleosemantics. In doing so, I will argue that it is a mistake to think that the failure of LLMs to meet plausible conditions for mental intentionality thereby renders their outputs meaningless, and that a distinguishing feature of linguistic intentionality – dependency on a pre-existing linguistic system – allows for the plausible result that LLM outputs are meaningful.

Item Type: Article
URI: https://reading-clone.eprints-hosting.org/id/eprint/117589
Identification Number/DOI: 10.1007/s11229-024-04723-8
Refereed: Yes
Divisions: Arts, Humanities and Social Science > School of Humanities > Philosophy
Publisher: Springer