AI Large Language Models: Philosophical and linguistic issues
Friedemann Pulvermüller
Description
The recent success of Large Language Models, which now reach human-like performance in responding to written or oral language input, raises a number of empirical, theoretical and philosophical questions which will be discussed in the seminar.
We will start with classical papers on language, consciousness and AI, and will then turn to current debates. First, we will focus on questions about the language skills of AI systems, including meaning and semantic grounding [1-3]. Second, we will discuss methodological problems that arise when human-like cognitive abilities, such as mind-reading or even consciousness, are ascribed to these systems. Third, we will discuss problems that come up when AI systems are used as models of the human mind. A major issue is whether the remarkable abilities of tools such as ChatGPT are still subject to specific limitations compared with human language skills, and whether there are principled reasons to adopt or reject these systems as putative models of the human cognitive mind and brain [e.g., 3-6].
The seminar will be held in English, but presentations in German are possible.
References
[1] Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
[2] Harnad, S. (2024). Language writ large: LLMs, ChatGPT, grounding, meaning and understanding. arXiv preprint arXiv:2402.02243.
[3] Mollo, D. C., & Millière, R. (2023). The vector grounding problem. arXiv preprint arXiv:2304.01481.
[4] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
[5] Moro, A., Greco, M., & Cappa, S. F. (2023). Large languages, impossible languages and human brains. Cortex, 167, 82-85.
[6] Pulvermüller, F. (2023). Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks. Progress in Neurobiology, 230, 102511. https://doi.org/10.1016/j.pneurobio.2023.102511
10 sessions
Regular sessions of the course