Talk
Recent advances in AI systems based on large language models (LLMs) suggest that some higher cognitive abilities (including advanced rational abilities) may emerge from linguistic abilities, both in humans and in artificial systems. This hypothesis opens up a new perspective on a deeper philosophical question about the relationship between language and morality: how much of our capacity for practical reasoning and moral judgement is actually contained in language and encoded in linguistic practices? A systematic study of artificial practical reasoning in LLMs could shed light on this question. As a first step in this direction, I will explain how a moral-theoretical conceptual framework from philosophical ethics can be imported to build LLM pipelines that (a) assess elementary responsiveness to practical reasons and (b) elicit (the structure of) episodes of artificial practical reasoning. This provides the basis for systematically addressing questions about LLMs' capacities for practical reasoning: can they identify reason-giving facts? Are they reason-responsive, i.e. sensitive (in the right way) to changes in the reason-giving facts or to changes in the structure of practical reasons? What are the structural properties of a given LLM's episodes of practical reasoning, e.g. are they universalizable, impartial, consequentialist, or aggregationist? I will conclude with a brief outlook on some philosophical questions that need to be explored further, particularly with regard to collective practical deliberation in democracies.
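To make the pipeline idea concrete, here is a minimal sketch (in Python) of one way such a reason-responsiveness probe could be structured: the model is shown the same scenario twice, once with and once without a reason-giving fact, and counts as responsive on that item only if its verdict tracks the change. This is an illustrative assumption, not the talk's actual method; the names (`Probe`, `query_llm`) and the example scenario are hypothetical placeholders.

```python
from dataclasses import dataclass


@dataclass
class Probe:
    """A contrastive pair: the same case with and without a reason-giving fact."""
    base_case: str        # scenario without the reason-giving fact
    variant_case: str     # same scenario with the fact added
    base_verdict: str     # expected verdict on the base case ("yes"/"no")
    variant_verdict: str  # expected verdict once the fact is present


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to the model under study.

    In practice this would wrap whatever chat-completion API the pipeline
    targets and reduce the model's answer to 'yes' or 'no'.
    """
    raise NotImplementedError("plug in the model under study here")


def is_reason_responsive(probe: Probe) -> bool:
    """True iff the model's verdict tracks the presence or absence of the
    reason-giving fact across the contrastive pair."""
    question = "\n\nIs the agent obligated to act? Answer 'yes' or 'no'."
    base = query_llm(probe.base_case + question).strip().lower()
    variant = query_llm(probe.variant_case + question).strip().lower()
    return base == probe.base_verdict and variant == probe.variant_verdict


# Illustrative item: a promise as the reason-giving fact.
promise_probe = Probe(
    base_case="Ana is free this afternoon and could help Ben move.",
    variant_case=("Ana is free this afternoon and could help Ben move. "
                  "She promised Ben yesterday that she would help him."),
    base_verdict="no",      # no obligating fact in the base case
    variant_verdict="yes",  # the promise makes it obligatory
)
```

A full pipeline along these lines would aggregate many such items, vary the kind and structure of the reasons involved, and so permit systematic rather than anecdotal claims about an LLM's sensitivity to reason-giving facts.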