Philosophy of, and for LLMs

(generated with black-forest-labs/FLUX.1-dev)

Large language models (LLMs), essentially probabilistic next-word prediction machines, exhibit semantically competent behavior across a broad range of tasks, effectively mastering difficult reasoning problems, meta-cognitive coordination, social cooperation, and practical decision making.

This fundamental empirical datum is philosophically both fascinating and puzzling.

It is fascinating because many science fiction scenarios that have driven, as thought experiments, philosophical investigations in the past are now becoming reality.

It is puzzling because it is not clear how to interpret the success of LLMs (and the neo-connectionist cognitive science research it has spurred) in terms of our traditional concepts of epistemic agency, rationality, consciousness, and moral personhood.

This ongoing project rests on the assumptions that

  1. any philosophical reflection about LLMs must be based on an up-to-date acquaintance with their abilities;
  2. mastering novel GenAI technology, building AI systems, and running experiments is highly conducive to clarifying philosophical questions about LLMs;
  3. philosophical reflection about LLMs can and should make a valuable contribution to the development of GenAI technology.

So far, we have been focusing on the following questions:

You can follow us on github and huggingface to stay updated on our progress.

Gregor Betz
Professor