Philosophy of, and for, LLMs
Large language models (LLMs), essentially probabilistic next-word prediction machines, display semantically competent behavior across a broad range of tasks, effectively mastering difficult reasoning problems, meta-cognitive coordination, social cooperation, and practical decision making.
This fundamental empirical datum is philosophically both fascinating and puzzling.
It is fascinating because many science fiction scenarios that, as thought experiments, have driven philosophical investigations in the past are now becoming reality.
It is puzzling because it is not clear how to interpret the success of LLMs (and of the neo-connectionist cognitive science research they have spurred) in terms of our traditional concepts of epistemic agency, rationality, consciousness, and moral personhood.
This ongoing project rests on the assumptions that
- any philosophical reflection about LLMs must be based on an up-to-date acquaintance with their abilities
- mastering the novel GenAI technology, building AI systems, and running experiments are highly conducive to clarifying philosophical questions about LLMs
- philosophical reflection about LLMs can and should make a valuable contribution to the development of GenAI technology
So far, we have been focusing on the following questions:
- Can LLMs (learn to) argue? [blog post] [blog post] [blog post]
- Can LLMs (learn to) analyse and evaluate arguments? [blog post]
- Can we ascribe a coherent set of beliefs to LLMs? [blog post]
- Can we use LLM-based multi-agent simulations to advance social epistemology? [blog post] [blog post]
You can follow us on GitHub and Hugging Face to stay updated on our progress.