Epistemic Autonomy & LLM-Enhanced Deliberation

Paper presentation at the Conference “Autonomy and its Challenges” on Friday, 10 July 2026

Date: 9 Jul 2026 — 10 Jul 2026
Event: Conference “Autonomy and its Challenges”

The emergence of LLM-based AI systems (LAS) presents several challenges. This paper concentrates on a comparatively under-explored one: how LAS challenge our understanding of personal epistemic autonomy, i.e., what it means to have beliefs that are genuinely one’s own.

In the first part, I will delineate the specific nature of this challenge, distinguishing it from more familiar issues of testimony and trust. The pervasive deployment of LAS in epistemic contexts has the potential to profoundly reshape established deliberative practices, and it does so in a way that creates a tension. On the one hand, the prospect of artificially enhancing deliberation (and cognitive processing more generally) in epistemically relevant dimensions appears to improve personal epistemic autonomy. By engaging in language-based deliberative interactions with LAS, persons may better pursue their epistemic aims, reflect on and reaffirm those aims, and more fully consider and appreciate the relevant reasons in deliberation.

On the other hand, language-based deliberative interactions with LAS may diminish personal epistemic autonomy. As with any two-way epistemic interaction with an interlocutor who is epistemically at least roughly on a par (in some domain), there is a risk of prematurely adopting the interlocutor’s linguistic formulation of a thought as one’s own belief, i.e., of too readily accepting the other’s phrasing as the one that “truly expresses one’s own thought” or “truly expresses what one really wants to say”. Additional risks include failing to detect framing, nudging, or manipulative effects embedded in the linguistic interaction, or succumbing to epistemic distortions produced by linguistic repetition that exploits well-studied biases (such as the illusory truth effect or the bandwagon effect). These risks are amplified when two-way epistemic interactions increasingly replace one-way queries to dictionaries, databases or search engines, and when we increasingly outsource cognitive tasks (such as translation, summarisation or conceptualisation) that are integral to forming a belief of one’s own and to making up one’s mind about what one really thinks and believes.

This tension raises the question: within epistemic practices shaped by the widespread use of LAS, what does it mean to (come to) have a belief that is genuinely one’s own? I will address this question in the second part of the paper by adapting an account of personal practical autonomy that encompasses three abilities: the ability to (1) manage one’s own (epistemic) affairs, (2) defend oneself against external (epistemic) interference, and (3) participate in joint (epistemic) endeavours that affect oneself. By elaborating this account in the context of LLM-enhanced deliberation, I will examine how these abilities depend on (a) the ability to perform processes of intrapersonal reflective equilibration and (b) a set of conceptual abilities (for disambiguation, differentiation, and articulacy). This explication clarifies the initial tension and enables us to understand how using LAS may diminish or enhance epistemic autonomy via practices that either impair or improve the very abilities that constitute it. It also offers a route to operationalising epistemic autonomy, both for agents who use LAS and for LAS themselves.