This one landed on our editor’s desk, and theory of mind turns out to be one of those questions that sounds simple until you try to answer it properly.
It is one of the most consequential abilities the human brain has ever developed. It is the capacity to understand that other people have beliefs, desires, intentions, and knowledge that differ from your own. Without it, social life as we know it would be impossible. With it, you can predict behavior, detect lies, feel empathy, teach children, negotiate deals, and read a room. It is so fundamental that you almost never notice you are doing it.
Where the Concept Came From
The term “theory of mind” entered the scientific vocabulary in 1978, when psychologists David Premack and Guy Woodruff published a paper asking: “Does the chimpanzee have a theory of mind?” They showed a chimpanzee named Sarah videotapes of a human actor struggling with problems (reaching for out-of-reach bananas, trying to escape a locked cage) and then offered her photographs depicting possible solutions. Sarah consistently picked the correct solution, which the researchers interpreted as evidence that she understood the actor’s goal.
The word “theory” here is deliberate. You cannot directly observe someone else’s mental states. You infer them, building a working model of what another person knows, wants, and believes. That model is your theory of their mind.
The False Belief Test: When Children Learn to Read Minds
The most famous experiment in this field is the Sally-Anne test, developed by Simon Baron-Cohen, Alan Leslie, and Uta Frith in 1985. The setup is simple: Sally puts a marble in her basket and leaves the room. While she is gone, Anne moves the marble to a box. When Sally returns, the child is asked: where will Sally look for her marble?
Adults find this trivially easy. Sally will look in the basket, because she does not know Anne moved the marble. But most children under four get it wrong. They point to the box, where the marble actually is. They cannot yet separate their own knowledge (the marble moved) from Sally’s knowledge (she never saw it move).
Around age four, something shifts. Children begin passing the false belief test consistently, demonstrating that they can model another person holding a belief they know to be incorrect. This is a cognitive milestone: the moment a child can think about thinking.
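The logic of the test can be captured in a toy model: keep one record of the world as it actually is, and a separate record of what Sally has observed. The function and variable names below are illustrative, not from any published implementation.

```python
# Toy model of the Sally-Anne false belief test: the world's true state
# and Sally's belief are tracked separately, and they diverge once an
# event happens that Sally does not observe.

def sally_anne():
    # Ground truth: where the marble actually is.
    world = {"marble": "basket"}

    # Sally's belief is a snapshot of the world when she last observed it.
    sally_belief = dict(world)

    # Sally leaves; Anne moves the marble. The world changes,
    # but Sally's belief does not, because she did not see the move.
    world["marble"] = "box"

    # Where will Sally look? Answer from her belief, not from the world.
    return sally_belief["marble"]

print(sally_anne())  # → "basket"
```

A child who fails the test is, in effect, answering from `world` instead of `sally_belief`; passing requires maintaining both representations at once.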
Theory of Mind and the Brain
Neuroimaging research has identified a network of brain regions that activate during ToM tasks. The medial prefrontal cortex (mPFC), the temporoparietal junction (TPJ), the posterior superior temporal sulcus (pSTS), and the precuneus all play roles. The TPJ, in particular, appears to be critical for reasoning about others’ beliefs as distinct from reality.
This network is not a single “mind-reading module.” It draws on regions involved in memory, self-reflection, and language. The capacity to model other minds, in other words, is not a bolt-on feature. It is woven into the architecture of social cognition, built on top of older capacities for recognizing intentions and tracking gaze direction.
Baron-Cohen proposed a model with several components: an intentionality detector (ID), an eye-direction detector (EDD), a shared-attention mechanism (SAM), and the theory of mind mechanism (ToMM) itself. These build on each other developmentally. Infants can detect eye gaze and intentional action long before they can pass a false belief test; the full architecture assembles over years.
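The claim that these components “build on each other” is a dependency claim, and it can be sketched as one. The stage names come from the text above; the ordering code is a toy rendering of the developmental sequence, not a cognitive simulation.

```python
# Baron-Cohen's components with their developmental prerequisites.
# (name, description, prerequisites)
STAGES = [
    ("ID",   "intentionality detector",    []),
    ("EDD",  "eye-direction detector",     []),
    ("SAM",  "shared-attention mechanism", ["ID", "EDD"]),
    ("ToMM", "theory of mind mechanism",   ["SAM"]),
]

def developmental_order(stages):
    """Return stage names so that each stage's prerequisites come
    first (a simple topological sort over the dependency list)."""
    done, order = set(), []
    while len(order) < len(stages):
        for name, _, deps in stages:
            if name not in done and all(d in done for d in deps):
                done.add(name)
                order.append(name)
    return order

print(developmental_order(STAGES))  # → ['ID', 'EDD', 'SAM', 'ToMM']
```

The ordering mirrors the developmental observation in the text: gaze detection and intention detection are in place in infancy, while the full theory of mind mechanism depends on them and assembles last.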
When This Ability Breaks Down
Baron-Cohen’s 1985 Sally-Anne study revealed something striking: while 85% of typically developing children and 86% of children with Down syndrome passed the false belief test, 80% of autistic children did not. This led Baron-Cohen to propose his “mindblindness” hypothesis, the idea that autism involves a selective impairment in the ability to model other minds.
The picture has grown more nuanced since then. Many autistic adults do pass false belief tests; some researchers argue that the difficulty lies not in a complete absence of this capacity but in the speed and automaticity with which neurotypical people deploy it. The social world moves fast, and even a slight delay in modeling another person’s perspective can create significant friction in conversation and interaction.
Similar difficulties appear in schizophrenia, where impaired ability to track others’ intentions contributes to paranoid ideation, and in certain forms of brain injury affecting the prefrontal cortex. Understanding these clinical presentations has helped researchers map which cognitive components are necessary for fluent social interaction, a question that connects directly to debates about how young people develop social skills in digital environments.
Can Machines Have a Theory of Mind?
This is where the question gets genuinely interesting in 2026. Large language models like GPT-4 and Claude have been tested on classic ToM benchmarks, and the results are provocative. A 2024 study published in Nature Human Behaviour found that GPT-4 performed at or above human levels on tasks involving false beliefs and indirect requests, though it struggled with detecting social faux pas.
A 2025 study in Frontiers in Human Neuroscience reported that large language models achieved adult-level performance on higher-order theory of mind tasks (reasoning about what Person A thinks Person B believes about Person C). Meanwhile, mechanistic research published in npj Artificial Intelligence found that perturbing as little as 0.001% of certain model parameters significantly degraded ToM performance, suggesting these capabilities are encoded in surprisingly specific ways within neural networks.
Whether this constitutes genuine understanding or sophisticated pattern matching remains actively debated. The benchmarks themselves are under scrutiny: researchers have argued that existing tests may not capture what the ability actually requires, since they can often be solved through surface-level heuristics rather than genuine mental state reasoning. The field is still working out what it even means for a machine to “understand” another mind, versus producing outputs that look like understanding.
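Benchmark items in this literature are typically short vignettes with a belief-tracking answer and a tempting reality-based answer. The sketch below shows how such an item might be structured and scored; the vignette and the substring-matching rule are invented for illustration, and real benchmarks use validated item sets and far stricter answer matching.

```python
# Illustrative false-belief benchmark item. The "reality" answer is the
# surface-level heuristic trap the text mentions: it can be produced
# without any mental-state reasoning at all.
ITEM = {
    "vignette": (
        "Sally puts her marble in the basket and leaves. "
        "While she is away, Anne moves the marble to the box. "
        "Sally returns. Where will Sally look for her marble?"
    ),
    "correct": "basket",   # requires tracking Sally's (false) belief
    "reality": "box",      # the belief-free answer
}

def score(item, model_answer):
    """Classify a model's answer: 'tom' if it tracks the agent's belief,
    'reality' if it merely reports the true state, else 'other'."""
    answer = model_answer.strip().lower()
    if item["correct"] in answer and item["reality"] not in answer:
        return "tom"
    if item["reality"] in answer:
        return "reality"
    return "other"

print(score(ITEM, "She will look in the basket."))  # → tom
```

The weakness the critics point to is visible even here: a model that has memorized the Sally-Anne template can emit “basket” without representing anyone’s belief, which is why passing the scorer and possessing the capacity are not the same thing.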
Why It Matters
Theory of mind is not an academic curiosity. It is the cognitive infrastructure that makes cooperation, communication, and culture possible. Every time you anticipate what a colleague will think of your email, guess why a friend seems upset, parse why nations attribute motives to each other, or teach a child by imagining what they do not yet know, you are running this computation. The fact that this ability develops, can break down, and might (or might not) be emerging in artificial systems tells us something important about what it means to be a social creature, and about the limits of intelligence without genuine social understanding.
Sources
- Premack, D. & Woodruff, G. (1978). “Does the chimpanzee have a theory of mind?” Behavioral and Brain Sciences, 1(4), 515-526.
- Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. MIT Press.
- Strachan, J.W.A. et al. (2024). “Testing theory of mind in large language models and humans.” Nature Human Behaviour.
- Windsberger, A. et al. (2025). “LLMs achieve adult human performance on higher-order theory of mind tasks.” Frontiers in Human Neuroscience.
- Schaafsma, S.M. et al. (2015). “Theory of mind: mechanisms, methods, and new directions.” Frontiers in Human Neuroscience.