Opinion.
Stafford Beer, the British cybernetician who spent his career studying how organizations actually behave (as opposed to how their mission statements claim they behave), left behind a razor sharp enough to cut through most political arguments in a single stroke: “The purpose of a system is what it does.” Not what it says it does. Not what it was designed to do. Not what the brochure promises. What it does. Beer called this POSIWID, short for “The Purpose Of a System Is What It Does,” and in the decades since he coined it, the principle has become one of the most useful and most resisted ideas in systems thinking.
The resistance is understandable. POSIWID sounds, to most ears, like a conspiracy theory. If the American healthcare system produces medical bankruptcies and worse health outcomes than countries that spend half as much per capita, then POSIWID says: that is its purpose. If social media platforms produce radicalization, anxiety, and political polarization, then those outcomes are what the system is for. The instinct is to hear this and conclude that someone, somewhere, planned it this way. That a shadowy cabal of insurance executives sat in a room and designed a system to bankrupt cancer patients. That Mark Zuckerberg specifically wanted teenagers to develop eating disorders.
The real insight of systems thinking is both simpler and more disturbing than that: nobody planned it. The system just works this way.
What POSIWID Actually Says
Beer developed POSIWID within the field of cybernetics, the study of how systems regulate themselves through feedback loops. The core observation is that complex systems develop emergent behaviors that no individual participant designed, intended, or even necessarily understands. A system’s stated purpose (healing the sick, educating children, connecting people) is irrelevant if the system consistently produces different outcomes. The outcomes are the purpose, because the system is optimized, through iteration, incentive structures, and path dependency, to produce exactly those outcomes.
This is not a metaphor. When Beer says the purpose of a system is what it does, he means it literally. A system that has been running for decades and consistently produces the same results is, by any functional definition, a system designed to produce those results. The fact that no human being sat down and drew up those specifications is beside the point. Evolution does not have a designer either, but nobody argues that cheetahs are accidentally fast.
Systems thinking at this level requires abandoning the comfortable fiction that intentions matter more than outcomes. In everyday moral reasoning, intentions carry enormous weight. We distinguish between murder and manslaughter. We forgive mistakes made in good faith. But systems are not people, and applying moral reasoning designed for individuals to institutional behavior produces confused analysis. A hospital administrator who genuinely wants to help patients but runs a system that optimizes for billing codes over recovery times is not a villain. He is a component in a machine whose output he does not control.
The Conspiracy Temptation
Here is where POSIWID collides with human psychology. When confronted with a system that produces terrible outcomes, the brain offers two explanations. Option one: someone powerful is doing this on purpose. Option two: nobody is in charge and the system emerged from the accumulated weight of thousands of individual decisions, none of which were individually malicious, most of which were locally rational, and all of which together produce something nobody would have chosen.
Option one is psychologically satisfying. It preserves the idea that the world is ordered, that outcomes are the result of decisions, and that fixing the problem is (conceptually, at least) straightforward: find the bad people and remove them. Option two is genuinely frightening. It implies that the world is largely uncontrolled, that catastrophic outcomes can emerge from perfectly normal behavior, and that fixing things requires redesigning entire incentive structures rather than punishing specific individuals.
Most people choose option one. This is not because they are stupid. It is because agency detection, the tendency to attribute events to intentional actors, is one of the most deeply wired features of the human brain. It was adaptive to assume the rustling in the bushes was a predator rather than wind. It is maladaptive to assume the healthcare system was designed by predators rather than by wind.
Conspiracy theories are, in this framing, a simplified map imposed on complex territory. They take the genuinely observable output of a system (people go bankrupt from medical bills, social media makes people miserable, the education system produces graduates who cannot think critically) and attribute it to deliberate design by identifiable villains. The map feels right because the outcomes are real. The error is in the attribution of agency.
Hanlon’s Razor and Its Limits
POSIWID has a well-known cousin in Hanlon’s Razor: “Never attribute to malice that which is adequately explained by stupidity.” The two principles rhyme but differ in an important way. Hanlon’s Razor is about individual behavior, a heuristic for interpreting why your colleague sent that email or why the bureaucrat denied your application. It assumes a single actor making a single bad decision. POSIWID operates at a higher level. It is not saying that individuals in the system are stupid (though some certainly are). It is saying that the system itself, as an emergent entity, produces outcomes that none of its individual components chose.
This distinction matters because it changes where you look for solutions. Hanlon’s Razor suggests better training, better hiring, clearer processes. POSIWID suggests that none of that will matter if the incentive structure remains unchanged. You can replace every employee in a hospital with a brilliant, compassionate expert, and if the payment model still rewards procedures over outcomes, the system will still optimize for procedures over outcomes. The purpose of the system is what it does. New people, same machine, same output.
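A toy simulation makes the point concrete. In this sketch, every name and number is an invented assumption rather than anything from the article: simulated "clinicians" drift toward whichever behavior the payment model reinforces. Swapping out the entire staff under the same incentives changes nothing; changing the incentive changes everything.

```python
import random

def run_hospital(staff, payment_rewards_procedures, rounds=1000, seed=0):
    """Fraction of encounters that end in a billable procedure.

    Each staff member starts strongly inclined toward outcome-focused
    treatment; the feedback loop then nudges that preference toward
    whatever the payment model rewards. All parameters are illustrative.
    """
    rng = random.Random(seed)
    prefs = {person: 0.9 for person in staff}  # 90% inclined to treat well
    procedures = 0
    for _ in range(rounds):
        person = rng.choice(staff)
        does_procedure = rng.random() > prefs[person]
        if does_procedure:
            procedures += 1
        if payment_rewards_procedures:
            # procedures pay, so the preference for good outcomes erodes
            prefs[person] = max(0.05, prefs[person] - 0.005 - 0.01 * does_procedure)
        else:
            # outcomes pay, so the preference is reinforced instead
            prefs[person] = min(0.95, prefs[person] + 0.005)
    return procedures / rounds

old_staff = [f"dr_{i}" for i in range(10)]
new_staff = [f"new_dr_{i}" for i in range(10)]  # replace every employee

rate_old = run_hospital(old_staff, payment_rewards_procedures=True)
rate_new = run_hospital(new_staff, payment_rewards_procedures=True)
rate_reformed = run_hospital(new_staff, payment_rewards_procedures=False)

print(f"old staff, old incentives: {rate_old:.2f} procedure rate")
print(f"new staff, old incentives: {rate_new:.2f} procedure rate")
print(f"new staff, new incentives: {rate_reformed:.2f} procedure rate")
```

Under the same incentive structure, the first two runs produce identical procedure rates despite a complete change of personnel; only the third run, with a different reward, produces different output.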
This is also where systems thinking parts company with cynicism. A cynic says “the system is broken.” A systems thinker says “the system is working perfectly; you just don’t like what it’s working toward.” That is not the same claim. The first implies dysfunction. The second implies function directed at something other than the stated goal. The distinction is crucial because you fix dysfunction with repairs, but you fix misdirected function with redesign. And redesign is orders of magnitude harder.
Three Systems, Three Unstated Purposes
Consider American healthcare. The stated purpose: keep people healthy. The observed behavior: the United States spends roughly $4.5 trillion annually on healthcare (about 17.6% of GDP, according to the Centers for Medicare and Medicaid Services) while ranking last among comparable nations in health outcomes according to the Commonwealth Fund’s recurring international comparisons. The system reliably produces expensive treatments, complex billing, defensive medicine, and administrative overhead. Those are its products. By POSIWID, those are its purposes.
Consider public education. The stated purpose: develop critical thinking, impart knowledge, prepare citizens. The observed behavior: standardized testing regimes that incentivize teaching to the test, credentialing systems that reward compliance over curiosity, and a structure that sorts children by age rather than ability or interest. The system reliably produces people who can follow instructions and fill in bubbles. By POSIWID, that is its purpose. This is not a conspiracy by teachers’ unions or textbook publishers. It is what happens when you build a system to process millions of children through a standardized pipeline and measure its success with standardized metrics.
Consider social media. The stated purpose: connect people. The observed behavior: algorithmic content ranking that maximizes engagement, which in practice means maximizing emotional arousal, which in practice means amplifying outrage, fear, and tribal conflict. Meta’s own internal research, leaked in 2021 by Frances Haugen, confirmed that the company knew Instagram was harmful to teenage mental health and that its algorithm amplified divisive content. The engineers who built the recommendation system were not trying to radicalize anyone. They were trying to keep people on the platform. The radicalization is an emergent property, not a design specification. But by POSIWID, it is the purpose.
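The dynamic can be shown in a few lines. In this illustrative sketch (the posts, scores, and weights are invented assumptions, not anyone's actual ranking model), the ranker's objective mentions only predicted engagement; but because emotional arousal correlates with engagement, high-arousal content rises to the top without anyone ever specifying that as a goal.

```python
# Hypothetical posts scored on two invented attributes.
posts = [
    {"text": "local bake sale",       "arousal": 0.1, "informative": 0.9},
    {"text": "outrageous scandal!!",  "arousal": 0.9, "informative": 0.2},
    {"text": "nuanced policy thread", "arousal": 0.3, "informative": 0.8},
]

def predicted_engagement(post):
    # Engagement correlates with emotional arousal far more than with
    # informativeness -- an empirical pattern, not a design decision.
    return 0.8 * post["arousal"] + 0.2 * post["informative"]

# The objective is "maximize engagement"; outrage wins as a side effect.
feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["text"] for p in feed])
```

The scandal post ranks first and the informative posts sink, even though "arousal" never appears in the objective function. By POSIWID, amplifying arousal is what this ranker is for.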
Why You Cannot Punch Entropy
The deepest problem with systems thinking, and the reason most people prefer conspiracy theories, is that it offers no satisfying villain. You cannot march against entropy. You cannot vote out path dependency. You cannot file a lawsuit against emergent behavior. The tools of political action that humans have developed over millennia (protest, revolution, regime change, elections) are all designed to replace one set of decision-makers with another. They assume that the problem is who runs the system. Systems thinking says the problem is the system itself, and that replacing the people inside it will not change its fundamental outputs.
This is psychologically unbearable for most people, and understandably so. It feels like learned helplessness. If the system is the problem and the system is too complex for any individual to redesign, then what exactly are you supposed to do? Beer himself worked on this question for decades, most famously during his Project Cybersyn in Salvador Allende’s Chile, an attempt to use real-time cybernetic feedback to manage an entire national economy. The project was destroyed by the 1973 coup before it could be fully tested, which is either a tragedy for systems thinking or a convenient excuse, depending on your level of optimism.
The honest answer from systems thinking is that reform is possible but slow, unglamorous, and structurally focused. It means changing incentive structures rather than firing executives. It means redesigning feedback loops rather than writing angry letters. It means accepting that the system you are trying to fix is not broken; it is doing exactly what its structure dictates, and if you want different outputs, you need different structures. This is harder than identifying a villain. It is also the only approach that has ever actually worked. Every successful systemic reform in history, from the abolition of child labor to the establishment of food safety standards, succeeded not by punishing bad actors but by changing the rules under which all actors operate.
Why Systems Thinking Satisfies No Tribe
POSIWID occupies an uncomfortable position in political discourse. The left hears it and concludes that the system was designed by capitalists. The right hears it and concludes that the system was designed by bureaucrats. Both sides are committing the same error: attributing emergent systemic outcomes to intentional design by their preferred villains. The actual systems thinking position, that complex systems produce outcomes nobody designed and nobody fully controls, satisfies no political tribe because it offers no one to blame.
This is, paradoxically, what makes it useful. A framework that makes everyone uncomfortable is probably touching something real. Conspiracy theories are comfortable because they preserve the illusion of order: someone is in charge, even if that someone is evil. Systems thinking removes that comfort. Nobody is in charge. The outcomes you see are the product of thousands of interacting incentives, path dependencies, feedback loops, and historical accidents. The system is not broken. The system is the break.
Stafford Beer died in 2002, before social media, before the full flowering of algorithmic content curation, before the American healthcare system hit $4 trillion in annual spending. But his razor has only gotten sharper. The purpose of a system is what it does. If you do not like what it does, you have two choices: redesign the system, or tell yourself a story about the villains who run it. One of those options might actually change something. The other is more fun at dinner parties.
Sources
- Centers for Medicare and Medicaid Services, National Health Expenditure Data (Historical)
- Commonwealth Fund, “Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System”
- Wall Street Journal, “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show” (2021)
- Eden Medina, “Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile” (MIT Press, 2011)



