Opinion

Khaki Is Not Beige, and Other Wikipedia Errors That Prove Hallucinations Are a Human Invention

Mar 12, 2026


Our editor sent us a handful of Wikipedia links with a note that read like a dare. After spending an hour clicking through the Wikipedia errors he flagged, we understand why. What follows is an opinion piece, and the opinion is this: the popular narrative that misinformation is an AI problem is, itself, misinformation.

The story goes something like this. Large language models hallucinate, therefore AI is dangerous, therefore humans remain the reliable narrators of truth. This framing is comforting. It is also complete nonsense. Wikipedia errors have been quietly proving for two decades that humans are perfectly capable of hallucinating all on their own, no neural networks required.

The Beige Disaster

Open the English Wikipedia article on the color beige and scroll through the list of “variations of beige.” You will find, among the entries, the color khaki. Khaki, for anyone with functioning eyes, is green. Not greenish-beige. Not beige with aspirations. Green. The kind of green that military uniforms are made of. And yet there it sits, in a list of beiges, sourced to “HTML/CSS,” which is not a color authority any more than a spreadsheet is a sommelier.

It gets better. The page lists dozens of colors as variations of beige that are, by any reasonable visual standard, not beige. Buff, desert sand, fawn, wheat, ecru, champagne, and a constellation of other hues that range from “arguably adjacent” to “not even in the same postal code.” The sourcing for many of these entries comes from web color standards or self-referential color dictionaries, creating a closed loop of Wikipedia errors that nobody has bothered to challenge because, well, who cares enough about beige to start a fight?

Somebody should. Because the French version tells a very different story. The French Wikipedia article on beige is shorter, more focused, and more honest. It treats beige as what it is: a specific, narrow color. No khaki. No desert sand. No fifty shades of “close enough.” Two articles about the same color on the same platform, and one of them is mostly wrong. The difference is not language. It is editorial discipline, and the English version has none.

When “Infant Mortality” Means “Maternal Mortality,” Apparently

If the beige situation were an isolated quirk, it would be merely amusing. It is not isolated. On the French Wikipedia page for the demographics of Morocco, the section titled “Mortalité infantile” (“Infant mortality”) contains the following sentence: “Le taux de mortalité maternelle dans le pays a chuté de 67 % entre 1990 et 2010” (“The maternal mortality rate in the country dropped by 67% between 1990 and 2010”).

Read that again. The heading says infant mortality. The text says maternal mortality. These are not the same thing. One measures how many babies die. The other measures how many mothers die during or shortly after childbirth. They have different causes, different numbers, and different policy implications. Confusing them in a published encyclopedia is not a minor formatting issue. It is a factual error that has survived, uncorrected, on one of the most visited websites in the world.

Wikipedia errors like this persist because the platform’s correction mechanism relies entirely on volunteer attention. Articles about celebrity gossip get watched by thousands. Articles about Moroccan demographic statistics get watched by almost nobody. The error sits there, radiating quiet confidence, waiting to be scraped into a training dataset or cited in a student paper or repeated by a policymaker who Googled it in a hurry.

Lost in Translation: The Cochenille Problem

Wikipedia’s errors are compounded by a broader internet problem that extends well beyond any single platform: translation. Take the French word “cochenille.” If you look it up on WordReference, one of the most widely used bilingual dictionaries on the web, you get “cochineal” or “mealybug.” Google Translate gives you “cochineal.” Both translations are wrong.

In French, “cochenille” refers to the entire superfamily Coccoidea, commonly known in English as scale insects. Cochineal is specifically the red dye-producing insect (Dactylopius coccus), a single species within that superfamily. Mealybugs are the white, woolly family (Pseudococcidae), another subset entirely. Translating “cochenille” as “cochineal” is like translating “cat” as “tabby”: technically a cat, sure, but you have just excluded every other kind of cat from the conversation.

This is not a niche complaint. Translation tools and bilingual dictionaries are foundational infrastructure for how billions of people understand the world across languages. When they get a basic taxonomic term wrong, the error cascades. Students learn it wrong. Writers repeat it. Databases encode it. And eventually, an AI model trains on it and reproduces it with perfect confidence, at which point everyone blames the AI.

The Dumpster Fire You Trained On

Here is the part that nobody in the “AI hallucination” discourse wants to acknowledge: most training data for large language models comes from the internet. As our editor put it, “most training data come from the internet, which is a dumpster, and that’s on humans, not LLMs.” He is not wrong.

When a language model confidently tells you something incorrect, the reflexive response is to call it a hallucination, as though the machine spontaneously invented a falsehood from thin air. Sometimes it does. But often, the model is faithfully reproducing what it learned from its training data, which was written by humans, uploaded by humans, and left uncorrected by humans. The Wikipedia errors on the beige page were not generated by AI. They were written by a person, sourced to a color standard that has no business being treated as an authority on chromatic taxonomy, and left to ferment for years. The model that later ingests this data and tells you khaki is beige is not hallucinating. It is repeating what it was taught.

This does not excuse AI errors. Models should be better at reasoning through contradictions, and developers have a responsibility to build systems that can flag low-confidence claims. But the framing of hallucination as a uniquely artificial phenomenon is itself a kind of hallucination, one that flatters human vanity while ignoring the quality of the information ecosystem humans have built.

Wikipedia Errors Are Older Than AI

Mistranslations, uncorrected statistical blunders, khaki filed under beige: none of this is new. What is new is the scale at which these errors propagate. Before the internet, a wrong encyclopedia entry reached a few thousand readers over its print run. Now it reaches millions, gets scraped into datasets, recycled by translation tools, and amplified by algorithms that treat “frequently repeated” as “probably true.” The infrastructure of modern knowledge is built on a foundation that includes a substantial amount of garbage, and that garbage was placed there by humans long before any AI touched it. This is not a fringe issue limited to color charts and translation tools: widely repeated health claims about quinoa being safe for celiac patients turn out to rest on the same pattern of unchecked repetition.

The conversation about information reliability needs to get honest. Blaming AI for misinformation while treating the sources it learns from as sacred is like blaming the student for a bad textbook. The textbook needs fixing too. Fixing Wikipedia errors requires better oversight of obscure articles, not just popular ones. Translation tools need taxonomic accuracy. And the internet, broadly, needs to stop pretending that crowdsourced information is self-correcting. It is not. It is self-reinforcing, which is a very different thing.

As Martin Luther King Jr. once wisely said, “You can’t trust everything you read on the internet, yo, even if it comes from trusted sources, yo.”

He didn’t say that, obviously. But if you found it on Wikipedia, you might believe he did. And that is exactly the point.


Did you spot a factual error? Let us know: contact@artoftruth.org
