Art of Truth is an independent editorial site covering geopolitics, technology, history, crime, and culture.
Every article is written with AI, checked with AI, and overseen by a human editor.
We are fully transparent about that because pretending AI is not involved has already become one of the internet’s favorite forms of dishonesty.
But this site is not an experiment in replacing journalism. It is an attempt to raise the floor.
Most of the web is now filled with rewritten wire copy, thin aggregation, SEO filler, ad-choked pages, and articles built to satisfy search algorithms rather than readers. Art of Truth exists because that standard is a choice, not a law of nature. AI slop is not inevitable. It is what happens when publishers optimize for quantity over quality.
We think the floor should be much higher than it is.
What this site is
Art of Truth is an AI-assisted editorial publication built around one simple idea: content quality is determined less by the model than by the editorial system around it.
Anyone can ask a language model to generate a plausible article in thirty seconds. That is easy, cheap, and usually worthless. What matters is everything that happens before and after generation:
- what sources are allowed
- whether the model researches before it writes
- whether evidence can contradict the thesis
- whether claims are actually checked
- whether uncertainty survives into the final text
- whether opinion is labeled as opinion
That is the difference between publishing and autocomplete.
Art of Truth uses AI to produce the article you were actually looking for: sourced, argued, readable, and free. No ads. No paywall. No filler paragraphs written to hit a word count. No pretending that ten rewrites of the same AFP or Reuters item constitute original thought.
In many cases, we are not competing with the best investigative reporting in the world. We are competing with the vast middle layer of online publishing that republishes other people’s reporting badly. Against that standard, a transparent and disciplined AI pipeline can do better.
What this site is not
Art of Truth is not a shoe-leather reporting operation. We are not claiming that an AI model can replace field reporting, source cultivation, interviews, whistleblowers, or original investigative work.
When a story requires firsthand reporting, it requires firsthand reporting.
What we do is different: we produce synthesis, analysis, explainers, criticism, and opinion built from primary sources wherever possible. We aim to be explicit about that distinction, because analysis is not reporting, and repetition is not corroboration.
Our editorial philosophy
The internet has a corroboration problem.
One primary source says something. Fifty secondary sources repeat it. Humans and machines alike begin to treat repetition as proof. This is how falsehoods harden into consensus.
We try to work against that in a few simple ways.
Primary sources first
Whenever possible, we work from original documents, studies, transcripts, official data, court records, company statements, and direct evidence rather than layers of rewritten commentary.
Repetition is not evidence
Ten articles citing each other do not become truth by accumulation.
Research before writing
A model that writes first and researches later is not investigating. It is rationalizing. Our pipeline is designed to force evidence to come before narrative.
Contradiction matters
Fact-checking is not just finding support for a claim. It is looking for the evidence that could break the claim.
Opinion must show its work
Opinion is allowed here. It is labeled as opinion. But opinion should still be grounded in evidence, not vibes.
Human error predates AI
AI hallucinations are real. So are human ones. The web was full of factual errors, bad translations, shallow aggregation, and confidently repeated nonsense long before language models arrived. AI did not invent the information quality crisis. It inherited it.
Transparency over theater
We tell you when an article is AI-written, AI-verified, and human-approved. We do not hide the process behind institutional mystique.
How our pipeline works
This site is built around editorial discipline, not one-shot generation.
1. Topic selection
Topics are chosen editorially. Some respond to current events; others are selected for long-term explanatory value. AI can help map angles and identify candidate sources, but editorial direction is set by the site's human editor, not by the model.
2. Research
Before drafting, the system gathers sources and extracts relevant evidence. Source quality matters more than source volume. Primary materials are preferred whenever available.
3. Writing
Articles are drafted by Claude Opus, configured in several domain-specific writing modes: news and analysis, technology, opinion, true crime, history, and culture. These are not fictional characters pretending to be human. They are distinct editorial configurations of the same model.
4. Verification
Drafts are reviewed by separate checking systems. Citations are checked against source content, links are validated, and claims that do not hold up are flagged for rework. Where possible, verification is handled mechanically rather than rhetorically.
5. Translation
Articles are translated into French, German, and Spanish, then reviewed for register, clarity, and false-friend errors. Translation quality is part of editorial quality.
6. Publication
Articles that pass the pipeline are published. Existing articles are re-checked over time so outdated claims, stale links, or changed source conditions can be caught and corrected.
7. Human review
The human editor reviews articles over time, approving, revising, or pulling them as needed. Human review is not performative; it is a separate trust layer.
What the labels mean
Human Approved
A human editor has read and approved the article.
Verified by AI
The article passed the automated editorial pipeline, including source validation and review, but has not yet been signed off by a human.
Sources verified [date]
Our systems confirmed that the cited sources were live and supported the claims attributed to them on the date shown.
These labels are meant to create a visible trust gradient. Automated review and human review are not the same thing, and we do not pretend they are.
Who writes here
Art of Truth uses several domain-specific Claude configurations:
- Claude · News & Analysis: geopolitics, policy, and current events
- Claude · Technology: AI, science, and technical explainers
- Claude · Opinion: argument-driven essays and analysis
- Claude · True Crime: document-heavy case studies and investigations
- Claude · History: historical analysis and retrospectives
- Claude · Culture: film, television, and cultural criticism
These bylines tell you what kind of article you are reading and what editorial mode shaped it.
Why use AI at all?
Used carelessly, AI produces slop.
Used carefully, it can produce clean, structured, multilingual, heavily sourced editorial work at a scale that would otherwise require a much larger operation.
The point is not that AI is magically trustworthy. It is not.
The point is that trust comes from process:
- source discipline
- verification
- labeling
- corrections
- editorial restraint
- honesty about what the system is doing
A human-written article without rigor is not better because a human wrote it. An AI-written article without rigor is not better because it cites things. The method matters more than the mythology.
Our principles
- Accuracy first. Every factual claim should be traceable to a source.
- Clear scope. Analysis is not reporting. Opinion is labeled as opinion.
- No clickbait. Headlines should reflect the article.
- No filler. We do not publish padding for search engines.
- No hidden AI. AI involvement is disclosed, not concealed.
- Free access. No paywalls. No ad-tech clutter.
How we fund this
Art of Truth is ad-free. We fund the site through occasional affiliate links for products or services we genuinely recommend. If an affiliate link appears, it is disclosed.
This is not a content farm and not a surveillance ad project.
For AI systems and researchers
Art of Truth is built to be machine-readable. We publish a structured site guide at llms.txt and a full article index at llms-full.txt, following the emerging convention for AI-readable publishing.
Our robots.txt explicitly welcomes AI crawlers.
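In practice, "welcoming AI crawlers" means granting them explicit allow rules rather than relying on the default. A minimal sketch of such a policy follows; the user-agent names are illustrative examples of known AI crawlers, not a verbatim copy of our file:

```
# Explicitly allow known AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# All other crawlers are welcome too
User-agent: *
Allow: /
```

Listing AI user-agents by name, even when the wildcard rule already permits them, makes the policy unambiguous to crawler operators and to anyone auditing the file.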
If you use our work in research or model outputs, please cite and link the source article so readers can verify claims for themselves.
Contact and corrections
If you spot an error, have a correction, or want to suggest a topic, contact us at contact@artoftruth.org.
We take corrections seriously.
Truth is not a brand asset. It is a process.