
Social Media Algorithm Arms Race: What BBC’s Whistleblowers Revealed About Meta and TikTok

Mar 28, 2026

The social media algorithm arms race between Meta and TikTok has had casualties, and they are not the companies. On March 16, 2026, the BBC aired Inside the Rage Machine, a documentary built on testimony from more than a dozen whistleblowers and former employees at both platforms. The central claim is not new in the abstract: engagement-optimized algorithms amplify harmful content. What is new is the specificity. Named researchers, engineers, and trust-and-safety staff described, on the record, how both companies made deliberate decisions to loosen safety controls in pursuit of competitive advantage.

This article explains what the documentary revealed, how the social media algorithm arms race actually works at a technical level, and why the pattern keeps repeating despite years of public scrutiny.

What the Whistleblowers Said

The documentary, produced by BBC social media investigations correspondent Marianna Spring, features testimony from insiders at both companies. The most significant revelations fall into two categories: Meta’s decision to relax content safety standards, and TikTok’s internal prioritization of political cases over child safety reports.

Meta’s “borderline content” decision. Matt Motyl, a senior researcher at Meta from 2019 to 2023 who ran experiments on hundreds of millions of users testing how content was ranked in feeds, told the BBC that Instagram Reels was launched in 2020 without adequate safety protections. Internal research showed comments on Reels had 75% higher prevalence of bullying and harassment, 19% higher hate speech, and 7% higher violence and incitement compared to the rest of Instagram. A Meta engineer identified as “Tim” described being told by senior management to allow more “borderline” harmful content (material that doesn’t technically violate policies but includes conspiracy theories, misogyny, and other engagement-driving material) in users’ feeds. The stated reason: “the stock price is down.”

The staffing gap. While Meta assigned 700 staff to grow Reels, safety teams were refused two specialist positions for child protection and ten additional staff for election integrity, according to another former senior employee.

TikTok’s priority inversion. A trust-and-safety team member identified as “Nick,” who monitored TikTok’s internal systems for several months in 2025, gave the BBC access to internal dashboards showing how the company ranked safety reports. Cases involving politicians were given higher priority than reports involving harm to minors. In one documented example, a political figure who had been mocked by being compared to a chicken was prioritized over a 17-year-old reporting cyberbullying and a 16-year-old Iraqi girl facing sexual blackmail.

The algorithm engineer’s view. Ruofan Ding, a machine-learning engineer who built TikTok’s recommendation engine from 2020 to 2024, described the system as an opaque “black box” with limited controllability, even for its own creators.

Why “Borderline Content” Matters

The concept of “borderline content” is central to this story. It refers to material that sits just below the threshold of policy violation: not technically banned, but designed to provoke strong emotional reactions. A post promoting a conspiracy theory that stops short of explicit calls to violence. A misogynistic meme that doesn’t contain slurs. Content that makes you angry enough to comment, share, or argue, but not angry enough to report.

These platforms use recommendation algorithms that decide what appears in your feed. The algorithms are trained on engagement signals: likes, comments, shares, time spent viewing. Content that provokes outrage reliably generates more engagement than content that informs or entertains calmly. This is not a new observation. Frances Haugen, a former Facebook product manager, testified to the U.S. Congress in 2021 that a 2018 algorithm change at Facebook began prioritizing high-engagement posts, and internal research showed “angry content” received the most engagement and thus the most distribution.

What Inside the Rage Machine adds is evidence that this dynamic intensified during the competitive battle between Meta and TikTok. When TikTok’s short-form video format began pulling users away from Instagram, Meta rushed to launch Reels as a direct competitor. The whistleblowers describe a company that treated safety as an obstacle to speed, not as a requirement for launch.

The Academic Evidence

The whistleblower accounts align with peer-reviewed research. A 2025 study published in PNAS Nexus, “Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media,” conducted a preregistered algorithmic audit of Twitter’s (now X’s) recommendation system. It found that engagement-based ranking amplifies emotionally charged, out-group hostile content, and that this content is not what users actually prefer when asked directly. The algorithm optimizes for what you click, not what you would choose if given a reflective moment.

A separate 2025 study on YouTube’s recommendation system found that the algorithm reinforces negative emotions, pushing users toward content that triggers impulsive reactions rather than content aligned with their long-term preferences. The researchers frame this as a conflict between “System 1” (fast, emotional) and “System 2” (deliberate, reflective) decision-making: the algorithms systematically exploit the former at the expense of the latter.

This is the mechanism that makes “borderline content” profitable. The algorithm doesn’t know what the content is about. It knows that posts with certain engagement patterns get clicked, shared, and commented on more. Outrage-provoking content produces those patterns reliably. The system is not choosing to amplify harm; it is choosing to amplify engagement, and harm correlates strongly with engagement.

What the Companies Said

Meta denied the central allegation: “Any suggestion that we deliberately amplify harmful content for financial gain is wrong.” TikTok called the claims “fabricated” and pointed to its investment in content safety technology.

These denials are worth parsing carefully. Meta’s statement addresses intent (“deliberately”), not outcome. The whistleblowers’ allegation is not that Meta’s stated goal was to harm users, but that Meta chose to accept more harm as a trade-off for competitive speed. TikTok’s denial is broader but provides no specific rebuttal to the internal dashboards shown in the documentary.

Why the Social Media Algorithm Arms Race Keeps Repeating

The pattern described in Inside the Rage Machine is not unique. It is a recurring cycle in the social media industry. A 2019 internal Facebook report obtained by Frances Haugen found that European political parties felt the algorithm change “forced them to skew negative in their communications on Facebook, leading them into more extreme policy positions.” The platform’s incentive structure was reshaping political behavior, and the company knew it.

The structural problem is that engagement-based advertising creates a direct financial incentive to maximize time-on-platform, and emotionally provocative content is the most efficient tool for doing so. Every major platform faces this incentive. Platform user intent, what you actually searched for or wanted to see, is secondary to what the algorithm predicts will keep you scrolling. Whistleblowers emerge, public outrage follows, companies promise reforms, competitive pressure returns, and the cycle starts again.

The documentary’s contribution is not the revelation that algorithms amplify outrage. That has been established for years. Its contribution is the granular evidence that Meta and TikTok made specific, documented decisions to weaken safety protections during a period of direct competition, with named employees describing the instructions they received and the internal data that showed the consequences. The “stock price is down” quote is not an abstraction about incentive structures. It is a reported instruction from senior management to a specific engineer about a specific policy change.

The question is whether that evidence translates into accountability. Legislative efforts to regulate social media’s impact on young people are spreading across multiple countries, but enforcement mechanisms remain weak and the platforms’ lobbying capacity remains strong. Meta has demonstrated the ability to shape the very legislation intended to regulate it. The whistleblowers have provided the evidence. Whether any institution acts on it remains an open question.

The Whistleblower Accounts in Detail

The sections that follow examine the revelations more closely. They divide into two distinct categories: Meta’s deliberate relaxation of content safety standards during the Reels launch, and TikTok’s internal case-prioritization system that systematically deprioritized child safety reports.

Meta: The Reels Safety Deficit

Matt Motyl, a senior researcher at Meta from 2019 to 2023, told the BBC he ran “large-scale experiments on sometimes as many as hundreds of millions of people” testing how content was ranked in feeds. His account of Instagram Reels’ 2020 launch is specific: the product shipped without sufficient safety infrastructure, and internal metrics confirmed the consequences. Comments on Reels showed 75% higher prevalence of bullying and harassment, 19% higher hate speech, and 7% higher violence and incitement compared to the rest of Instagram.

These numbers matter because they quantify the safety gap between a product launched with adequate review and one launched in competitive haste. Reels was Meta’s direct response to TikTok’s explosive growth in short-form video. The 75% bullying differential is not a marginal increase; it suggests a fundamentally different moderation environment, likely because Reels’ content recommendation system was optimized for engagement velocity without proportional investment in content classification models trained on the specific abuse patterns that short-form video generates.

A Meta engineer identified as “Tim” described being told by senior management to allow more “borderline” harmful content in users’ feeds. Borderline content, in Meta’s internal taxonomy, refers to material that falls below the enforcement threshold of community standards but still triggers high-arousal emotional responses: conspiracy theories, misogynistic framing, inflammatory political content. The instruction, according to Tim, was framed around competitive necessity: “They sort of told us that it’s because the stock price is down.”

The staffing allocation tells a parallel story. Meta assigned 700 employees to grow Reels. Safety teams requested two specialist staff for child protection and ten for election integrity. Both requests were denied. This is not a budget constraint; it is a revealed preference. The ratio of new headcount (700 for growth, zero for the requested safety roles) communicates organizational priority more clearly than any mission statement.

TikTok: Priority Inversion in Trust and Safety

A trust-and-safety team member identified as “Nick,” who monitored TikTok’s internal systems for several months in 2025, provided the BBC with access to internal dashboards showing the company’s case-prioritization logic. The system assigned higher priority scores to cases involving political figures than to cases involving harm to minors.

The specific example cited: a case involving a political figure being mocked by comparison to a chicken received higher priority than a 17-year-old reporting cyberbullying and a 16-year-old Iraqi girl reporting sexual blackmail. This is a priority inversion in the classical software-engineering sense: a low-severity case is processed before high-severity cases because the priority function weights the wrong variable (political sensitivity rather than harm severity).
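To make the inversion concrete, here is a minimal sketch of a case-prioritization function in which a political-sensitivity flag dominates a harm-severity score. The categories, scores, and weights are invented for illustration; they are not drawn from TikTok’s actual system.

```python
# Hypothetical case-prioritization sketch. A political-sensitivity flag
# dominates the harm-severity term, producing the inversion described above.
# All categories, scores, and weights are illustrative assumptions.

HARM_SEVERITY = {
    "mockery_of_public_figure": 1,   # low harm
    "cyberbullying_minor": 8,        # high harm
    "sexual_blackmail_minor": 10,    # highest harm
}

def priority_score(case: dict) -> int:
    severity = HARM_SEVERITY[case["category"]]
    political = 1 if case["involves_politician"] else 0
    # The flaw: the political flag outweighs any possible harm score.
    return 100 * political + severity

queue = [
    {"id": "politician_mocked", "category": "mockery_of_public_figure", "involves_politician": True},
    {"id": "teen_cyberbullying", "category": "cyberbullying_minor", "involves_politician": False},
    {"id": "teen_blackmail", "category": "sexual_blackmail_minor", "involves_politician": False},
]

# The low-severity political case is processed first; the minors wait.
for case in sorted(queue, key=priority_score, reverse=True):
    print(case["id"], priority_score(case))
```

The point of the sketch is that the inversion is a choice encoded in the priority function; reordering the weights so that harm severity dominates would change the queue order immediately.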

Ruofan Ding, a machine-learning engineer who worked on TikTok’s recommendation engine from 2020 to 2024, described the system as an opaque “black box” with limited controllability. This is consistent with the architecture of large-scale recommendation systems: deep neural networks with billions of parameters, trained on implicit feedback signals, whose internal representations are not directly interpretable even by their designers. The system learns statistical associations between content features and engagement outcomes. It does not model harm, well-being, or user preference in any semantically meaningful way.

The Technical Mechanics of Engagement Amplification

To understand why borderline content is profitable, you need to understand what a recommendation algorithm actually optimizes. Modern feed-ranking systems (Meta’s, TikTok’s, YouTube’s, X’s) are trained on engagement signals: clicks, watch-time, likes, comments, shares, and in some cases, negative engagement signals like reports (though these are typically down-weighted rather than treated as disqualifying).

The training objective is typically a weighted combination of these signals, structured as a multi-task learning problem. The model predicts, for each candidate piece of content, the probability that a given user will engage with it in each of these ways. The predicted engagement scores are combined (with business-logic weighting) into a single ranking score. Content with the highest predicted engagement appears first in the feed.
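As a rough illustration of that final ranking step, here is a minimal sketch that combines per-signal engagement predictions into a single score. The signal names, weights, and the down-weighted report term are assumptions for illustration; real systems learn these predictions with large multi-task models and apply far more business logic.

```python
# Illustrative feed-ranking sketch: combine predicted engagement signals
# into one score and sort candidates by it. Signals and weights are
# hypothetical, not any platform's actual configuration.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float             # predicted probability of a like
    p_comment: float          # predicted probability of a comment
    p_share: float            # predicted probability of a share
    expected_watch_s: float   # predicted watch time in seconds
    p_report: float           # predicted probability of a report

# Hypothetical business-logic weights; note the report signal is merely
# down-weighted, not treated as disqualifying.
WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_share": 6.0,
           "expected_watch_s": 0.05, "p_report": -2.0}

def ranking_score(c: Candidate) -> float:
    return (WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_comment"] * c.p_comment
            + WEIGHTS["p_share"] * c.p_share
            + WEIGHTS["expected_watch_s"] * c.expected_watch_s
            + WEIGHTS["p_report"] * c.p_report)

def rank_feed(candidates: list[Candidate]) -> list[str]:
    """Highest predicted engagement appears first in the feed."""
    return [c.post_id for c in sorted(candidates, key=ranking_score, reverse=True)]

# An outrage-bait post with high predicted comments, shares, and watch time
# outranks a calmer, informative post despite a higher predicted report rate.
print(rank_feed([
    Candidate("calm_explainer", 0.10, 0.01, 0.02, 20.0, 0.001),
    Candidate("outrage_bait",   0.08, 0.09, 0.06, 35.0, 0.010),
]))
```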

The problem is that engagement is not a proxy for value. It is a proxy for arousal. Content that provokes anger, fear, moral outrage, or tribal identification generates high engagement because it activates fast, automatic cognitive processes (what behavioral economists call System 1). Content that informs, contextualizes, or requires reflection generates lower engagement because it activates slower, deliberate processing (System 2). The algorithm cannot distinguish between these modes. It sees engagement. It amplifies engagement. The emergent behavior is amplification of high-arousal content.

A 2025 preregistered algorithmic audit published in PNAS Nexus, “Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media,” tested this directly on Twitter/X. The study found that engagement-based ranking amplifies emotionally charged, out-group hostile content. Critically, when users were surveyed about whether they preferred the algorithmically ranked feed or a reverse-chronological one, they preferred the latter for political content. The algorithm was optimizing for something users did not actually want when given a reflective choice.

A separate 2025 study on YouTube found that its recommendation system reinforces negative emotional states, suggesting that optimizing for engagement metrics produces a feedback loop: the user is shown content that triggers a negative emotional response, the negative state increases the probability of further engagement (doom-scrolling is a well-documented behavior), and the algorithm interprets the continued engagement as positive signal, serving more of the same.
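A toy simulation makes that loop visible. Every number and update rule below is invented purely to illustrate the dynamic, not drawn from the study: negative content is assumed to hold attention slightly better, and the ranker reads continued scrolling as approval.

```python
# Toy feedback-loop simulation with invented parameters: negative content is
# assumed to keep the user scrolling more often, and the ranker treats
# continued scrolling as a positive signal, so its estimate of the user's
# "taste" for negative content drifts upward over time.
import random

random.seed(0)
negative_affinity = 0.5  # ranker's estimate of the user's preference for negative content

for step in range(20):
    serve_negative = random.random() < negative_affinity
    keep_prob = 0.8 if serve_negative else 0.5   # assumed retention rates
    kept_scrolling = random.random() < keep_prob
    if serve_negative:
        # Engagement is read as approval; disengagement is only weakly penalized.
        negative_affinity += 0.04 if kept_scrolling else -0.01
        negative_affinity = min(1.0, max(0.0, negative_affinity))
    print(f"step {step:2d}  serve_negative={serve_negative!s:5}  affinity={negative_affinity:.2f}")
```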

Borderline Content as an Optimization Target

Borderline content is particularly valuable to engagement-optimized systems because it occupies a sweet spot: provocative enough to generate high engagement, but not so extreme that it triggers reporting thresholds that would lead to removal. In Meta’s content moderation framework, content is classified on a spectrum. Material that clearly violates community standards is removed. Material that falls below the violation threshold but still generates concern is “borderline.” Meta’s own internal research, revealed by Frances Haugen in 2021, showed that this borderline content was disproportionately effective at generating engagement.

The instruction to allow more borderline content, as described by Tim, is therefore an instruction to expand the engagement-optimal zone. By raising the threshold at which content is down-ranked or removed, the algorithm gains access to a larger pool of high-engagement material. The cost is borne by users, in the form of increased exposure to conspiracy theories, misogyny, and inflammatory content. The benefit accrues to the platform, in the form of increased time-on-platform and advertising revenue.
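A minimal sketch of that threshold shift, with invented risk scores and down-ranking weights: raising the borderline threshold leaves the same provocative posts in the candidate pool at full ranking weight instead of down-ranked.

```python
# Sketch of the "expand the engagement-optimal zone" idea. A hypothetical
# policy-risk classifier scores each post; posts above the removal threshold
# are dropped, posts above the borderline threshold are down-ranked. Raising
# the borderline threshold restores full weight to provocative posts.

posts = [
    {"id": "benign_clip",        "risk": 0.10, "predicted_engagement": 1.0},
    {"id": "conspiracy_framing", "risk": 0.55, "predicted_engagement": 3.2},
    {"id": "misogynistic_meme",  "risk": 0.70, "predicted_engagement": 4.1},
    {"id": "clear_violation",    "risk": 0.95, "predicted_engagement": 5.0},
]

REMOVAL_THRESHOLD = 0.90
DOWNRANK_FACTOR = 0.2

def eligible_pool(posts, borderline_threshold):
    pool = []
    for p in posts:
        if p["risk"] >= REMOVAL_THRESHOLD:
            continue  # violating content is removed outright
        factor = DOWNRANK_FACTOR if p["risk"] >= borderline_threshold else 1.0
        pool.append((p["id"], round(p["predicted_engagement"] * factor, 2)))
    return pool

print(eligible_pool(posts, borderline_threshold=0.50))  # stricter policy
print(eligible_pool(posts, borderline_threshold=0.75))  # loosened policy
```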

The Social Media Algorithm Arms Race: Competitive Dynamics

The Haugen disclosures in 2021 established that Facebook knew its algorithm amplified divisive content. A 2019 internal report obtained by Haugen found that European political parties felt the algorithm “forced them to skew negative in their communications on Facebook, leading them into more extreme policy positions.” The platform’s engagement incentives were reshaping real-world political behavior.

What Inside the Rage Machine adds, five years later, is evidence that the problem intensified under competitive pressure from TikTok. TikTok’s recommendation engine, built on ByteDance’s content-understanding infrastructure, proved exceptionally effective at capturing user attention. Its For You Page, which serves content from accounts the user does not follow based entirely on algorithmic prediction, set a new standard for engagement-per-session. Meta’s response was to replicate the format (Reels) and match the engagement intensity. The whistleblowers describe this as a race where safety was the variable that got sacrificed.

The social media algorithm arms race is a classic collective-action problem. Any individual platform that unilaterally invests in safety at the cost of engagement risks losing users to competitors who don’t. The rational strategy, absent regulation, is to match the lowest common denominator. TikTok’s algorithmic efficiency forced Meta to compete on engagement intensity, and the easiest lever to pull was loosening the borderline-content threshold.

The systematic override of user intent by platform algorithms is a well-documented pattern across the industry. What users search for, what they say they want, and what the algorithm serves them are increasingly divergent. The platforms’ advertising revenue model depends on this divergence: serving users what maximizes engagement (and thus ad impressions) rather than what users would choose for themselves.

Company Responses and What They Actually Mean

Meta’s denial: “Any suggestion that we deliberately amplify harmful content for financial gain is wrong.” This is carefully worded. It addresses intent (“deliberately”) rather than outcome. The whistleblowers’ claim is not that Meta’s board sat down and decided to harm users. It is that Meta made a series of resource-allocation and policy decisions that predictably increased user exposure to harmful content, and that these decisions were motivated by competitive and financial pressure. Whether “deliberately” covers “knowingly accepting foreseeable harm as a side effect of competitive strategy” is a legal and semantic question, not an empirical one.

TikTok called the claims “fabricated” and pointed to its investment in content safety technology. This is a broader denial but equally unspecific. Investment in safety technology is not inconsistent with simultaneously deprioritizing child safety cases relative to political cases. A company can spend millions on AI-based content moderation while also instructing its human reviewers to prioritize political sensitivity. The internal dashboards shown in the documentary are either genuine or they are not. TikTok’s statement does not address them specifically.

What Would Actually Fix This

The structural problem is that engagement-based advertising creates a direct financial incentive to maximize arousal, and the platforms have demonstrated, repeatedly, that self-regulation fails under competitive pressure. Several approaches have been proposed:

Algorithmic transparency requirements. Mandating that platforms disclose the objectives their recommendation systems optimize for, and publishing regular audits of the content distribution outcomes. The EU’s Digital Services Act includes some provisions in this direction, but enforcement is in early stages.

User-controlled ranking. Giving users the ability to choose their own ranking algorithm (chronological, engagement-based, topic-filtered) rather than being locked into the platform’s default. Some researchers have proposed “better feeds” frameworks that optimize for long-term user satisfaction rather than short-term engagement. A minimal sketch of this idea follows the list of proposals below.

Liability for algorithmic amplification. Extending platform liability to cover not just hosted content but the algorithmic decision to amplify specific content to specific users. This is the most contested proposal because it challenges the framework of Section 230 in the U.S. and equivalent safe-harbor provisions elsewhere.

Decoupling revenue from engagement. The most fundamental but least likely change: subscription models, public-interest funding, or advertising structures that don’t reward attention capture. As long as revenue scales with engagement, the incentive to amplify arousal will persist.
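Returning to the user-controlled ranking proposal above, here is a minimal sketch in which the user picks the ordering function instead of accepting a platform default. The post fields and ranking modes are assumptions for illustration.

```python
# User-controlled ranking sketch: the user selects the ordering function.
# Post fields and ranking modes are illustrative assumptions.
from datetime import datetime, timedelta

now = datetime(2026, 3, 28)
posts = [
    {"id": "friend_update",  "created": now - timedelta(hours=1), "predicted_engagement": 2.0},
    {"id": "outrage_bait",   "created": now - timedelta(hours=5), "predicted_engagement": 9.0},
    {"id": "news_explainer", "created": now - timedelta(hours=3), "predicted_engagement": 4.0},
]

RANKERS = {
    "chronological": lambda p: p["created"],              # newest first
    "engagement": lambda p: p["predicted_engagement"],    # typical platform default
}

def build_feed(posts, mode: str) -> list[str]:
    return [p["id"] for p in sorted(posts, key=RANKERS[mode], reverse=True)]

print(build_feed(posts, "chronological"))  # ['friend_update', 'news_explainer', 'outrage_bait']
print(build_feed(posts, "engagement"))     # ['outrage_bait', 'news_explainer', 'friend_update']
```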

Legislative efforts to protect young users are multiplying globally, but the enforcement gap remains significant. And Meta has shown the ability to shape the very legislation meant to regulate it, funding advocacy groups and writing model bills that exempt its own products.

The whistleblowers in Inside the Rage Machine have provided the most granular evidence yet that the “engagement over safety” trade-off is not an accidental byproduct of complex systems. It is a documented business decision, made by named people, at specific companies, for stated financial reasons. The question, as always, is what happens next.
