Opinion

The Psychology of Digital Radicalization: How Recommendation Engines Mimic Cult Recruitment Patterns

Apr 14, 2026

The tactics that cults use to recruit and control members have been studied for decades. Isolation from outside perspectives. Controlling what information reaches someone. Creating emotional dependency. Suppressing critical thinking. These techniques work because they exploit fundamental vulnerabilities in human psychology. What researchers are now recognizing is that recommendation algorithms on social media platforms can produce strikingly similar effects, even without any human recruiter pulling the strings. The study of digital radicalization psychology reveals an uncomfortable truth: the same mechanisms that make cults effective are baked into the design of your social media feed.

This is not to say that Silicon Valley engineers sat down and studied cult manuals. The parallel emerged from a simpler imperative: maximize engagement. Engagement optimization means prioritizing whatever generates the most clicks, likes, shares, and comments, regardless of quality or public interest. Pursued relentlessly through machine learning, that objective has independently converged on techniques that psychologists have long identified as tools of undue influence. Understanding digital radicalization psychology requires grasping this convergence between ancient manipulation tactics and modern algorithmic design.

The BITE Model Meets the Algorithm

In the 1980s, cult researcher Steven Hassan developed the BITE Model to describe how destructive groups maintain control over members. BITE stands for Behavior, Information, Thought, and Emotional control[s]. The model draws on research by psychiatrists Robert Jay Lifton and Louis Jolyon West, who studied brainwashing techniques used in totalitarian regimes[s]. Each component targets a different aspect of how humans process reality and make decisions.

Recommendation algorithms touch all four components. They shape behavior by determining what content appears and when, creating patterns of compulsive checking. They control information by filtering what reaches users and what gets suppressed. They influence thought by reinforcing certain worldviews through repetition. They manipulate emotion by prioritizing content that triggers strong reactions. The platforms did not design these systems to radicalize anyone. They designed them to keep people scrolling. The radicalization is a side effect.

Information Control: The Filter Bubble as Isolation Chamber

Cults restrict access to outside information. Members are discouraged from reading critical material or speaking with former members[s]. The goal is to ensure that the only information reaching someone reinforces the group’s worldview.

Recommendation algorithms achieve something functionally similar through personalization. When YouTube’s algorithm learns that a user responds to certain content, it serves more of the same. Research from the Internet Policy Review found that YouTube amplifies extreme and fringe content after users interact with far-right materials[s]. Users are not locked in a room and handed propaganda. They are handed a feed that increasingly resembles one.

The effect compounds over time. Social media platforms “push users into increasingly narrow ideological ranges of content in what we might call evidence of a mild ideological echo chamber,” according to research from New York University and Vanderbilt[s]. The narrowing is gradual enough that users often do not notice it happening.
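
To make that narrowing mechanism concrete, here is a minimal toy simulation, not any platform's actual system: a feed samples topics in proportion to an interest estimate, the user clicks more often on topics near their existing leaning, and every click reinforces the estimate. All numbers are hypothetical.

```python
import random

# Toy model of preference-feedback narrowing (all numbers hypothetical).
# Topics sit on a 0-9 spectrum; the feed samples in proportion to its
# interest estimate, and every click nudges that estimate further toward
# whatever the user already engages with.

TOPICS = list(range(10))
interest = {t: 1.0 for t in TOPICS}    # platform's estimate, initially uniform
USER_LEANING = 7                       # the user's fixed true preference

def serve() -> int:
    """Sample one topic, weighted by the current interest estimate."""
    return random.choices(TOPICS, weights=[interest[t] for t in TOPICS])[0]

def clicks(topic: int) -> bool:
    """The user is likelier to click topics close to their leaning."""
    return random.random() < max(0.0, 1.0 - 0.3 * abs(topic - USER_LEANING))

random.seed(0)
for _ in range(5000):
    topic = serve()
    if clicks(topic):
        interest[topic] *= 1.05        # reinforce whatever got engagement

total = sum(interest.values())
top3 = sorted(TOPICS, key=lambda t: -interest[t])[:3]
print(f"share of feed held by top 3 topics: {sum(interest[t] for t in top3) / total:.0%}")
```

No single step in this loop is extreme; the concentration emerges from thousands of small reinforcements, which is exactly why it is hard to notice from inside the feed.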

Emotional Manipulation: The Dopamine Trap

Cults begin with “love bombing,” overwhelming a recruit with affection and attention, then shift to guilt, fear, and shame to maintain control[s]. The emotional rollercoaster creates dependency and makes members fear leaving.

Social media platforms create their own emotional dependency through variable reward schedules, the same mechanism that makes slot machines addictive. Users become “victims of an unrelenting dopamine cycle created in a loop of desire induced by endless social media feeds, seeking and anticipating rewards in the way of photo tagging, likes, and comments,” according to research published in Cureus[s]. The unpredictability of when the next dopamine hit will come keeps users engaged far longer than any consistent reward would.
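
A small simulation can illustrate why unpredictable rewards are so much harder to walk away from than predictable ones. This is a behavioral-psychology toy model, not platform code; the schedule parameters and the quitting rule are hypothetical.

```python
import random

# Toy model of extinction resistance (all numbers hypothetical). Rewards
# arrive every 5th check (fixed) or with probability 1/5 per check
# (variable) during training, then stop entirely. The user gives up once
# the current drought exceeds the longest drought they learned to expect.

def checks_after_rewards_stop(variable: bool, training_checks: int = 500) -> int:
    interval, since, longest_drought = 5, 0, 0
    for _ in range(training_checks):
        since += 1
        longest_drought = max(longest_drought, since)
        rewarded = (random.random() < 1 / interval) if variable else (since == interval)
        if rewarded:
            since = 0
    post = 0                                # extinction phase: rewards never come
    while since <= longest_drought:
        since += 1
        post += 1
    return post

random.seed(0)
trials = 200
fixed_avg = sum(checks_after_rewards_stop(False) for _ in range(trials)) / trials
variable_avg = sum(checks_after_rewards_stop(True) for _ in range(trials)) / trials
print(f"checks after rewards stop -> fixed: {fixed_avg:.0f}, variable: {variable_avg:.0f}")
```

On the fixed schedule, a short drought is an unambiguous signal that rewards have ended; on the variable schedule, long droughts were always normal, so the checking persists far longer.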

Algorithms preferentially surface content that triggers strong emotional responses. “Algorithms usually promote emotionally provocative or controversial material by focusing on metrics such as likes and shares, creating feedback loops that amplify polarising narratives,” notes analysis from the Observer Research Foundation[s]. Fear and outrage drive more engagement than calm analysis. The algorithm learns this and optimizes accordingly.
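
The learning step is simple enough to sketch. In this hypothetical setup, a logistic model is trained purely on clicks; because the simulated users click on outrage far more than on substance, the model ends up weighting the outrage feature most heavily, with no one ever instructing it to.

```python
import math
import random

# Hypothetical click model: outrage drives clicks far more than substance.
def user_clicks(outrage: float, substance: float) -> int:
    return int(random.random() < 0.1 + 0.7 * outrage + 0.1 * substance)

random.seed(1)
w_outrage, w_substance = 0.0, 0.0      # ranker weights, learned from clicks alone
lr = 0.05

for _ in range(20_000):
    outrage, substance = random.random(), random.random()
    score = w_outrage * outrage + w_substance * substance
    pred = 1 / (1 + math.exp(-score))          # predicted click probability
    error = user_clicks(outrage, substance) - pred
    w_outrage += lr * error * outrage          # gradient step on log loss
    w_substance += lr * error * substance

print(f"learned weights -> outrage: {w_outrage:.2f}, substance: {w_substance:.2f}")
```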

The Speed of Modern Radicalization

Traditional cult recruitment took months or years. A recruiter had to build trust, introduce ideas gradually, and slowly isolate the target from outside influences. Digital radicalization psychology operates on a compressed timeline.

“A radicalization process that once unfolded over months or years now typically takes days or even hours, largely due to the prevalence of extremist short-form online propaganda,” reports the Soufan Center[s]. The same report notes that social media platforms “enable violent extremists to recruit youths more expediently than in-person; algorithms channel those youths to more emotionally charged content.”

This acceleration happens because algorithms can process behavior and adjust recommendations in real time, far faster than any human recruiter could respond. The system notices what holds attention, what gets shared, what provokes a reaction, and immediately serves more of it.
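
The speed difference is easy to see in code. Here is a sketch of the underlying idea, a bandit-style recommender that re-estimates what holds attention after every single interaction; the content categories and retention rates are invented.

```python
import random

# Sketch of per-interaction adaptation (categories and rates are invented):
# an epsilon-greedy bandit re-estimates what holds this user's attention
# after every view, so the feed shifts within a single session.

random.seed(2)
ARMS = ["calm analysis", "heated debate", "outrage clip"]
HOLD_RATE = {"calm analysis": 0.2, "heated debate": 0.5, "outrage clip": 0.8}
estimate = {arm: 0.0 for arm in ARMS}
shown = {arm: 0 for arm in ARMS}

for _ in range(200):
    if random.random() < 0.1:                         # explore occasionally
        arm = random.choice(ARMS)
    else:                                             # otherwise exploit best guess
        arm = max(ARMS, key=lambda a: estimate[a])
    held = random.random() < HOLD_RATE[arm]           # did it hold attention?
    shown[arm] += 1
    estimate[arm] += (held - estimate[arm]) / shown[arm]   # update immediately

print(shown)   # impressions typically concentrate on the highest-retention arm
```

A human recruiter adjusts their pitch over weeks of conversations; this loop adjusts after every view, two hundred times in a single session.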

The Counterargument: Most Users Are Fine

Critics of algorithmic radicalization theory point to research showing the effect is limited. A study from New York University and Vanderbilt found that only 3% of participants experienced a genuine “rabbit hole” where recommendations led to progressively more extreme content[s]. Most users receive recommendations that reflect their existing preferences rather than pushing them toward extremes.

Research from Nottingham Trent University concludes that “while online radicalization does occur, with and without reference to offline processes, the resulting threat is not overly high”[s]. The majority of extremist content consumption appears to be driven by users actively seeking it out rather than being pushed there by algorithms.

This counterargument deserves serious consideration. But it misses an important point about scale. Three percent of YouTube’s two billion monthly users is still sixty million people. And the research specifically notes that the assessment “refers to the present only and is unlikely to hold for the future, given the general growth and acceleration of online activity among terrorist actors.”

What Should Change

The parallel between digital radicalization psychology and cult recruitment does not mean platforms are deliberately creating extremists. It means that optimizing purely for engagement, without accounting for psychological harm, produces systems that exploit the same vulnerabilities that cult leaders have always exploited. The difference is scale and automation.

Understanding this parallel points toward solutions. Transparency about how algorithms select content would allow researchers to identify harmful patterns before they cause damage. Designing systems that optimize for user wellbeing rather than pure engagement could reduce the most harmful amplification effects. Teaching media literacy that includes algorithmic awareness would help users recognize when their information environment is being narrowed.
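
One way to picture the second suggestion: keep the same engagement prediction, but let a predicted-harm signal subtract from the ranking score. The fields, weights, and survey signal below are hypothetical, a sketch of the idea rather than any deployed system.

```python
from dataclasses import dataclass

# Hypothetical item fields: engagement from a click model, regret from
# "was this worth your time?" style surveys. Not any platform's actual API.

@dataclass
class Item:
    title: str
    predicted_engagement: float
    predicted_regret: float

def engagement_score(item: Item) -> float:
    return item.predicted_engagement

def wellbeing_score(item: Item, regret_weight: float = 1.5) -> float:
    # Same engagement signal, but predicted regret subtracts from the score.
    return item.predicted_engagement - regret_weight * item.predicted_regret

feed = [
    Item("calm explainer", predicted_engagement=0.30, predicted_regret=0.05),
    Item("outrage clip", predicted_engagement=0.60, predicted_regret=0.40),
]
print(sorted(feed, key=engagement_score, reverse=True)[0].title)   # outrage clip
print(sorted(feed, key=wellbeing_score, reverse=True)[0].title)    # calm explainer
```

The design question is not whether such a penalty is technically possible, it plainly is, but whether platforms will accept the engagement cost of applying it.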

The platforms themselves have begun acknowledging the problem. Issue One’s Council for Responsible Social Media has convened experts in national security and technology who concluded that “social media platforms are designed to maximize our attention, suck us in, and keep us hooked” by “keeping the public in the dark about how algorithms elevate certain content, pushing people deeper and deeper into echo chambers”[s].

Digital radicalization psychology is not a conspiracy. It is an emergent property of systems designed to capture and hold attention at any cost. The fact that these systems independently discovered techniques that cults have used for centuries should give us pause. It should also give us direction: the psychological research on cult recovery and prevention may offer insights for designing healthier digital environments.

The BITE Framework in Detail

Hassan’s BITE Model of Authoritarian Control provides a framework for understanding how high-control groups maintain influence over members through Behavior, Information, Thought, and Emotional manipulation[s]. Drawing on Lifton’s and West’s studies of brainwashing in totalitarian regimes[s], the model identifies specific mechanisms of psychological manipulation. Examining digital radicalization psychology through this lens reveals structural similarities between cult recruitment patterns and the behavioral effects of recommendation algorithms, even though the two operate through entirely different mechanisms.

The thesis here is not that platform engineers studied cult techniques and implemented them deliberately. It is that optimization for engagement metrics has independently converged on psychological manipulation strategies that exploit the same cognitive vulnerabilities. Understanding digital radicalization psychology requires examining this convergence at the mechanistic level.

Mapping BITE Components to Algorithmic Functions

Behavior Control: The BITE model identifies behavior control through regulation of physical reality, rigid rules, and reward/punishment systems[s]. Recommendation algorithms achieve behavioral regulation through variable ratio reinforcement schedules, in which a reward arrives after an unpredictable number of actions, the mechanism behind slot machines. Research published in Cureus documents that users become trapped in “an unrelenting dopamine cycle created in a loop of desire induced by endless social media feeds, seeking and anticipating rewards”[s]. The neurophysiological impact includes altered dopamine pathways, “fostering dependency analogous to substance addiction.”

Information Control: Cults restrict access to outside information and encourage members to distrust critics[s]. Algorithmic filtering creates functional information restriction through personalization. An empirical analysis in the Internet Policy Review examined recommendation systems on YouTube, Reddit, and Gab, finding that “YouTube does amplify extreme and fringe content” after interaction with far-right materials[s]. The effect creates what researchers call “algorithmic radicalisation, which shows how social media platforms coax users into ideological rabbit holes”[s].
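
Audit studies of this kind typically follow a sock-puppet design: condition a fresh account on certain content, then measure how the recommendation mix shifts relative to an unconditioned account. A structural sketch, with invented labels standing in for a human coder's classifications and for the platform-specific scraping code:

```python
# Structural sketch of a sock-puppet amplification audit. The recommendation
# lists below are invented stand-ins for what platform scraping would
# collect; "fringe" marks items a human coder classified as fringe content.

def fringe_share(recommendations: list[str]) -> float:
    """Fraction of a recommendation list coded as fringe."""
    return sum(label == "fringe" for label in recommendations) / len(recommendations)

# Condition A: fresh account with no interaction history.
fresh_recs = ["news", "sports", "news", "fringe", "music"] * 20
# Condition B: account that previously interacted with far-right material.
primed_recs = ["fringe", "news", "fringe", "fringe", "music"] * 20

print(f"fringe share, fresh account:  {fringe_share(fresh_recs):.0%}")
print(f"fringe share, primed account: {fringe_share(primed_recs):.0%}")
# The gap between the two exposure rates is the amplification estimate.
```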

Thought Control: Cults require members to internalize doctrine as truth and adopt black-and-white thinking[s]. Recommendation algorithms do not mandate belief, but they shape the information environment in ways that reinforce existing cognitive biases. Research from NYU and Vanderbilt found that “YouTube’s recommendation algorithm does push users into increasingly narrow ideological ranges of content in what we might call evidence of a (very) mild ideological echo chamber”[s].

Emotional Control: The BITE model documents how cults manipulate emotions through “extremes of emotional highs and lows” and “phobia indoctrination,” the systematic installation of irrational fears to prevent questioning or leaving the group[s]. Algorithms optimize for emotional engagement because it drives metrics. “Algorithms usually promote emotionally provocative or controversial material by focusing on metrics such as likes and shares, creating feedback loops that amplify polarising narratives”[s].

Temporal Compression in Digital Radicalization Psychology

Traditional radicalization through extremist groups follows the psychosocial model of recruitment and violent mobilization, which identifies phases of emotional radicalization, doctrinal radicalization, and violent disinhibition[s]. Research on the 17-A cell that carried out the Barcelona attacks found that “some techniques appear to be like those used by totalitarian cults and are aimed at eliminating the personal identity of the target by reinforcing a new social identity with the extremist group.”

Digital environments compress this timeline dramatically. The Soufan Center reports that “a radicalization process that once unfolded over months or years now typically takes days or even hours”[s]. This acceleration occurs because algorithms process behavioral signals and adjust recommendations in real time. “Social media platforms like TikTok, X, and Facebook enable violent extremists to recruit youths more expediently than in-person; algorithms channel those youths to more emotionally charged content.”

Methodological Limitations and Countervailing Evidence

The algorithmic radicalization hypothesis has faced significant empirical challenges. Research examining actual user behavior, rather than algorithmic outputs in isolation, finds more limited effects. Analysis of 527 YouTube users with real browsing histories found that while the algorithm “does push users into increasingly narrow ideological ranges,” only a small proportion experience genuine rabbit holes leading to extremist content[s].

Researchers at Nottingham Trent University, reviewing quantitative evidence on online radicalization in terrorism contexts, concluded that “while online radicalization does occur, with and without reference to offline processes, the resulting threat is not overly high”[s]. Importantly, they note this assessment “refers to the present only and is unlikely to hold for the future, given the general growth and acceleration of online activity among terrorist actors.”

The distinction between user-driven and algorithm-driven content consumption is methodologically critical. Much extremist content consumption may reflect demand rather than supply, with users actively seeking material that algorithms then amplify. This does not eliminate the role of digital radicalization psychology, but it complicates causal attribution.
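
In practice, researchers approach this decomposition by looking at how viewers arrive at extremist items. A sketch over a hypothetical view log (the schema and referrer labels are invented for illustration):

```python
from collections import Counter

# Hypothetical view log: each record notes how the user reached the item.
views = [
    {"item": "clip_1", "referrer": "search"},          # user sought it out
    {"item": "clip_2", "referrer": "recommendation"},  # algorithm surfaced it
    {"item": "clip_3", "referrer": "search"},
    {"item": "clip_4", "referrer": "external_link"},
    {"item": "clip_5", "referrer": "recommendation"},
]

shares = Counter(v["referrer"] for v in views)
for referrer, count in sorted(shares.items()):
    print(f"{referrer}: {count / len(views):.0%} of views")
# Only the "recommendation" share is even a candidate for algorithmic
# supply; the "search" share reflects user demand.
```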

Policy Implications and Research Directions

Expert panels convened by Issue One’s Council for Responsible Social Media have identified the core problem as opacity: “Social media platforms are designed to maximize our attention, suck us in, and keep us hooked” by “keeping the public in the dark about how algorithms elevate certain content”[s]. This opacity prevents independent verification of radicalization effects and limits the ability to design interventions.

The structural parallel between digital radicalization psychology and cult manipulation suggests that psychological research on undue influence may inform platform design. The BITE model was developed specifically to help individuals recognize when they are subject to manipulative influence[s]. Algorithmic literacy education could incorporate similar frameworks to help users identify when their information environment is being artificially narrowed.
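
As a concrete literacy exercise, a user (or a hypothetical browser extension built for the purpose) could track the topic mix of their feed and watch its Shannon entropy over time; a steady fall would signal narrowing. A minimal sketch with invented topic labels:

```python
import math
from collections import Counter

# Shannon entropy of a feed's topic mix as a rough "narrowing" gauge.
# Topic labels are invented; in practice they would come from the user's
# own tagging or an external classifier.

def feed_entropy(topics: list[str]) -> float:
    counts = Counter(topics)
    n = len(topics)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

broad_feed = ["politics", "science", "sports", "music", "cooking"] * 10
narrow_feed = ["politics"] * 45 + ["sports"] * 5

print(f"broad feed entropy:  {feed_entropy(broad_feed):.2f} bits")
print(f"narrow feed entropy: {feed_entropy(narrow_feed):.2f} bits")
# A falling entropy over weeks suggests the information diet is narrowing.
```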

The convergence between engagement optimization and psychological manipulation is not a conspiracy but an emergent property of systems designed without adequate consideration of cognitive vulnerabilities. The same psychological research that illuminates cult dynamics may prove essential for designing recommendation systems that inform rather than manipulate.


Sources