
Social Media Bans for Teenagers Are Spreading. Can They Actually Work?

Australia, Indonesia, France, and the UK are restricting teenagers' access to social media. The concern is real, but the laws face enforcement problems that no country has solved yet.


A new wave of laws restricting teenagers’ access to social media is spreading across the globe. Indonesia announced in early March 2026 that it would ban social media and online platforms for users under 16. Australia passed a law in late 2024 prohibiting social media for under-16s, which took effect in December 2025, making it the first country to implement an age-based ban of this scope. The United Kingdom’s Online Safety Act mandates age verification across a broad range of online services. And under a 2023 law, France requires parental consent for social media users under 15.

Each of these laws is framed as a response to a genuine and documented concern: the accumulating evidence that heavy social media use is associated with worse mental health outcomes for adolescents, particularly girls. The political momentum is real. But the technical question of whether these bans can actually work has received less scrutiny than the political question of whether they should exist.

What the Laws Actually Require

Age-based social media bans work, in principle, by requiring platforms to verify user ages and deny access to those below the threshold. This sounds straightforward. In practice, age verification is a genuinely hard problem.

The mechanism matters. The simplest approach, asking users their date of birth, does not verify anything; it just adds a field to the signup form that younger users can fill in with false information. More robust approaches involve document checks (upload a government-issued ID), credit card verification (minors typically do not have credit cards), or biometric age estimation (facial analysis to estimate a user’s age from a photograph).
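To make the gap concrete, here is a minimal sketch (in Python, with an illustrative threshold) of a self-declared date-of-birth gate. The age arithmetic is correct, but it validates only the claim the user typed, which is why self-declaration is not verification:

```python
from datetime import date

MINIMUM_AGE = 16  # illustrative threshold, matching the Australian ban

def is_old_enough(date_of_birth: date, today: date | None = None) -> bool:
    """Compute age from a self-declared date of birth."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return age >= MINIMUM_AGE

# The arithmetic is sound, but the input is whatever the user typed.
# A 14-year-old who enters date(2000, 1, 1) sails through:
print(is_old_enough(date(2000, 1, 1)))  # True, regardless of the user's real age
```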

Each of these approaches creates its own problems. Document verification requires platforms to collect and store sensitive personal data on hundreds of millions of users, a significant privacy and security risk that extends to adult users who were never the target of the policy. Biometric age estimation is imprecise, affected by lighting and image quality, and raises its own concerns about the collection of facial data by private companies. Credit card checks exclude adults who do not hold cards.

None of these approaches is robust against circumvention. A teenager determined to access a banned platform can use a VPN to appear to connect from a different jurisdiction, borrow an adult’s credentials, or simply use an alternative platform that has not implemented verification. Australia’s ban took effect in December 2025; within weeks, news coverage was documenting the VPN workarounds being circulated among Australian teenagers.

The Age Verification Enforcement Gap

Legislation places obligations on platforms, not on individual users. The theory of change is: regulate the platforms, and access becomes harder. This works, to a point. Large platforms with substantial regulatory exposure (Meta, TikTok, Snapchat, YouTube) will implement whatever verification systems regulators require, because the alternative is fines or market exclusion. Smaller platforms, or those hosted in jurisdictions without enforcement agreements, face much weaker pressure.

The enforcement gap is structural. A teenager who cannot access Instagram in Australia can access alternative platforms hosted in countries that do not enforce the ban. They can use VPN services to appear to be connecting from a non-regulated location. They can create accounts under false ages before verification systems are implemented. The ban may succeed in removing the most casual teenage users from the largest platforms, which is not nothing, while being ineffective against the most determined ones.

This is not an argument against the policy. It is an accurate description of the technical terrain that determines what the policy can and cannot achieve.

The Privacy-Safety Trade-off

The deeper tension in age verification laws is between the harm they are trying to prevent and the harm created by the verification mechanism itself.

The policy goal is protecting minors from exposure to harmful content and the documented mental health effects of algorithmic social media. This is a legitimate goal. Achieving it through robust age verification requires building identity infrastructure: databases of user ages linked to real identities. That infrastructure represents a significant expansion of the personal data held by commercial platforms and, in some verification models, by third-party identity services.
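One model regulators have discussed is minimal disclosure through a third-party verifier: the platform never sees the user’s identity, only a signed over-the-threshold claim. The sketch below is a hypothetical illustration, using an HMAC with a shared secret as a self-contained stand-in for the public-key signatures a real deployment would use:

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical minimal-disclosure flow: a third-party identity service checks
# the user's documents and issues a signed boolean claim; the platform learns
# "over 16" but never the name, birthdate, or document behind it.
VERIFIER_KEY = secrets.token_bytes(32)  # held by the identity service

def issue_token(over_16: bool) -> dict:
    """Identity service side: sign the single boolean the platform needs."""
    claim = json.dumps({"over_16": over_16}).encode()
    tag = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def platform_accepts(token: dict) -> bool:
    """Platform side: check the signature, read the boolean, store nothing else."""
    expected = hmac.new(VERIFIER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return json.loads(token["claim"])["over_16"]

print(platform_accepts(issue_token(True)))   # True
print(platform_accepts(issue_token(False)))  # False
```

Even in this model, the identity-linked records do not disappear; they concentrate at the verifier, which becomes the breach, subpoena, and surveillance target described below.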

That data can be breached. It can be subpoenaed. In jurisdictions with weak rule of law, it can be accessed by governments. A database connecting real identities to social media use has surveillance value that extends far beyond its original purpose.

Privacy advocates arguing against age verification are not arguing that teenagers should have unrestricted access to harmful content. They are arguing that the verification systems required to enforce such bans create risks that fall on the entire user population, not just the minors the laws are designed to protect. The UK’s Information Commissioner’s Office has noted this tension explicitly; the trade-off has no clean resolution.

What Actually Works

The research on what actually reduces social media harm to teenagers is less settled than the political consensus suggests. Heavy algorithmic amplification, the promotion of content beyond organic reach by recommendation engines optimized for engagement rather than user wellbeing, appears to be a more significant driver of harm than mere access. A teenager who uses social media to talk to friends is not the same as a teenager spending six hours a day in an algorithmically curated feed of anxiety-inducing content.

Interventions targeting algorithmic design rather than age-based access would address the mechanism more directly. Several jurisdictions have moved in this direction: the EU’s Digital Services Act requires large platforms to offer non-personalized, non-algorithmic feed options, and to give users tools to understand and adjust how recommendation systems work. The UK Online Safety Act includes requirements about algorithmic safety alongside its age verification provisions.
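As a sketch of the design difference these rules target (field names and the scoring signal are invented for illustration), the same set of posts can be ordered by a predicted-engagement score or by nothing more than the clock:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    posted_at: datetime
    predicted_engagement: float  # platform's engagement prediction for this user

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Personalized ranking: ordered by what the model predicts will hold attention.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Non-personalized option: no profiling signal enters the ordering.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

posts = [
    Post("alice", datetime(2026, 3, 1, 9, 0), predicted_engagement=0.91),
    Post("bob", datetime(2026, 3, 1, 12, 0), predicted_engagement=0.12),
]
print([p.author for p in engagement_feed(posts)])     # ['alice', 'bob']
print([p.author for p in chronological_feed(posts)])  # ['bob', 'alice']
```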

Age verification bans are politically legible and administratively simple to mandate. They are also technically porous. The gap between what these laws promise and what they can deliver depends heavily on implementation details that are still being worked out in each jurisdiction. Indonesia’s ban, as of its early March 2026 announcement, had not specified a verification mechanism, and the mechanism is where the real policy question lies.

Note: this is a fast-moving policy area. Specific verification requirements and enforcement mechanisms may change after this article’s publication date.


Sources

  • Engadget, “Indonesia outlines plan to limit under-16s’ access to social media,” March 6, 2026.
  • Australian eSafety Commissioner, “Online Safety Amendment (Social Media Minimum Age) Act 2024.” esafety.gov.au
  • UK Online Safety Act 2023. Full text: legislation.gov.uk
  • European Commission, Digital Services Act. digital-strategy.ec.europa.eu
  • Haidt, Jonathan. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. Penguin Press, 2024. (The central academic case for the harm argument.)
  • UK Information Commissioner’s Office, “Age Appropriate Design Code” (Children’s Code). ico.org.uk