Australia’s internet regulator has criticised the world’s biggest social platforms for failing to adequately implement the country’s prohibition on under-16s accessing their services, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to prevent new underage accounts being created. In its initial compliance assessment since the prohibition came into force, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Significant Breaches Exposed in First Compliance Review
Australia’s eSafety Commissioner has detailed a worrying pattern of non-compliance among the world’s largest social media platforms in her inaugural review since the ban came into effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively neglected to establish sufficient safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification processes, noting that some platforms have allowed children who originally declared themselves under 16 to later assert they were older, thereby undermining the law’s intent.
The findings mark a notable intensification in regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has stressed that simply showing some children still hold accounts is insufficient; platforms must instead furnish substantive proof that they have established robust systems and processes designed to stop under-16s from opening accounts in the first place. This shift signals the government’s commitment to holding tech giants accountable, with possible sanctions looming for companies that do not meet the legal requirements.
- Allowing formerly prohibited users to re-verify their age and regain account access
- Permitting repeated attempts at the same verification check without penalty
- Weak mechanisms to block new under-16 accounts from being established
- Limited notification systems for parents and the general public
- Lack of clear information about compliance actions and account removals
The Scope of the Issue
The considerable scale of social media activity amongst young Australians underscores the regulatory challenge confronting both the authorities and the platforms in question. With millions of accounts already restricted or removed since the ban’s implementation, the figures provide evidence of extensive early non-compliance. The eSafety Commissioner’s conclusions indicate that the technical and procedural obstacles to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities wrestling with the core issue of whether existing age verification systems are sufficient for the purpose.
Beyond the operational challenges lies a wider question about the readiness of companies to prioritise compliance over user growth. Social media companies have consistently opposed strict identity verification requirements, citing data protection concerns and the genuine difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms have not shown adequate commitment to implementing the systems required by law. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance infrastructure, or they risk facing substantial penalties that could reshape their business models in Australia and possibly affect regulatory approaches internationally.
What the Statistics Demonstrate
In the first month after the ban’s implementation, Australian officials reported that 4.7 million accounts had been restricted or removed. Whilst this figure initially appeared to demonstrate effective enforcement, subsequent analysis reveals a more layered picture. The sheer number of account removals implies that many under-16s had been able to set up accounts in the first place, indicating that protective safeguards were insufficient. Moreover, the data raises questions about whether deleted accounts reflect genuine compliance or simply users removing their own profiles voluntarily in response to the new restrictions.
The limited transparency regarding these figures has frustrated independent observers attempting to evaluate the ban’s actual effectiveness. Platforms have provided minimal information about their compliance procedures, effectiveness metrics, or the nature of removed accounts. This lack of clarity makes it hard for regulators and the general public to assess whether the ban is working as intended or whether teenagers are simply finding other ways to access social media. The Commissioner’s demand for thorough documentation of compliance processes reflects growing concern at platforms’ reluctance to provide comprehensive data.
Industry Response and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts regarding the ban’s practicality. Meta, which operates Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst simultaneously arguing that accurate age determination continues to be a major challenge across the industry. The company has advocated for a different approach, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects wider concerns across the industry that the existing regulatory system puts an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, announcing that it had suspended 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, sector analysts dispute whether such figures reflect genuine adherence or merely reactive account management. The fundamental tension between platforms’ commercial structures—which historically relied on maximising user engagement and growth—and the regulatory requirement to actively exclude an entire age group remains unresolved. Companies have consistently opposed rigorous age verification methods, citing privacy issues and technical constraints, establishing an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification should occur at app store level instead of on individual platforms
- Snap says it has suspended 450,000 user accounts since the ban’s implementation in December
- Industry groups point to privacy issues and technical challenges as impediments to effective age verification
- Platforms contend they are doing their best whilst questioning the ban’s general effectiveness
Wider Considerations Concerning the Ban’s Efficacy
As Australia’s under-16 online platform ban moves into its enforcement phase, key concerns remain about whether the law will achieve its stated objectives or merely push young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that following implementation, significant loopholes exist—children keep discovering ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards other platforms, secure messaging apps, or virtual private networks designed to conceal their age and location.
The ban’s global implications add further complexity to assessments of its effectiveness. Countries such as the United Kingdom, Canada, and several European nations are observing Australia’s initiative closely, exploring similar laws for their own citizens. If the ban proves ineffective at reducing children’s social media usage or fails to protect them from harmful content, it could weaken the case for equivalent legislation elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage usage, it may inspire other nations to adopt comparable measures. The outcome will potentially shape global regulatory trends for the foreseeable future, ensuring Australia’s enforcement efforts are scrutinised far beyond its borders.
Who Benefits and Who Is Disadvantaged
Mental health campaigners and child safety organisations have endorsed the ban as a necessary intervention to counter algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, obtaining educational material, and participating in online communities around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s practical implications extend beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects go well beyond the simple goal of child protection.
What Lies Ahead for Regulatory Action
Australia’s eSafety Commissioner has signalled a notable transition from hands-off observation to direct intervention, marking a critical turning point in the rollout of the youth access prohibition. The regulator will now collect data to determine whether services have failed to take “reasonable steps” to prevent underage access, a requirement that goes further than simply noting that young people remain on these platforms. This approach demands demonstrable proof that companies have established suitable mechanisms and procedures designed to keep minors out. The Commissioner’s office has stated it will launch investigations methodically, building evidence that could trigger substantial penalties for breaches of the requirements. This transition from observation to action reveals growing frustration with the platforms’ current efforts and signals that voluntary participation alone will no longer suffice.
The enforcement phase raises significant questions about the adequacy of penalties and the practical mechanisms for holding tech giants accountable. Australia’s regulatory framework offers enforcement tools, but their efficacy relies on the eSafety Commissioner’s willingness to pursue formal action and the platforms’ capacity to adjust substantively. Overseas authorities, notably regulators in the United Kingdom and European Union, will closely track Australia’s regulatory approach and outcomes. An effective regulatory push could set a blueprint for other countries evaluating equivalent prohibitions, whilst failure might undermine the entire regulatory framework. The coming months will be critical in determining whether Australia’s groundbreaking legislation produces genuine protection for adolescents or remains largely symbolic in its influence.
