YouTube to be included in Australia’s social media ban for children under 16


Monday, 4 August 2025

Brisbane, Australia — The Australian government has dramatically reversed its earlier stance on YouTube, announcing that the platform will now be included in its sweeping social media ban for children under the age of 16, scheduled to take effect in December. The move marks a significant policy shift and sets the stage for a potential legal confrontation with YouTube’s parent company, Alphabet.

Previously, YouTube had been excluded from the list of restricted platforms due to its perceived value as an educational and entertainment tool. However, in a statement released Wednesday, the Labor government confirmed that YouTube will now be treated the same as other major social media platforms—Facebook, Instagram, Snapchat, TikTok, and X (formerly Twitter)—under the new legislation.

Under the forthcoming law, social media companies will be held accountable for verifying users’ ages and preventing anyone under 16 from creating an account. Failure to comply could lead to penalties of up to 50 million Australian dollars (around $32 million USD). The responsibility to enforce the ban will rest squarely on the shoulders of the tech giants, rather than on parents or schools.

A spokesperson for YouTube criticized the government’s change of course, stating that it “reverses a clear, public commitment” to treat YouTube differently from conventional social media. The company emphasized that it had previously been viewed as a platform more aligned with educational content and less focused on user-generated interactions. While the spokesperson declined to comment on potential legal action, they indicated that YouTube is “considering next steps” and plans to continue dialogue with government officials. Notably, YouTube Kids—a separate app designed for younger users that doesn’t allow video uploads or public comments—will remain exempt from the ban.

Communications Minister Anika Wells defended the decision in a speech to Parliament, drawing an analogy between internet safety and Australia’s well-known swimming culture.

“It is like trying to teach your kids to swim in the open ocean, with the rip currents and the sharks, compared to at the local council pool,” Wells said. “We can’t control the ocean, but we can police the sharks. That’s why I will not be intimidated by legal threats when this is a genuine fight for the well-being of Australian kids.”

The government’s decision was strongly influenced by a recent report from the eSafety Commissioner, Australia’s independent online safety regulator. The regulator’s latest survey, published earlier this month, found that 37% of children had encountered harmful content on YouTube. The material reported included sexist or misogynistic rhetoric, violent content such as fight videos, dangerous viral challenges, and media promoting unhealthy eating habits or extreme fitness routines.

“YouTube uses the same persuasive design features as other social media platforms,” Minister Wells explained, citing infinite scroll, autoplay, and algorithmically curated feeds as mechanisms that can expose children to a continuous stream of potentially harmful material.

“Our kids don’t stand a chance,” she said, “and that is why I accepted the eSafety Commissioner’s recommendation that YouTube should not be treated differently from other social media platforms.”

The broader legislation is part of a growing international trend among governments seeking to rein in the influence of tech companies on children’s mental health and digital well-being. Australia’s move is among the strictest in the developed world and could set a precedent for how other countries handle platforms that straddle the line between entertainment, education, and social media.

As the December rollout date approaches, tensions are likely to rise between the Australian government and Silicon Valley, particularly if YouTube opts to challenge the law in court. Meanwhile, regulators and advocacy groups in Australia have welcomed the inclusion of YouTube in the ban, arguing that children’s online safety must be prioritized over corporate interests.

How will it work?

In the lead-up to the December start date, the government commissioned a series of age assurance trials designed to test the technological feasibility of enforcing age restrictions across various platforms. The aim was to assess how companies might accurately verify a user’s age without compromising privacy, a delicate balancing act that sits at the heart of this contentious legislation.

A preliminary report from the trials, released in June, outlined 12 key findings. Among them was a promising indication that age verification can be achieved in a “private, robust, and effective” manner using currently available technologies. However, the report also stressed that no single method could be applied universally to all platforms or user scenarios, nor could any one solution guarantee 100% accuracy.
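The trial report does not prescribe an architecture, but its finding that no single method fits all platforms points toward layered approaches, in which a platform tries its most reliable check first and falls back to weaker ones. The Python sketch below is a purely hypothetical illustration of that idea; the method names, confidence threshold, and fallback order are assumptions, not details from the report.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical layered ("waterfall") age-assurance check. Every method name,
# threshold, and fallback step here is an assumption for illustration; the
# trial report does not prescribe a specific design.

@dataclass
class AgeEstimate:
    method: str         # which check produced this estimate
    age: Optional[int]  # estimated age in years, or None if inconclusive
    confidence: float   # 0.0 - 1.0

def check_id_document(user_id: str) -> AgeEstimate:
    # Placeholder: a real deployment would call an external ID verifier here.
    return AgeEstimate(method="id_document", age=None, confidence=0.0)

def estimate_age_from_face(user_id: str) -> AgeEstimate:
    # Placeholder: facial age estimation, with no identity retained.
    return AgeEstimate(method="facial_estimation", age=None, confidence=0.0)

def infer_age_from_behaviour(user_id: str) -> AgeEstimate:
    # Placeholder: behavioural signals such as account age.
    return AgeEstimate(method="behavioural", age=None, confidence=0.0)

def is_16_or_over(user_id: str, min_confidence: float = 0.9) -> bool:
    """Try the most reliable method first and fall back to weaker ones."""
    for check in (check_id_document, estimate_age_from_face, infer_age_from_behaviour):
        result = check(user_id)
        if result.age is not None and result.confidence >= min_confidence:
            return result.age >= 16
    return False  # default-deny when every method is inconclusive

print(is_16_or_over("example-user"))  # False: all stubs above are inconclusive
```

The default-deny at the end reflects the report’s caution that no solution guarantees 100% accuracy: when every check is inconclusive, the sketch errs on the side of restriction.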

More troubling were revelations about platforms’ proactive data collection efforts. The report noted “concerning evidence” that some social media providers were “over-anticipating the eventual needs of regulators” by creating tools that could store and retrace the age verification actions of individual users. In practice, this could involve collecting and retaining large amounts of sensitive personal data—raising the specter of privacy breaches and disproportionate surveillance.

“Some providers were found to be building tools to enable regulators, law enforcement or Coroners to retrace the actions taken by individuals to verify their age,” the report said. “This could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data.”
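The design the report warns about retains the raw inputs of every check so individual verifications can later be retraced. A common mitigation, sketched below purely for illustration, is the opposite pattern: use the evidence once, then keep only a minimal yes/no attestation. The field names and token format here are assumptions, not anything drawn from the trial.

```python
import secrets
from datetime import datetime, timezone

# Hypothetical "verify then discard" pattern, the opposite of the retention
# the report criticizes. All field names here are assumptions.

def run_age_check(evidence: dict) -> bool:
    # Placeholder decision; a real check would inspect an ID scan or similar.
    return bool(evidence.get("claimed_over_16", False))

def verify_and_discard(raw_evidence: dict) -> dict:
    """Run the age check once, then retain only a minimal attestation.

    The raw evidence (e.g. an ID scan) is used a single time and never
    stored; the record kept says only *that* a check passed, not *how*.
    """
    over_16 = run_age_check(raw_evidence)
    raw_evidence.clear()  # erase the sensitive inputs after the one-time check

    return {
        "attestation_id": secrets.token_hex(16),  # random, not derived from the evidence
        "over_16": over_16,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

print(verify_and_discard({"claimed_over_16": True}))
```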

These findings have triggered alarm among privacy advocates, who argue that the legislation could inadvertently normalize surveillance or create data honeypots vulnerable to hacking or misuse. Others, including some child welfare advocates, have voiced concern that the ban may harm vulnerable or isolated children, particularly those from abusive households or marginalized communities, who rely on social media as a form of peer support, expression, or lifeline.

Despite these critiques, Communications Minister Anika Wells has remained firm in her defense of the law. While acknowledging the imperfections and the inevitability of workarounds, she insists the government’s priority must be the safety and well-being of children in the digital age.

“Kids, God bless them, are going to find a way around this,” Wells said in a candid moment. “Maybe they’re all going to swarm on LinkedIn. We don’t know.”

The comment underscores both the limitations and unpredictability of tech regulation, particularly when it comes to young, tech-savvy users. Still, the government views the legislation as an essential first step in reshaping the relationship between Big Tech and child safety, even if enforcement and compliance remain major challenges.

With the December enforcement date drawing closer, attention will now shift to how platforms like YouTube, TikTok, and Meta implement the new rules—and how regulators, parents, and children respond to what is arguably one of the most ambitious online safety initiatives undertaken anywhere in the world.

The industry perspective

As pushback mounts over Australia’s upcoming social media ban for children under 16, tech giants like YouTube and TikTok are stepping up efforts to defend their platforms and showcase the measures they say are already in place to protect young users.

This week, YouTube announced a new artificial intelligence trial in the United States aimed at better identifying users under the age of 18. The platform said the AI system would analyze a “variety of signals” to infer a user’s age without requiring intrusive data collection. These signals include search queries, video viewing habits, and the age of the user’s account.

“If users are determined to be under 18,” YouTube explained, “personalized ads will be turned off, well-being tools will be activated, and repetitive viewing will be limited for certain categories of content.” The company framed these actions as part of a broader strategy to promote safer and healthier online experiences for teens.
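YouTube has not published how the trial’s model works, so the sketch below is only a hypothetical illustration of the general approach: combine the publicly named signals (search queries, viewing habits, account age) into a score, and map an under-18 determination to the three actions the company described. The weights and threshold are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical illustration only: YouTube has not published its model.
# The signals mirror those named publicly; weights and threshold are invented.

@dataclass
class AccountSignals:
    account_age_days: int    # how long the account has existed
    teen_query_score: float  # 0-1: share of searches typical of teens
    teen_viewing_score: float  # 0-1: share of watch time on teen-skewed videos

def likely_under_18(s: AccountSignals) -> bool:
    """Combine several weak signals into one score and threshold it."""
    score = 0.0
    if s.account_age_days < 3 * 365:  # a young account is weak evidence on its own
        score += 0.2
    score += 0.4 * s.teen_query_score
    score += 0.4 * s.teen_viewing_score
    return score >= 0.5

def apply_teen_protections(settings: dict) -> dict:
    """Map an under-18 determination to the actions YouTube described."""
    settings.update(
        personalized_ads=False,         # "personalized ads will be turned off"
        wellbeing_tools=True,           # "well-being tools will be activated"
        limit_repetitive_viewing=True,  # repeat viewing of some content limited
    )
    return settings

signals = AccountSignals(account_age_days=200,
                         teen_query_score=0.7,
                         teen_viewing_score=0.6)
if likely_under_18(signals):
    print(apply_teen_protections({}))
```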

Despite these changes, platforms like YouTube and TikTok have intensified lobbying efforts to persuade Australian lawmakers and parents to reconsider the upcoming restrictions. TikTok, for example, has taken its case directly to the public, launching a series of ads on Facebook that depict the platform as a hub for informal learning. One ad reads: “From fishing to chef skills, Aussie teens are learning something new every day on TikTok.”

YouTube’s efforts have taken a more personal—and uniquely Australian—turn. According to Communications Minister Anika Wells, the company recently sent a representative of The Wiggles, the iconic children’s music group, to argue against including YouTube in the ban. The Wiggles, beloved by generations of Australian children and widely seen as champions of safe, wholesome entertainment, were reportedly enlisted to make the case that YouTube provides positive content for young viewers.

But Wells said she was unmoved.

“The Wiggles are a treasured Australian institution,” she acknowledged during an interview with CNN affiliate 9 News. “But like I said to them, you’re arguing that my 4-year-old twins’ right to have a YouTube login is more important than the fact that four out of 10 of their peers will experience online harm on YouTube—and they might be two of those four.”

Her remark refers to sobering data from the “Keeping Kids Safe” survey, conducted by the eSafety Commissioner between December 2024 and February 2025. The survey gathered responses from nearly 3,500 Australian children aged 10 to 17, and found that 37% reported encountering harmful or distressing content on YouTube—a higher rate than most other social platforms.

This figure played a central role in the government’s decision to reclassify YouTube as a social media platform, subject to the same restrictions as TikTok, Instagram, and others. Harmful content identified in the survey included sexist and hateful rhetoric, videos glamorizing risky behavior, and content encouraging disordered eating or obsessive fitness routines.

Minister Wells has consistently emphasized that the new laws are not about demonizing technology, but about resetting the terms of engagement between children and digital platforms. While acknowledging the platform’s educational value, she said the sheer scale and power of algorithm-driven content make unmoderated access too dangerous for young users.

The law’s implementation in December will serve as a crucial test of whether age-assurance technologies—combined with regulatory pressure—can shift how tech companies design and deliver content to children. But as both sides dig in, the battle over kids’ digital access is far from over, and Australia may soon find itself at the center of a global debate over children’s rights in the digital age.
