

Australia has become the first country in the world to enforce a nationwide ban on social media use by children under the age of 16. The legislation follows several reported deaths by suicide among teenagers, linked to cyberbullying and online harassment. Mothers of the victims mounted sustained campaigns, urging legislators everywhere to act against what they see as a growing digital menace. The Australian government has stated that the law is the outcome of extensive consultations with young people, parents and carers, and is aimed at protecting Generation Alpha from being drawn into what it describes as a purgatory created by predatory algorithms.
Globally, there is no single consolidated figure for the number of children under 16 who hold social media accounts. However, multiple studies indicate very high usage. Estimates suggest that up to 95% of teenagers aged 13-15 are active on social media, and a significant proportion of younger children aged 8-12 also use these platforms despite minimum age requirements, typically set at 13. According to the Pew Research Center, YouTube, TikTok, Instagram and Snapchat are among the most popular platforms across age groups. In the UK in 2024, nearly 80% of children aged 8-12 reportedly used social media, and some estimates suggest that hundreds of thousands of preschoolers aged 3-5 were active online. In the US, usage among teens aged 13-17 is near-universal at 95%, with more than a third reporting that they are online “almost constantly”, according to NIH researchers.
Australia’s legislation, among the toughest directed at technology companies, includes penalties of up to A$49.5 million for non-compliance. It places the onus squarely on social media platforms, rather than on teenagers or their parents, to take reasonable steps to prevent children under 16 from holding accounts. Meta, the parent company of Facebook and Instagram, expressed concern over what it described as a rushed legislative process that did not adequately consider existing industry safeguards or the voices of young people. Other platforms, such as TikTok, YouTube and Snapchat, have argued that a blanket ban is ineffective and potentially counterproductive. X (formerly Twitter) raised concerns about the possible impact on young people’s rights, including freedom of expression and access to information.
Industry groups have also cautioned against unintended consequences. The Digital Industry Group Inc. (DIGI) warned that removing children from regulated platforms could push them into “darker, less safe corners” of the internet with limited moderation and safety controls. Despite these reservations, the major technology companies have indicated that they will comply with the law by taking reasonable steps to verify users’ ages, drawing on registered account details and behavioural signals to assess whether users meet the age requirement.
The ban has sparked debate across the world. A senior fellow at the Cato Institute has expressed concern that age-verification systems could adversely affect older users who are unwilling or unable to verify their age due to privacy concerns, a desire for anonymity, or fear of data breaches. A psychology professor at the University of Sydney has questioned whether platforms can realistically enforce the ban without raising serious privacy issues, particularly since self-reported age data has long proved unreliable.
Others point to the complexity of implementation. A sociology professor at Rutgers University noted that while the intent behind the legislation is positive, enforcement will be extremely challenging. UNICEF Australia has emphasised that the law will not fix the broader problems youth face online. The Australian Human Rights Commission has voiced “serious reservations”, urging lawmakers to explore less restrictive ways to keep children safe online without limiting their rights to education, expression, privacy and leisure. Adding to the debate, an Australian teenager has mounted a legal challenge to the law, arguing that it infringes the constitutionally implied freedom of political communication. Australia’s online safety regulator has clarified that the measure is intended to delay access to social media, not to impose an absolute ban.
Australia’s decision has created a ripple effect globally. The European Commission has been working on an age-verification mobile application to check whether users are over 18, with countries such as Spain, France, Greece, Denmark and Italy testing similar approaches. The European Parliament has passed a resolution calling for a minimum age of 16 to ensure age-appropriate online engagement. Denmark and Norway are developing legislative frameworks to restrict access for those under 15, while the Netherlands has advised parents to discourage the use of platforms like Instagram and TikTok before the age of 15. In the US, a bipartisan group of senators introduced a bill proposing a minimum social media age of 13. In Asia, Malaysia is considering legislation for users under 16, while China has implemented a “minor mode” with device-level restrictions. India does not impose an outright ban, but the Digital Personal Data Protection Act, 2023, requires verifiable parental consent for processing the personal data of users under 18.
Concerns about children’s mental health in the digital age continue to grow. In his 2024 book The Anxious Generation, American psychologist Jonathan Haidt argues that prolonged screen time on smartphones and the internet has displaced the play-based experiences central to childhood, exposing young people to higher risks of anxiety and depression. If the aim is to restore a healthier childhood, the solution cannot lie in age restrictions or bans alone.
A more effective approach requires a multi-pronged strategy. Technology platforms must take greater responsibility for designing safer products, systems and processes that prioritise child safety and wellbeing. Equally important are coordinated efforts by families, communities, caregivers and schools to impart socio-emotional skills and digital literacy, foster open communication, model healthy technology habits, set clear rules supported by parental controls, protect privacy and establish robust reporting mechanisms. Together, these measures can help build digital resilience, trust and wellbeing: outcomes that no single legislative ban can achieve on its own.
Archana Datta, a retired civil servant, was OSD to the Governor of Karnataka; Press Secretary to the President; DG, AIR; and DG, Doordarshan