Friday, May 15, 2026
CIVICUS discusses the rising trend of social media bans for children with Marie-Ève Nadeau, Head of International Affairs of the 5Rights Foundation, an organisation that promotes children’s rights in the digital environment.

Are social media bans an effective way of protecting children?
Today, one in three internet users is a child, and digital technologies increasingly mediate all aspects of their lives, from the classroom to the playground, from their first friendships to how they see themselves. As evidence of harms and risks mounts, lawmakers around the world are racing to impose age limits on children’s access to social media. The instinct to act is right, but the current direction risks missing the point.
The real issue is the conditions children face when online. Children are growing up in a digital environment designed without their distinct rights, needs and vulnerabilities in mind. This is a deliberate choice. Tech companies’ business models prioritise commercial gain over children’s safety and wellbeing, deliberately embedding persuasive design, relentless engagement loops and extractive data practices by default. Fixing this requires more than blocking children’s access.
Age restrictions are not new, yet evidence of their effectiveness remains inconclusive. Banning children from specific services while leaving the underlying system untouched lets tech companies off the hook for recommender systems that push harmful content, persuasive design that keeps children compulsively engaged and data practices that exploit their attention for profit. Used in isolation, bans create an illusion of protection while the same harmful design practices continue unchallenged. Children are pushed towards other unregulated environments, such as AI chatbots, gaming platforms and educational technology services, where they face equivalent risks with even less scrutiny.
What do these bans mean for children’s rights to expression and information?
Children’s rights are interdependent and indivisible, and the UN Committee on the Rights of the Child’s General Comment No. 25 makes clear that all children’s rights apply fully in the digital environment. This includes not only the right to protection from harm but also the rights to access information, expression and participation. In practice, tech companies have made these rights conditional on the commercial surveillance, exploitation and manipulation of children, eroding their privacy, safety, critical thinking and agency.
Age-based bans that restrict access without addressing underlying design practices create a false choice between freedom and safety. Children need both protection from harm and meaningful access to expression, information and participation. Restricting access without reforming the systems that embed risk fails to uphold the full range of children’s rights.
Who is most harmed by these bans, and what gaps do they create?
Children’s rights apply until the age of 18, yet proposed restrictions often only cover children under 16 and a narrow set of high-risk services. This creates gaps. Children above the age threshold, and those who circumvent poorly implemented restrictions, end up in unregulated spaces outside the scope of bans.
Bans can also entrench inequality. Children are not a homogeneous group, and those facing intersecting vulnerabilities linked to disability, gender, political opinion, race, religion or ethnic, national or social origin may rely heavily on digital spaces for expression, identity, safety and support.
At the same time, engagement-based platform design often rewards and amplifies divisive and harmful content, for example on gender-based violence, heightening risks for excluded communities. Blanket bans neither create safer spaces nor eliminate these harms. Instead, they displace them to less visible, less regulated and even less accountable spaces. Effective protection must ensure children can exercise their rights and have safe spaces of support and community.
How does age verification work, and what does it mean for children’s privacy?
Tech companies routinely invest heavily in ad targeting and content personalisation, yet fail to apply the same rigour to protecting children. Age assurance, an umbrella term covering both age estimation and age verification, allows companies to recognise the presence of children and act accordingly. It must be lawful, rights-respecting and proportionate to risk. Data collection should be limited to what is strictly necessary to establish age, and used only for that purpose.
Global privacy regulators found that 24 per cent of services lack any age assurance mechanism and 90 per cent of those relying on self-declaration are easily bypassed. Yet robust solutions exist. Australia’s age assurance technology trial demonstrates that privacy-preserving age verification can confirm age without exposing identity. Technical standards, such as the 2089.1-2024 Standard for Online Age Verification published by the Institute of Electrical and Electronics Engineers, show that independently audited frameworks, like those used in product safety or pharmaceuticals, are both feasible and necessary to ensure age assurance systems are secure, proportionate and compliant.
For low-risk services appropriate for all users, there should be no requirement to establish age. Where services or functionalities present risk to children, companies should address or mitigate specific high-risk features rather than gatekeeping entire services.
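To make the privacy-preserving approach described above concrete, here is a minimal, hypothetical sketch in Python of the general idea behind such schemes: an age-assurance provider issues a signed claim stating only whether the holder is over a given threshold, and a service verifies that claim without ever receiving a name, date of birth or account identifier. The function names and the shared demo key are illustrative assumptions, not the protocol of any specific trial or of the IEEE standard; real deployments would use independently audited public-key credentials rather than this toy shared-key signature.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key between an age-assurance provider and a service.
# A real deployment would use public-key signatures (e.g. Ed25519) so the
# service could verify tokens without holding the issuer's signing key.
ISSUER_KEY = b"demo-issuer-key"


def issue_age_token(over_threshold: bool, threshold: int, ttl_seconds: int = 300) -> str:
    """The issuer asserts only 'over the threshold: yes/no'.

    No name, date of birth or account identifier is included, so the
    token reveals nothing beyond what is strictly necessary."""
    claim = {
        "over": over_threshold,
        "threshold": threshold,
        "exp": int(time.time()) + ttl_seconds,  # short-lived to limit reuse
    }
    payload = json.dumps(claim, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig


def verify_age_token(token: str, required_threshold: int) -> bool:
    """The service checks the assertion without learning who the user is."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claim = json.loads(payload)
    return (
        claim["over"]
        and claim["threshold"] >= required_threshold
        and claim["exp"] > time.time()
    )


# A 16+ user presents a token to a service with a 16+ age gate.
token = issue_age_token(over_threshold=True, threshold=16)
print(verify_age_token(token, required_threshold=16))  # True
```

The design choice of putting only an over/under claim and a short expiry in the token mirrors the data-minimisation requirement described above: the service learns exactly one fact about the user, and nothing that could identify them.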
What should governments demand from platforms to protect children?
Age restrictions have become part of a global playbook, notably in data protection regimes like the US Children’s Online Privacy Protection Act (COPPA), which sets 13 as the age below which parental consent is required for data collection. Poor implementation and enforcement of COPPA and similar laws have allowed tech companies to hide behind obscure disclaimers while failing to meaningfully restrict access and profiting from embedding risk into children’s digital experiences.
There’s another way forward. The priority should be holding tech companies accountable, not banning children from the digital world. That means banning exploitative practices, regulating risky features such as addictive design, manipulative recommender systems and extractive data practices, and requiring privacy, safety and age-appropriate design as the baseline.
It also means shifting to systemic risk management: companies should be legally required to anticipate, assess and mitigate how their products expose children to risk. This baseline already exists in other high-risk sectors such as aviation, food safety and medicine, where products must demonstrate safety before reaching the market.
A growing global consensus points to a clear path forward: embedding age-appropriate design, requiring child rights impact assessments, mandating privacy and safety by design and default, establishing effective enforcement mechanisms and ensuring independent auditing. Over 55 leading organisations and experts from all continents have endorsed the 10 best-practice principles developed by the 5Rights Foundation.
CIVICUS interviews a wide range of civil society activists, experts and leaders to gather diverse perspectives on civil society action and current issues for publication on its CIVICUS Lens platform. The views expressed in interviews are the interviewees’ and do not necessarily reflect those of CIVICUS. Publication does not imply endorsement of interviewees or the organisations they represent.