What Will Actually Make the Online World Safer for Under-16s? (Hint: It’s Not the Ban)

[Image: a teen on his device in bed in the dark, his dull expression lit by the mesmerising screen he holds.]

Australia’s new social media ban for under-16s has arrived with bold promises: fewer risks, fewer harms, and fewer young people lost in the algorithmic fog of online life. For many parents and educators, it feels like a moment of relief, a decisive line in the sand.

But here’s the uncomfortable truth: bans don’t make platforms safer. They simply push young people elsewhere. History shows that when teens lose access to mainstream platforms, many slip into private servers, anonymous apps, VPN tunnels, or unmoderated digital spaces where no adults, and certainly no safeguards, exist.

If we’re serious about protecting young people, then safety must be built into the infrastructure of the online world itself. Anything less simply reshuffles the risk.

Below is what real protection looks like: not symbolic gestures, but structural reforms that reshape the digital environment our young people move through every single day.


1. Make Algorithmic Safety Automatic for Under-16s

Right now, the most powerful forces shaping a young person’s digital life are invisible:

  • machine-learning recommendation systems

  • “for you” feeds

  • auto-play loops

  • infinite scroll systems

  • maximally personalised content streams

These systems are designed to keep users engaged, not safe.

To protect young people, platforms should be required to automatically switch off harm-amplifying algorithms for anyone under 16.

That includes removing:

  • addictive recommendation engines

  • auto-play

  • infinite scroll

  • curated feeds driven by behavioural profiling

Young people shouldn’t need to “opt out” of manipulation. Safety must be the default setting.
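
What would that look like in practice? Here is a minimal sketch, assuming a hypothetical account-settings layer (none of these names are a real platform API):

```python
from dataclasses import dataclass

@dataclass
class FeedSettings:
    personalised_recommendations: bool
    autoplay: bool
    infinite_scroll: bool
    behavioural_profiling: bool

def default_feed_settings(age: int) -> FeedSettings:
    """Safe-by-default: engagement-maximising features are switched
    OFF for under-16s, with no opt-out step required of the user."""
    if age < 16:
        return FeedSettings(
            personalised_recommendations=False,  # no addictive recommendation engine
            autoplay=False,                      # no auto-play loops
            infinite_scroll=False,               # finite, paginated feeds
            behavioural_profiling=False,         # no profiling-driven curation
        )
    # Adults keep today's defaults (and can still opt down).
    return FeedSettings(True, True, True, True)
```

The point of the sketch is the shape of the rule: safety is a function of age, applied automatically, not a buried toggle.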

The outcome? A dramatic decrease in exposure to violent content, extreme ideologies, sexualised material, body image distortion, radicalising communities, and the engineered pull of doomscrolling.

This one reform alone would outperform a thousand bans.

2. Regulate the Algorithms — Not the Kids

For too long, policy has focused on regulating children’s behaviour, not the tools that shape it.

Governments must now require platforms to:

  • undergo independent algorithmic audits

  • publish transparency reports showing what minors are being shown

  • demonstrate risk assessments and mitigation strategies

  • face real penalties when they knowingly promote harmful content to young people

Algorithmic safety should be treated with the same seriousness as car safety, food safety, or toy safety.
If a product can cause harm, it must be regulated at the design level.

This is the only way to shift responsibility from children, who are still developing impulse control and critical thinking, to the companies who profit from their attention.
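
To make the audit requirement concrete, here is a minimal sketch of one metric an independent transparency report could aggregate: the share of impressions shown to under-16s in each content category. The log format and category labels are assumptions for illustration, not any platform's real schema.

```python
from collections import Counter

def minor_exposure_report(impressions: list[dict]) -> dict[str, float]:
    """impressions: e.g. [{"user_age": 14, "category": "gaming"}, ...]
    Returns each category's share of all impressions shown to under-16s."""
    categories = [i["category"] for i in impressions if i["user_age"] < 16]
    counts = Counter(categories)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {category: n / total for category, n in counts.items()}

log = [
    {"user_age": 14, "category": "gaming"},
    {"user_age": 15, "category": "extreme_dieting"},
    {"user_age": 14, "category": "gaming"},
    {"user_age": 34, "category": "gambling"},  # adult view: excluded
]
print(minor_exposure_report(log))
# e.g. {'gaming': 0.666..., 'extreme_dieting': 0.333...}
```

Even a metric this simple, published regularly and verified by an outside auditor, would tell regulators and parents more than any platform currently discloses.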


3. Restrict Advertising to Young People

Platforms should not be allowed to use behavioural data to target minors. Yet today, many still do, burying the practice in the fine print.

Under-16s should never receive:

  • behavioural ads

  • dieting or beauty-filter promotions

  • gambling-adjacent content

  • “engagement-optimised” ads designed to exploit emotional triggers

These ads do more than sell products: they shape identity, anxieties, and beliefs.
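
Enforcement could live at the ad-serving layer itself. A rough sketch, with hypothetical field names standing in for whatever a real ad request carries:

```python
RESTRICTED_CATEGORIES = {"dieting", "beauty_filters", "gambling_adjacent"}

def ad_allowed(user_age: int, ad: dict) -> bool:
    """ad: e.g. {"category": "sneakers", "behavioural_targeting": False}"""
    if user_age < 16:
        if ad.get("behavioural_targeting"):
            return False  # no profiling-based ads for minors, ever
        if ad.get("category") in RESTRICTED_CATEGORIES:
            return False  # no restricted ad categories for minors
    return True

assert not ad_allowed(14, {"category": "dieting", "behavioural_targeting": False})
assert ad_allowed(14, {"category": "sneakers", "behavioural_targeting": False})
```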

Jurisdictions like the EU have already banned behavioural advertising to minors.
Australia can follow suit, and quickly.


4. Impose a Legal Duty of Care on Big Tech

A legislated Duty of Care forces companies to:

  • anticipate foreseeable harm

  • design safety into their products from day one

  • be held accountable when they fail

Without legal accountability, safety will always remain optional, a “nice to have” rather than a requirement. A Duty of Care doesn’t punish innovation; it aligns innovation with well-being.


5. Require Safe, Age-Appropriate Versions of Major Platforms

The choice should not be binary: full access or total ban.

There is a third path: requiring platforms to provide age-appropriate versions with:

  • verified age-gating

  • human-moderated communities

  • restricted feeds

  • child-safe content filters

  • meaningful reporting tools

  • protections from adult-only spaces

Imagine if every platform had a built-in “Under-16 Mode” that genuinely put safety first.
It would transform the digital landscape overnight.

Safety cannot be an afterthought. It has to be architectural.
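
As a sketch of the idea (a hypothetical configuration, not any platform's actual feature set), an “Under-16 Mode” would be one bundled, non-negotiable package rather than a scatter of opt-in toggles:

```python
UNDER_16_MODE = {
    "age_gating": "verified",        # verified age-gating at sign-up
    "moderation": "human",           # human-moderated communities
    "feed": "restricted",            # restricted, non-algorithmic feeds
    "content_filter": "child_safe",  # child-safe content filters
    "reporting": "priority_review",  # reports get reviewed, not buried
    "adult_spaces": "blocked",       # no access to adult-only spaces
}

def apply_safety_mode(account: dict) -> dict:
    """Verified under-16 accounts get the whole bundle, all or nothing."""
    if account.get("verified_age", 99) < 16:
        account["settings"] = dict(UNDER_16_MODE)
    return account
```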

6. Fund Ongoing Digital Citizenship & Media Literacy Education

Even the safest digital environment still requires skilled, savvy users.

Young people need sustained education in:

  • critical thinking

  • misinformation and persuasion tactics

  • emotional self-regulation

  • privacy awareness

  • algorithmic influence

  • healthy digital habits

This cannot be a once-a-year lesson.
It must be embedded across the school year, ideally through:

  • school libraries

  • school camps

  • guided workshops

  • literature-based learning

  • narrative approaches that allow students to discuss issues in the “third person,” rather than exposing personal experiences

Stories like Cyber Secrets, Brain Rot!, and Cyber Whispers already do this: they create a safe narrative distance that helps students analyse risks without feeling judged or vulnerable.

Digital citizenship must become core, ongoing education, not crisis response.

7. Give Parents Practical Tools, Not Just Rules

Right now, parents are expected to manage digital risk with little more than instinct and guesswork.

They need:

  • simple parental dashboards built into devices

  • clear safety controls

  • guidance that’s practical, not punitive

  • default child-safe settings at device setup

  • government-backed resources they can trust

Parents can only protect children when they themselves are supported.
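
For instance, “default child-safe settings at device setup” could mean every child profile starts from a locked-down baseline that parents adjust, rather than an open profile parents must remember to lock down. A sketch, with invented setting names:

```python
CHILD_PROFILE_DEFAULTS = {
    "app_installs": "parent_approval",  # new apps need a parent's OK
    "web_filter": "on",                 # age-appropriate browsing only
    "daily_screen_time_minutes": 120,   # adjustable from the dashboard
    "downtime": ("21:00", "07:00"),     # device sleeps when the child should
    "share_location_with_apps": False,  # privacy-preserving by default
}

def create_profile(is_child: bool) -> dict:
    """Child profiles ship safe; parents loosen from a known-safe baseline."""
    return dict(CHILD_PROFILE_DEFAULTS) if is_child else {}
```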


We Don’t Make Kids Safer by Keeping Them Out — We Make Them Safer by Making Platforms Safe

A ban may look decisive on paper, but it doesn’t address the real issue: platforms remain unsafe by design. True protection will come from restructuring the digital world itself, through transparency, accountability, design reform, education, and parental support. This is the pathway that reduces harm. This is the policy direction that puts young people first. And this is the work Australia must begin, urgently, if we want a future where children can participate online safely, confidently, and creatively.


#socialmediaban #DigitalCitizenship #MediaLiteracy #OnlineSafety #YouthOnline #TechReform #AlgorithmicBias #AIForGood #ParentingTech #EdTech #DigitalWellbeing #FakeNewsEducation #ChildSafetyOnline #PlatformAccountability #SafeTechDesign #TeensAndTechnology #waituntil8th #savethekids #letkidsbekidslonger #childhood #Delaythesmartphone #waituntil16forsocialmedia #childhoodistooshort #parentingishard #jonathanhaidt #parenting #digitalworld #kidsandtechnology #screentime #screentimeforkids #techpositive #techrules #deviceuse #parentinginadigitalworld #digitalparenting #simplicityparenting #parents #parentinghacks #parentingtips #parentstoday #teens #tweens #inbe-tweens #kidstoday #delaysmartphones #screentimelimits #freerangekids #onlinesafety #bored #mindfulparenting #anxiousgeneration #cyberbullying #YouthWellbeing



Casper Pieters

Scientist | Author | Editor | Educator. Casper is interested in helping young people get future-ready by creating riveting adventure stories about the digital world.

https://www.casperpieters.com