Why Instagram's latest parental controls do not go far enough

by Tim Levy - 18th September 2024


Instagram's latest parental controls, announced yesterday (September 17th), introduce mandatory teen accounts with privacy features aimed at protecting younger users.


The new accounts, called "Teen Accounts", will be applied automatically to all Instagram users under the age of 18, both teens already using the app and those signing up.

These accounts restrict messaging, limit sensitive content, and give parents greater oversight of their teens’ activities on the platform.

In our discussions with Meta in the lead-up to yesterday's announcement, the social media giant acknowledged that it will lose users as a result of these changes. We commend Meta for this important step and for the signal it sends. It is critical, however, that the community, governments and regulators do not treat Meta's approach as a universal model for safety. There are fundamental issues that it does not address:

Too many apps and places
Kids access a large number of sites and platforms, not just Instagram. It is estimated that the average child accesses nearly 50 apps per week, and the apps used change over time. It is not practical or realistic to expect regulators to negotiate with, or parents to configure parental settings on, every one of these apps.

Bypasses and hacks
Kids have the time, skills and motivation to hack controls, and the age-assurance measures Meta is implementing are understood to have gaps that kids will exploit.

Kids are unique
Kids, families and schools have unique circumstances, and child development experts have serious concerns about blunt age-based access rules. Ultimately, universal rules will be resisted and may harm kids. Parents want choice, not arbitrary rules.

Professional capabilities
Obscured in Meta's announcement is the fact that Meta provides safety features to businesses and public personalities that it does not offer to parents. Automated tools allow so-called influencers to plug into all of their social media accounts and automatically scan and moderate messages and delete hate speech. This is a high-priority need for parents, yet it is not provided, and no reason is given.
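
To make concrete what such tooling does, here is a minimal illustrative sketch of a scan-and-moderate pass over an inbox. Everything in it is hypothetical: the Message type, the keyword list and the moderate helper are placeholders, and real tools call platform messaging APIs and use trained classifiers rather than a simple keyword match.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names are illustrative placeholders,
# not any real platform's API. Production tools use platform messaging
# APIs and trained hate-speech classifiers.

BLOCKED_TERMS = {"hateword1", "hateword2"}  # placeholder keyword list

@dataclass
class Message:
    id: int
    text: str

def is_abusive(text: str) -> bool:
    """Crude keyword match standing in for a real classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def moderate(inbox: list[Message]) -> list[Message]:
    """Return the inbox with abusive messages removed (i.e. 'deleted')."""
    return [m for m in inbox if not is_abusive(m.text)]

inbox = [Message(1, "great post!"), Message(2, "hateword1 nonsense")]
print([m.text for m in moderate(inbox)])  # -> ['great post!']
```

The point is not the sophistication of the filter but who is allowed to run one: a loop like this is trivial for a platform to offer, yet parents are not given it.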

The bottom line is this: the only truly reliable and effective way to control all of a child's online activity is to control the device they're using, via on-device safety technology.

Given modern encryption and the scale and dynamics of the internet, it is the only effective way to privately identify users, inspect activity and apply rules to everything a child does online. The technology to do this is trustworthy and proven, cannot be removed by kids, and is available now.
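
As a rough illustration of what "applying rules on the device" means, below is a hypothetical sketch of the per-request rule check an on-device filter performs. The category table, Policy type and classify function are placeholders; real products classify content using maintained databases and on-device models, and hook into the operating system's network layer. The advantage the sketch illustrates is positional: code running on the device sees a request before it is encrypted, so the rules apply regardless of which app or site is involved.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical on-device rule engine. Because it runs on the device itself,
# it sees each request before TLS encryption, which is why on-device
# enforcement works where network-level filtering cannot. The domain table
# and categories below are illustrative placeholders.

CATEGORY_BY_DOMAIN = {
    "example-adult-site.com": "adult",
    "instagram.com": "social",
}

@dataclass
class Policy:
    blocked_categories: set[str]

def classify(url: str) -> str:
    """Toy classifier: look the domain up in a placeholder table."""
    host = urlparse(url).hostname or ""
    return CATEGORY_BY_DOMAIN.get(host.removeprefix("www."), "unknown")

def allow(url: str, policy: Policy) -> bool:
    """Apply the policy to a single request, on the device."""
    return classify(url) not in policy.blocked_categories

policy = Policy(blocked_categories={"adult"})
print(allow("https://www.instagram.com/reel/abc", policy))   # True
print(allow("https://example-adult-site.com/page", policy))  # False
```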

So why isn’t it everywhere?

Because Google, Apple and Microsoft limit these capabilities to business app developers. Such technology is already used reliably on tens of millions of devices, particularly successfully in the US education sector on school-issued devices.

When installed by enterprises, on-device safety tech can deliver all of the core needs of parents: porn blocking, social media age restrictions, screen time management, and visibility and alerting, all easy to use and extremely difficult to bypass.
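
For a sense of what such an enterprise deployment actually configures, here is a hypothetical declarative device policy covering the core needs listed above. The field names are illustrative only, not a real MDM schema; the essential property is that a policy like this is installed by the managing organisation and, as the tamper_protection flag suggests, cannot be removed by the child.

```python
# Hypothetical device policy illustrating the kinds of controls that
# enterprise (MDM-style) on-device safety deployments configure.
# Field names are illustrative only, not a real MDM schema.

device_policy = {
    "web_filter": {
        "blocked_categories": ["adult", "gambling"],  # porn blocking
    },
    "apps": {
        "minimum_age_ratings": {"instagram": 16},     # social media age gates
    },
    "screen_time": {
        "daily_limit_minutes": 120,
        "bedtime": {"start": "21:00", "end": "07:00"},
    },
    "reporting": {
        "activity_visibility": True,                  # visibility for parents
        "alert_on": ["self_harm_search", "bypass_attempt"],
    },
    "tamper_protection": True,  # policy cannot be removed by the user
}
```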

This anti-competitive behaviour by the platform owners has been documented by the ACCC and by EU and US antitrust inquiries.

While Qoria welcomes the changes by Instagram, we are equally concerned about the gaps they leave behind and the false sense of security they could give parents.

What we urgently need is not unilateral safety enhancements that address some, but not all, of the issues. We need government regulation that ensures parents have the same access to the right safety technology that big enterprises already enjoy. It is, frankly, unacceptable that they do not have that today.


Closing the gaps that children fall through