Instagram launched new teen accounts on Tuesday, with the company promising more “built-in protections” for young people amid a growing backlash against social media platforms for their adverse impact on youth mental health.
Naomi Gleit, head of product at parent company Meta, told the AP that the new measures target the three concerns raised most often by parents: teens seeing content they don’t want to see, being contacted by people they don’t want to be contacted by, and spending too much time on the app. Teen accounts will be private by default, and direct messages (DMs) will only be possible with people teens follow or are already connected with. “Sensitive content,” from violent videos to promotion of cosmetic surgery, will be limited. Teens will be notified if they spend more than 60 minutes on the app, and “sleep mode” will turn off notifications and send auto-replies to DMs between 10 pm and 7 am. All users under 16 will need parental permission to change these settings.
Anyone in the US, UK, Canada or Australia who is under 18 will now automatically be placed into the more restrictive teen accounts when they sign up for Instagram. Those with existing accounts will be migrated over the next 60 days, while teens in the European Union can expect to see their accounts adjusted later this year. Meta did not comment on any other countries or regions.
Critics are less than impressed. Nicole Gil, co-founder and executive director of the nonprofit Accountable Tech, panned Instagram’s announcement as the “latest attempt to avoid actual independent oversight and regulation” in comments to the AP. The PR exercise, Gil said, “falls short of safety by design.” eMarketer analyst Jasmine Enberg largely agreed, telling the AP that the changes will have little impact on the platform’s bottom line or on teen engagement, largely because “there are still plenty of ways to circumvent the rules.”
Meta’s head of global safety, Antigone Davis, acknowledged as much to tech publication The Verge. “We know some teens are going to try to lie about their age to get around these protections.” But now, she says, AI will also be used to scan for “signals” that indicate a user is under 18, such as someone wishing them “Happy 14th birthday!” (Prior to the change, anyone trying to switch to an over-18 account had to record a video selfie, upload their ID or have other users vouch for their age.)
The other major criticism concerns Meta’s emphasis on parental control, which shifts the burden of moderation onto parents rather than providing it at the source, despite Nick Clegg, Meta’s president for global affairs, admitting a week before the rollout that parents don’t use the parental controls the company has already introduced. Where Instagram frames the new accounts as empowering parents, Gil sees a company protecting itself at the expense of its user base. “Meta’s business model is built on addicting its users and mining their data for profit; no amount of parental and teen controls Meta is proposing will change that.”