Instagram announces age restrictions on live streaming

The new limit on live streaming by under-16s comes as the company extends teen accounts to its other major social media platforms
  • Advocates push for more moderation of dangerous posts, the alarming prevalence of which was highlighted back in February

UPDATED: 10 Apr 2025, 7:54 am

Meta has announced a block on teens live streaming on Instagram, part of a broader push to expand safety measures amid rising concerns about teen safety and the content young users encounter on its platforms.

The Guardian reports that children under 16 will be barred from using Instagram’s Live feature without parental permission. Permission will also be required to turn off a feature that blurs images containing suspected nudity in direct messages.

These new rules for under-16s come alongside an announced extension of the teen accounts system to two other Meta platforms, Facebook and Messenger. Teen accounts were introduced on Instagram last year, defaulting users under 18 into settings that allow parents to set daily time limits on the app, block usage during certain times and see the accounts children are directly messaging. 

While under-16s require parental permission to change these settings, 16- and 17-year-olds will be able to adjust the default features themselves.

Teen accounts for Facebook and Messenger will initially be rolled out in the US, UK, Australia and Canada. Some 54 million Instagram teen accounts are currently in use and, according to Meta, over 90 percent of under-16s still have the default settings in place.

The NSPCC, a century-old British charity focused on protecting children, welcomed Meta extending the protective measures pioneered on Instagram to Facebook and Messenger but said the company needs to do more to prevent harmful material appearing on its platforms in the first place. 

[See more: Critics say Instagram’s restrictive new teen accounts don’t go far enough]

“For these changes to be truly effective, they must be combined with proactive measures so dangerous content doesn’t proliferate on Instagram, Facebook and Messenger in the first place,” Matthew Sowemimo, the associate head of policy for child safety online at the NSPCC, told the Guardian.

An incident in late February illustrated just how much dangerous content circulates on Instagram, when users found their feeds flooded with nudity, rape, violence, gore, animal abuse and dead bodies due to an apparent malfunction in the platform’s algorithm.

Reels, the platform’s short-video feature similar to TikTok, bombarded users with horrific videos, many of which, according to tech outlet 404 Media, had thousands of likes and hundreds of comments. Based on experiences shared in a thread on the r/Instagram subreddit, many of those comments came from users demanding to know why such content was in their feeds.

A day later, Meta announced that it had “fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended.”

The statement from a Meta spokesperson offered no details on the nature of the problem, which outside experts said could stem from a recent algorithm update that mistakenly prioritised violent or sensitive posts rather than restricting their visibility.

Amid distressed Reddit posts about the deluge of disturbing videos, one user commented that “its abysmal that [people] are able to even get these videos on Instagram in the first place”, while another remarked: “it’s nuts how this stuff is even uploaded. Like mods or no, the fact that a dude had a whole account called PeopleDeadDaily is alarming in itself.”
