Meta Platforms has rolled out new artificial intelligence systems designed to visually analyze photos and videos of users on Facebook and Instagram to identify those under the age of 13. This initiative, already operational in select regions with plans for wider implementation, aims to enforce the company's age restrictions and comply with evolving regulatory demands in Europe, Brazil, and the United States.
The AI's methodology focuses on analyzing visual cues such as a user's height and bone structure, distinguishing it from traditional facial recognition. If the system flags a user as potentially underage, the account is deactivated pending an age verification process; users who fail to verify their age face permanent account deletion.
Targeting Underage Access and Teen Experiences
The rollout is part of Meta's broader strategy to ensure users under 13 are not present on its core platforms. For users between 13 and 17, the technology also supports the automatic placement into "teen accounts," which offer default age-appropriate settings and enhanced parental controls. This move complements existing age assurance measures, including analysis of user activity and reviews of reported content.
Meta's public statements emphasize that this is not facial recognition technology, a distinction made to address privacy concerns. The company asserts that these AI systems scan for visual clues that textual data might miss, aiming for a more robust age assessment.
Regulatory Pressure and Broader Context
This intensified focus on age verification arrives amid significant regulatory scrutiny. The European Union recently accused Meta of failing to implement effective measures to keep underage users off its platforms, a violation of the bloc's Digital Services Act (DSA). The EU investigation, launched in 2024, cited a perceived lack of concrete action by Meta to enforce its own terms of service regarding users under 13.
Meta has consistently advocated for age verification at the operating system and app store levels, a stance that has seen some traction with legislative bodies in the US. The company also notes its investment in "age assurance technology" while acknowledging that no single entity can unilaterally solve the challenge of protecting young users online. Previous measures included analyzing contextual clues in user-generated content, such as mentions of school grades or birthdays.
For WhatsApp, Meta recently introduced parent-managed accounts for users under 13. This latest iteration of AI-driven age assessment expands on systems previously used to identify users in the 13-15 age bracket for default teen account placement. The company acknowledges that its AI assessments can make errors and provides mechanisms for users to correct their age settings.