Meta deploys AI to scan bone structure and body size for social media age verification

Meta replaces the honor system with AI that scans skeletal markers and digital footprints to enforce age-based biological surveillance.

May 5, 2026

The social media landscape is undergoing a fundamental shift in how user identity and age are authenticated, moving away from the traditional honor system toward a sophisticated regime of biological and behavioral surveillance.[1][2] Meta, the parent company of Instagram and Facebook, has significantly escalated its efforts to identify underage users by deploying an artificial intelligence system designed to analyze physical characteristics such as bone structure and body size.[1][2][3][4] This development marks a pivotal moment for the technology industry, as one of the world's largest platforms begins to use physiological markers as a primary tool for age verification, bypassing the self-reported birthdates that have long been the standard for digital entry. By scanning the physical proportions of users in photos and videos, Meta aims to proactively flag and restrict accounts belonging to minors, even when those users have attempted to circumvent safety protocols by providing false information.[4][5][6]
This new technological layer is centered on a proprietary software tool known as the Adult Classifier.[5][7][8] Unlike traditional biometric systems that rely on facial recognition to match a person's image to a specific identity, Meta's system is designed to assess general developmental markers.[2][3] The AI identifies visual cues that correlate with physiological maturity, such as height relative to environmental objects, shoulder width, and the proportions of the skeletal structure. Industry analysts point out that this approach is intended to circumvent the legal and ethical minefields associated with facial recognition, which has faced intense regulatory pushback in recent years. By focusing on "general themes and visual cues" rather than individual identity, Meta claims it can estimate age ranges with high precision without creating a permanent biometric map of a user's face.[3] This technological nuance is critical for the AI industry, as it demonstrates a shift toward ambient analysis: using the secondary data within an image to infer sensitive information about the subject.
The deployment of the Adult Classifier does not occur in isolation; it is part of a multi-signal approach that combines biological analysis with behavioral data. The system cross-references physiological markers with "contextual clues" found within the user's digital footprint.[1][2][3][4] This includes scanning captions for mentions of school grades, analyzing the content of comments such as "Happy 16th Birthday," and monitoring the age of the accounts a user follows. Meta has disclosed that the system also tracks interaction patterns, noting that different age groups tend to engage with content in distinct ways.[9] When the AI determines that an account holder is likely a minor, the platform takes automated action.[2][3][4][5][7][10] Users suspected of being under the age of 13 are immediately deactivated and required to provide government-issued identification or participate in a third-party video selfie verification process. Meanwhile, users identified as being between the ages of 13 and 17 are automatically moved into Teen Accounts.[6] These accounts are private by default, feature strict messaging restrictions, and impose time-management tools that cannot be deactivated without explicit parental consent.[7]
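The decision flow described above can be sketched in a few lines of code. This is a purely illustrative mock-up, not Meta's actual system: the class names, the birthday-comment heuristic, and the rule of preferring contextual evidence over the visual estimate are all assumptions invented for this sketch; only the enforcement tiers (under 13, 13 to 17, 18 and over) come from the article.

```python
# Hypothetical sketch of a multi-signal age-screening pipeline like the one
# described in the article. All names, regexes, and precedence rules are
# invented; Meta's actual Adult Classifier is a proprietary black box.
import re
from dataclasses import dataclass, field

@dataclass
class AccountSignals:
    visual_age_estimate: float               # e.g. from image-based developmental cues
    captions: list = field(default_factory=list)
    comments_received: list = field(default_factory=list)

# One contextual clue the article mentions: birthday comments
# such as "Happy 16th Birthday".
BIRTHDAY_RE = re.compile(r"happy\s+(\d{1,2})(?:st|nd|rd|th)\s+birthday", re.I)

def contextual_age_hints(signals: AccountSignals) -> list:
    """Extract explicit age mentions from comments on the account."""
    hints = []
    for comment in signals.comments_received:
        m = BIRTHDAY_RE.search(comment)
        if m:
            hints.append(float(m.group(1)))
    return hints

def route_account(signals: AccountSignals) -> str:
    """Map an estimated age onto the enforcement tiers the article describes."""
    hints = contextual_age_hints(signals)
    # Assumption for this sketch: explicit contextual evidence, when present,
    # overrides the visual estimate.
    estimate = min(hints) if hints else signals.visual_age_estimate
    if estimate < 13:
        return "deactivate_pending_id_or_video_selfie"
    if estimate < 18:
        return "move_to_teen_account"
    return "no_action"
```

For example, an account whose photos suggest an adult but which received a "Happy 16th Birthday!!" comment would be routed to `move_to_teen_account`, mirroring how contextual clues can override a misleading visual signal.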
The aggressive rollout of this technology is a direct response to a mounting global regulatory crisis. Lawmakers in the United States, the European Union, and Australia have grown increasingly impatient with the failure of social media companies to keep children off their platforms. A report from Australia's eSafety Commissioner recently revealed that up to 80 percent of children under the age of 13 regularly bypass age restriction policies on major platforms.[6] Furthermore, the European Commission has initiated investigations under the Digital Services Act to determine whether Meta is doing enough to protect the mental and physical health of minors. By automating the detection of age through AI, Meta is attempting to satisfy these legal mandates while simultaneously maintaining its user base.[8][9][11] The company has shared data indicating that when these AI-driven protections are applied, approximately 97 percent of users under the age of 16 choose to keep the restrictive settings in place, suggesting that the automated transition is an effective tool for enforcing safety at scale.
However, the use of AI to scan bodies for skeletal markers introduces a host of new ethical and privacy concerns. Critics argue that the practice constitutes a form of universal surveillance, where every photo uploaded to the platform is scrutinized for biological data without the user's explicit consent. There are also significant concerns regarding the accuracy of such systems and the potential for demographic bias. Human development is highly variable; a petite adult could be misclassified as a minor, while a tall or more physically developed child might evade detection. While Meta uses third-party services like Yoti to handle the high-stakes verification of misclassified users—a system that boasts a mean absolute error of roughly 1.5 years for teenagers—the initial scanning by the Adult Classifier remains an internal, black-box process. For the broader AI industry, this raises questions about the "creep" of physiological monitoring. If a company can scan for bone structure to determine age, it theoretically has the capability to scan for other biological traits, health conditions, or demographic markers, potentially leading to a new era of data harvesting based on the physical body.
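To make the accuracy figure above concrete, mean absolute error is simply the average of the absolute gaps between predicted and true ages. The sample ages below are invented for illustration; only the roughly-1.5-year figure for teenagers comes from the article.

```python
# What a "mean absolute error of roughly 1.5 years" means in practice.
# The predicted/actual ages here are fabricated sample data.
def mean_absolute_error(predicted, actual):
    """Average absolute gap between predicted and true values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted_ages = [16.5, 13.5, 15.5, 18.5]
actual_ages    = [15,   15,   14,   17]

mae = mean_absolute_error(predicted_ages, actual_ages)  # 1.5 in this sample
```

An average error of 1.5 years is small in absolute terms, but it is enough to place a real 14-year-old on either side of a hard 13+ or 16+ cutoff, which is why misclassification concerns persist even for a well-calibrated estimator.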
This push also signals a strategic maneuver by Meta to shift the burden of age verification elsewhere. Even as it deploys these sophisticated internal tools, Meta has publicly advocated for age verification to be handled at the operating system or app store level. The company argues that Google and Apple are better positioned to verify a user's age once, during device setup, rather than requiring every individual app to perform its own invasive checks. By demonstrating the intensity and complexity of the AI required to verify age internally, Meta may be highlighting the unsustainability of the current model, pressuring hardware providers to take on the responsibility of identity management. This tension between software platforms and hardware ecosystems will likely define the next phase of digital regulation, as the industry grapples with the question of who should own the "truth" of a user's age.
Ultimately, Meta's decision to scan bone structure and body size represents the end of the anonymous, age-agnostic social web. The move reflects a broader trend in the AI sector toward "zero-trust" environments, where user-provided data is treated as suspect until verified by algorithmic analysis. While the primary goal is ostensibly child safety, the infrastructure being built to achieve that goal is one of unprecedented biological observation. As these systems expand to Brazil, the European Union, and the United States, they will serve as a massive live experiment in the use of AI for societal gatekeeping. The success or failure of these tools will determine whether the future of the internet is one where our digital rights are inextricably linked to our physical characteristics, and whether the trade-off for a safer online experience is the permanent loss of physiological privacy. In this new era, the AI is no longer just organizing our information; it is measuring our bodies to determine our place in the digital world.
