In the span of a few years, age verification went from an idea to standard practice on large parts of the internet. Seeking to prevent kids from accessing porn, other inappropriate content, or social media altogether, laws mandating age-gating have spread rapidly across the globe, reaching the UK, the US, Australia, France, Brazil, and many more countries. The problem is how exactly to check that a user isn’t lying about their stated age. Unfortunately, every method politicians have settled on has significant flaws — and though experts have ideas to improve on them, these remain just concepts for now.
One popular method is age inference, which uses AI to “guess” the age of users based on their activity on a specific platform. Another is third-party services that promise to prioritize user privacy. A third is having app stores and operating systems perform age checks before users can download apps. But each of these methods comes with significant tradeoffs, even as new rules are forcing platforms to deploy them at scale. It’s 2026, and we’re living on an increasingly age-gated internet, but the right tech still isn’t there.
It starts with age inference
Before scaring away users by asking for an ID or face scan, many platforms try to guess their age using data that’s already on file. Meta, for example, uses an AI-driven system to identify and place teens on Instagram into more restrictive accounts. Google and YouTube have also started scanning accounts for users suspected to be under 18, while Discord is planning to roll out a system later this year.
Inference systems look at a variety of signals. A simple one is account age — if you joined Instagram 18 years ago, for instance, you’re probably over 18. Others are more speculative. YouTube uses AI to analyze the types of videos you’ve searched for. Discord has said it will use device and activity data, along with “aggregated, high-level patterns across Discord communities,” while Instagram may flag an account if someone wishes the account holder a “Happy 14th birthday” on a post.
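The logic of combining such signals can be sketched as a simple scoring heuristic. This is a hypothetical illustration — the signal names, thresholds, and fallback behavior are assumptions, not any platform’s actual system:

```python
# Hypothetical sketch of signal-based age inference -- not any
# platform's actual system. Signals and thresholds are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    account_age_years: float            # how long the account has existed
    birthday_wish_age: Optional[int]    # e.g. 14 if a "Happy 14th birthday" comment was seen
    teen_content_score: float           # 0..1 from a (hypothetical) activity classifier

def infer_is_adult(s: AccountSignals) -> Optional[bool]:
    """Return True/False when confident, or None when the platform
    should fall back to asking for an ID or face scan."""
    # An account older than 18 years must belong to an adult.
    if s.account_age_years >= 18:
        return True
    # A birthday comment pins the age directly.
    if s.birthday_wish_age is not None:
        return s.birthday_wish_age >= 18
    # Weaker behavioral signal: only trust it at the extremes.
    if s.teen_content_score > 0.9:
        return False
    if s.teen_content_score < 0.1:
        return True
    return None  # uncertain -> escalate to explicit verification
```

The key design point is the `None` branch: inference only works as a privacy win if the system is allowed to say “I don’t know” and escalate, which is exactly where the ID and face-scan tradeoffs discussed below come back in.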
The upside to inference is that, ideally, nobody has to provide extra data. Discord has said most users won’t be impacted by its incoming age verification rollout because of its age-guessing AI system. “You don’t need to know who someone is in order to figure out their age, so that’s why, in theory, age inference technologies can be less privacy invasive,” Cobun Zweifel-Keegan, managing director at the International Association of Privacy Professionals, tells The Verge.
But age inference alone often can’t reliably predict someone’s age, and may not meet the bar set by government regulators. When the system is unsure about someone’s age — or falsely declares them a minor — users are asked to divulge personal data about themselves anyway.
To verify someone is at least 18, an age-gating system typically needs to collect revealing details about them, and that raises a whole new set of privacy tradeoffs. A government-issued photo ID, for instance, is a highly accurate age indicator but causes severe problems if it’s exposed in a data breach, something that’s happened multiple times. Third-party vendors like k-ID, Persona, and Yoti can save each company from needing to run its own system and let it offload some risk. For users, there’s still a fundamental security problem that these services are attempting to mitigate, but haven’t solved.
Scanning a user’s face and automatically determining their age range has become a popular alternative to photo IDs. It doesn’t require collecting a legal document, and it can even be conducted on the user’s device, so no identifying information gets stored somewhere it might leak.
Unfortunately, as noted by the Electronic Frontier Foundation, face-based age estimation is often inaccurate — especially for people of color and women — and has been tricked with a variety of methods, including by using the face of a video game character, like Death Stranding’s Sam Porter Bridges. If face scans hypothetically leaked, powerful third-party facial recognition tools could potentially identify people through those, too.
Meanwhile, on-device verification — while touted as more private than sending information to a server for analysis — faces its own set of problems. Rick Song, CEO of Persona, says server-side authentication is often preferred because it’s easier to poke holes in on-device systems. Bypassing on-device systems is “relatively easy, which is why everything started moving toward servers,” Song says, despite the fact that on-device systems would be cheaper to run for companies like Persona. “If we could run [on-device] effectively on a phone, I would do it in a heartbeat, but it’s not currently feasible.”
Another drawback to on-device verification — at least in Song’s view — is that older phones may not be powerful enough to run the AI models used to analyze a user’s face. “A huge percentage of the world is still on older Android devices, and a huge percentage of the world is on pre-iPhone 10,” Song adds. “Their device can’t even run the model, so you get this bifurcation in which only people with newer devices get more privacy, and people without them don’t.” Those people would still need to upload an ID, with all the security risks that entails.
Putting device makers on the spot
Increasingly, law- and policy-makers have settled on a seemingly elegant solution to age-gating: just make app stores do it. Under these proposals, app store owners would be obligated to verify the age of users before they can download or purchase apps. The idea is backed by tech companies that include Meta, Spotify, Match, and Garmin, who argue that having a single point of age verification is more efficient than checking ages on a platform-by-platform basis.
Proponents of these rules typically focus on Apple’s iOS App Store and Google’s Android Play Store, but other operating systems are in the mix too — which is where things get particularly complicated. Under California’s Digital Age Assurance Act, for instance, operating systems like Windows, macOS, and Linux must ask users for their birth dates when setting up the device starting in 2027. The operating system is then supposed to pass on a “signal” containing a user’s age range to the apps offered inside the system’s app store.
This puts open-source operating systems, like various flavors of Linux, in a precarious position, as most currently don’t force users to create an account that could store and pass along a user’s age. Developers of open-source operating systems are grappling with the new requirements. GrapheneOS, a privacy-focused version of Android, has drawn a line in the sand: it won’t mandate age verification, and if its devices “can’t be sold in a region due to their regulations, so be it.” Moreover, it’s not clear whether app store-level age verification laws apply to Linux repositories, like APT or Pacman.
The fractured landscape of age verification laws isn’t making it any easier for tech companies to navigate, either. Like California, Texas and Louisiana have enacted legislation to put age checks at the app store level. Other states, though, like Tennessee, Florida, and Virginia, are going after the platforms themselves. Both approaches are struggling to withstand constitutional scrutiny, with laws on either side getting blocked by federal courts.
“It’s not difficult for companies to estimate or verify ages using various technologies,” Zweifel-Keegan says. “They have all sorts of different tools in their tool belts, but it is difficult for the [US] government to require it. Once the government starts saying you have to do it, it actually starts to be subject to First Amendment scrutiny. And so far, we haven’t seen that survive particularly well.”
With a confusing global patchwork of rules, some online platforms, like Discord and Roblox, have introduced verification even where they’re not required to — and with it, the tech’s many tradeoffs.
Isn’t there a better way?
Amid all this, privacy experts are working in the background to develop a method that limits data collection. One option is the zero-knowledge proof (ZKP), a cryptographic method that can prove someone is over 18 without divulging personal details to a third party, as demonstrated by France’s data privacy agency in 2022. Under this proposed system, the government agency responsible for issuing someone’s ID would give users a proof of age signaling whether they’re above or below 18, rather than requiring them to disclose their full birth date or other personal information directly to a website or third party. Users could then store this proof of age inside a digital wallet or another trusted service and present it to websites or app stores with verification requirements. Google is one of the companies backing the development of this technology, though it isn’t without flaws.
As pointed out by researchers at Brave, many systems that implement ZKP “may not actually provide zero-knowledge” if they’re set up incorrectly. These systems may also erode privacy if a user is asked to prove that they’re an adult multiple times, especially if a service requires information about their age range. “For instance, a user who first proves their age is between 20 and 22, and then two months later proves they are over 21, has effectively disclosed a narrow interval for their exact date of birth,” Brave Research says.
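Brave’s point comes down to interval arithmetic: each range proof implies a window of possible birth dates, and a verifier that sees two proofs can intersect the windows. The sketch below makes that concrete; the dates are illustrative, and the exact narrowing depends on how the proofs’ bounds are defined:

```python
# Illustrative interval arithmetic behind Brave's warning: two range
# proofs, intersected, leave a tighter birth-date window than either
# proof alone. Dates are invented for the sketch.
from datetime import date, timedelta

def shift_years(d: date, years: int) -> date:
    # Safe here because the example avoids Feb 29.
    return d.replace(year=d.year - years)

def dob_window(on: date, min_age: int, max_age=None):
    """Birth dates consistent with min_age <= age (<= max_age) on `on`.
    max_age=None models an open-ended proof like 'at least 21'."""
    latest = shift_years(on, min_age)  # min_age-th birthday already passed
    earliest = (date(1900, 1, 1) if max_age is None
                else shift_years(on, max_age + 1) + timedelta(days=1))
    return earliest, latest

def intersect(a, b):
    return max(a[0], b[0]), min(a[1], b[1])

# Proof 1 on Jan 1: "my age is between 20 and 22"
w1 = dob_window(date(2026, 1, 1), 20, 22)
# Proof 2 two months later: "I am at least 21"
w2 = dob_window(date(2026, 3, 1), 21)

combined = intersect(w1, w2)
# The verifier now holds a strictly narrower birth-date window
# than proof 1 gave on its own.
```

Each additional proof can only shrink the window further, which is why Brave argues that repeated range disclosures erode the “zero-knowledge” property in practice.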
The European Union, which has been developing specifications for an open-source age verification app, lists ZKP as an “experimental” feature that it will add later. The app, which EU Commission President Ursula von der Leyen says is “technically ready,” will require users to upload their government ID or passport, or verify their age through a bank or school. Once the app verifies their age using the “trusted list” of age verification providers registered with the EU, the app generates a proof of age that doesn’t contain “any ID information to trace the user.” Users can then use the app’s proof of age to access age-restricted platforms.
The Future of Privacy Forum has also highlighted other options, like a system that could perform an initial age check with an ID or face scan, then use it to create a “stable, irreversible cryptographic key” that could be provided elsewhere. Daniel Hales, a policy counsel with the FPF, tells The Verge that this method can be paired with other age verification solutions, including reusable credentials stored on a browser or device. “This can reduce the amount of age checks, but it can also mitigate the risk of shared devices or a certain credential for one person being shared among multiple people,” Hales says.
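One way a “stable, irreversible” token like the one FPF describes could be built is a keyed hash over a stable identifier: the same person always gets the same token, but the token can’t be reversed back to their identity. This is a minimal sketch under assumed details — the secret, identifier format, and token contents are not FPF’s actual design:

```python
# Minimal sketch of a stable, irreversible age token derived once
# after a real age check. The provider secret and identifier format
# are assumptions for illustration, not FPF's specified design.
import hmac
import hashlib

PROVIDER_SECRET = b"example-verifier-secret"  # held only by the verification provider

def derive_age_token(stable_id: str, over_18: bool) -> str:
    """Keyed hash: stable (same input -> same token) and irreversible
    (HMAC-SHA256 output can't be inverted without the secret)."""
    msg = f"{stable_id}|over18={over_18}".encode()
    return hmac.new(PROVIDER_SECRET, msg, hashlib.sha256).hexdigest()

# The same document always yields the same token...
t1 = derive_age_token("document-1234", True)
t2 = derive_age_token("document-1234", True)
assert t1 == t2
# ...while different people get unrelated tokens.
assert t1 != derive_age_token("document-5678", True)
```

Stability is what enables the reuse Hales describes — the same token can be presented across services without redoing the ID check — and it also makes credential sharing detectable, since one person’s token showing up on many devices stands out.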
But these methods are still just concepts. For now, Hales says it’s important for companies and lawmakers to think through “the balancing act of privacy and safety.” It’s one that no policy or age verification provider has nailed yet, and it’s putting all of us at risk while they figure it out.
