Google is testing a new machine learning–based age estimation system in the U.S. that analyzes search queries, YouTube viewing habits, and account age to assess whether a user is under 18. If the system determines a user may be underage, it applies protective settings such as disabling personalized ads, restricting certain apps in the Play Store, and enabling digital wellbeing reminders on YouTube.
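Google has not published implementation details, but the flow described above, combining existing account signals into an under-18 estimate and then switching on protective defaults when that estimate crosses a threshold, can be sketched roughly as follows. Every name, signal, weight, and threshold in this snippet is a hypothetical illustration, not Google's actual model or API.

```python
# Illustrative sketch only: signals, weights, and thresholds are hypothetical,
# not Google's actual model or API.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Existing account signals of the kind the estimator reportedly relies on."""
    account_age_years: float   # how long the account has existed
    teen_query_share: float    # fraction of recent searches typical of teens (0-1)
    teen_video_share: float    # fraction of watch time on teen-skewing videos (0-1)


@dataclass
class ProtectiveSettings:
    """Defaults applied when an account is estimated to belong to a minor."""
    personalized_ads: bool = True
    play_store_restricted: bool = False
    wellbeing_reminders: bool = False
    maps_timeline: bool = True


def estimate_under_18_probability(s: AccountSignals) -> float:
    """Toy linear score mapped to [0, 1]; a real system would use a trained model."""
    score = (
        0.5 * s.teen_query_share
        + 0.4 * s.teen_video_share
        + 0.1 * max(0.0, 1.0 - s.account_age_years / 10.0)
    )
    return min(max(score, 0.0), 1.0)


def apply_protections_if_minor(s: AccountSignals, threshold: float = 0.7) -> ProtectiveSettings:
    """Flip the protective defaults described in the article if the estimate crosses the threshold."""
    settings = ProtectiveSettings()
    if estimate_under_18_probability(s) >= threshold:
        settings.personalized_ads = False      # disable personalized ads
        settings.play_store_restricted = True  # limit apps in the Play Store
        settings.wellbeing_reminders = True    # YouTube digital-wellbeing reminders
        settings.maps_timeline = False         # turn off Maps Timeline
    return settings


if __name__ == "__main__":
    flagged = apply_protections_if_minor(
        AccountSignals(account_age_years=1.0, teen_query_share=0.8, teen_video_share=0.7)
    )
    print(flagged)
```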
The pilot will initially cover a small group of U.S. users before a broader rollout in the coming weeks, with global expansion expected in 2026. Users whose accounts are flagged as belonging to minors receive an email explaining the adjustments and can appeal the decision by verifying their age with a selfie, credit card, or government ID.
The feature extends beyond YouTube, affecting services such as Maps (where Timeline is turned off) and the Play Store, with content filters targeting sensitive categories such as body image, alcohol, and gambling. Google says the system does not collect any extra user data and relies only on existing account signals. The goal is to preserve adult access to services while adding safeguards for younger users.
Google’s effort responds to increased regulatory pressure in the U.S. and abroad, including child-safety laws in the UK and Australia. Other platforms, such as Instagram and Roblox, have adopted similar AI-based age verification methods. Google hopes this proactive approach will enhance online safety while minimizing friction for adult users.