Meta may soon rely on an AI-powered system to evaluate potential harms and privacy concerns for nearly 90% of updates to its apps, including Instagram and WhatsApp, according to internal documents obtained by NPR.
Currently, Meta conducts privacy reviews under a 2012 agreement with the Federal Trade Commission (FTC), which requires the company to assess risks before launching new features. These evaluations have traditionally been handled by human reviewers.
Under the new approach, product teams will reportedly submit a questionnaire about their updates, after which an AI system will provide an “instant decision” on potential risks. The AI will also outline requirements that must be met before the update can proceed.
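NPR's description amounts to a triage pipeline: structured questionnaire answers go in, and a decision plus a checklist of launch requirements comes out, with only some cases escalated to humans. The reported documents don't describe any implementation details, so the sketch below is purely illustrative; every name in it (Questionnaire, assess_update, the hand-written risk rules) is an assumption for the sake of the example, not Meta's actual system.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical illustration of the workflow NPR describes: a product team's
# questionnaire answers go in, an "instant decision" and a list of launch
# requirements come out. None of these names or rules are Meta's.

class Decision(Enum):
    APPROVED = "approved"
    APPROVED_WITH_REQUIREMENTS = "approved_with_requirements"
    ESCALATE_TO_HUMAN_REVIEW = "escalate_to_human_review"

@dataclass
class Questionnaire:
    touches_user_data: bool
    affects_minors: bool
    new_data_collection: bool
    description: str = ""

@dataclass
class ReviewResult:
    decision: Decision
    requirements: list[str] = field(default_factory=list)

def assess_update(q: Questionnaire) -> ReviewResult:
    """Return an instant decision plus any requirements that must be met
    before launch. A real system would presumably score risk with a model
    rather than hand-written rules like these."""
    requirements: list[str] = []
    if q.new_data_collection:
        requirements.append("Document data retention and deletion policy")
    if q.touches_user_data:
        requirements.append("Complete a data-flow review")
    # Per Meta's statement, complex or novel changes still go to human experts.
    if q.affects_minors:
        return ReviewResult(Decision.ESCALATE_TO_HUMAN_REVIEW, requirements)
    if requirements:
        return ReviewResult(Decision.APPROVED_WITH_REQUIREMENTS, requirements)
    return ReviewResult(Decision.APPROVED)

if __name__ == "__main__":
    result = assess_update(Questionnaire(
        touches_user_data=True,
        affects_minors=False,
        new_data_collection=True,
        description="Add a new sharing option to Stories",
    ))
    print(result.decision.value)
    for req in result.requirements:
        print("-", req)
```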
While this shift could speed Meta's rollout of new features, a former executive warned NPR that it creates “higher risks,” since AI is less likely than human reviewers to catch the real-world consequences of product changes before they cause harm.
In response, a Meta spokesperson said the company has invested more than $8 billion in its privacy program and remains committed to delivering innovative products while meeting its regulatory obligations.
“As risks evolve and our program advances, we refine our processes to improve risk detection, decision-making efficiency, and user experience,” the spokesperson said. “We use technology to ensure consistency in low-risk evaluations while relying on human experts for complex or novel challenges.”