After building its brand on a privacy-first policy, Anthropic is changing course with a new consumer data policy for its AI chatbot, Claude. Starting in late September, the company will begin using consumer chat data to train its models by default, a significant shift from its previous opt-in approach.

Under the new terms, users on consumer plans (Claude Free, Pro, and Max) will have their conversations and coding sessions used for model improvement unless they actively opt out. For those who do not opt out, the policy also extends data retention from 30 days to five years.

Anthropic says the change is necessary to improve model safety and performance, but privacy advocates have raised concerns about the design of the consent pop-up: a prominent "Accept" button paired with a pre-selected data-sharing toggle may lead users to agree to the new terms inadvertently. Business and enterprise customers operating under separate commercial agreements are not affected by the new policy.