Claude Users Must Opt Out: Anthropic Defaults to Training AI with Conversations
Anthropic's Claude now trains on your conversations by default; users must actively opt out to protect their data.
August 28, 2025

In a significant policy shift that places the onus of privacy management squarely on the user, AI company Anthropic will now use conversations from its Claude chatbot to train future artificial intelligence models by default. Users of the platform's Free, Pro, and Max plans must now actively opt out if they wish to keep their data private and excluded from this training pipeline. This move reverses the company's previous stance, which was often lauded for its stronger privacy protections compared to competitors, and signals a broader trend within the AI industry where user data has become an indispensable resource in the competitive race to develop more powerful models.
The new policy, detailed in updated Consumer Terms and Privacy Policies, is being rolled out to existing users through in-app notifications.[1] Existing users have until September 28, 2025, to make a choice about their data; after that date, a decision will be required to continue using the service.[2][1] New users will be presented with the choice during signup.[2] For those who agree to let their data be used, whether by explicit consent or by inaction, Anthropic is also extending its data retention period from the previous 30 days to five years.[3][4] The company has stated that conversations deleted by users will not be used for future model training.[3]

The opt-out can be managed through the pop-up window labeled "Updates to Consumer Terms and Policies" or later in the "Privacy" section of Claude's settings by disabling the "Help improve Claude" toggle.[2] Notably, these changes do not affect enterprise-tier services such as Claude for Work, Claude for Education, or API usage through commercial platforms, all of which operate under different terms.[3][1]
Anthropic presents this change as a necessary step to enhance the capabilities and safety of its AI.[1] The company argues that by participating, users will help improve skills like coding and reasoning, leading to better models for everyone.[3] It further claims the data will strengthen safeguards against harmful uses of AI, such as scams and abuse, by making its systems for detecting harmful content more accurate.[3][1] To allay privacy fears, Anthropic has stated it will use automated tools to filter or obscure sensitive data and will not sell user data to third parties.[3] This justification, however, accompanies a fundamental shift from an opt-in to an opt-out model, a change that has drawn criticism for moving away from a consent-first approach to user privacy.[5] Previously, Anthropic's policy was not to train on user data unless it was explicitly submitted as feedback or flagged for safety review, a key differentiator in a market hungry for data.[5][6]
This policy revision brings Anthropic in line with several of its major competitors, reflecting a consolidation of industry practices around data collection.[4] Google, with its Gemini chatbot, and Meta have already implemented similar opt-out systems, making default data collection for training purposes the emerging norm for consumer-facing AI products.[4] This industry-wide convergence places the burden of data protection on the individual user, who must navigate settings and privacy policies to prevent their information from being used in ways they may not be comfortable with.[4] The debate over opt-in versus opt-out consent models is central to discussions of data ethics. Privacy advocates argue that opt-in frameworks represent true informed consent, because users must make a conscious decision to share their data.[7][8] Opt-out systems, by contrast, often achieve higher participation rates not because users are enthusiastic, but because of inaction, lack of awareness, or the friction of opting out.
The implications of this shift extend beyond individual privacy, touching on the very nature of AI development and user trust. The immense value of real-world conversational data for refining large language models is undeniable; it provides the nuances, corrections, and diverse queries that synthetic or publicly scraped data often lacks.[4] However, this reliance creates a dynamic in which user interactions become a form of unpaid labor contributing to the development of commercial products. For users, the risk is that sensitive personal or professional information, even after automated filtering, may be absorbed into a model's training data, with uncertain consequences down the line.[9] This move by a company that once branded itself on a more safety-conscious and ethical approach could erode user trust and intensify the ongoing debate over how personal information should be controlled and used in the age of generative AI. As these powerful tools become more integrated into daily life, the line between user, product, and training data continues to blur, making active management of privacy settings more critical than ever.