Data Dilemma: Anthropic Introduces Opt-Out Data Sharing Policy Amid AI Ethics Debate
Anthropic, the company behind the Claude AI platform, has announced a major policy shift: users must decide by September 28, 2025, whether their conversations can be used to train its artificial intelligence models.
Under the new policy, Anthropic will retain user conversations and coding sessions for up to five years unless users explicitly opt out. This marks a sharp change from the company's earlier practice, under which consumer chats were automatically deleted within 30 days unless they were flagged for violations or retained for legal reasons, in which case data could be held for up to two years.
The new rules apply to Claude Free, Pro, and Max users, as well as those using Claude Code. Business clients, including government, enterprise, and education users, remain exempt, mirroring OpenAI's practice of shielding enterprise customers from its training policies.
In a statement, Anthropic framed the change as giving users a choice, while emphasizing that those who allow their data to be used will "help improve model safety and accuracy" in areas such as coding, reasoning, and harmful content detection. Still, industry observers suggest the underlying motive is less altruistic: AI companies need vast amounts of real-world conversational data to compete with rivals like OpenAI and Google.
The update also highlights growing scrutiny of data practices across the AI industry. OpenAI, for instance, is currently battling a court order requiring indefinite retention of ChatGPT consumer data — a move the company has criticized as unnecessary and in conflict with its privacy commitments.
Anthropic’s rollout has raised eyebrows for another reason: the design of its user interface. New users will see the choice during signup, but existing customers face a pop-up titled “Updates to Consumer Terms and Policies,” featuring a large “Accept” button and a much smaller toggle for training permissions, which is pre-set to “On.” Critics warn this could cause many users to agree to data sharing without realizing it.
Privacy experts argue that the complexity of AI systems makes “informed consent” nearly impossible for the average user. The U.S. Federal Trade Commission has previously warned AI companies against quietly altering terms of service or burying critical information in fine print. Whether the FTC will act on this latest policy shift remains uncertain.
As the AI industry evolves rapidly, these sweeping changes to data retention and usage policies underscore a growing tension: users’ desire for privacy versus tech companies’ need for data to fuel innovation.
Source: TechCrunch