Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out

WIRED

Anthropic is starting to train its models on new Claude chats. If you're using the bot and don't want your chats used as training data, here's how to opt out.

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models, unless those users opt out. Previously, the company did not train its generative AI models on user chats. When Anthropic's privacy policy updates on October 8 to start allowing for this, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models. "All large language models, like Claude, are trained using large amounts of data," reads part of Anthropic's blog post explaining why the company made this policy change.