Anthropic’s AI chatbot Claude can now choose to stop talking to you

Anthropic has added a feature to its Claude Opus 4 and 4.1 models that lets the chatbot end certain conversations on its own. The capability is reserved for rare, extreme cases of persistently harmful or abusive interactions, such as repeated attempts to solicit child sexual abuse material or information that could enable terrorism. According to the company, the feature is not triggered simply because a topic is controversial; rather, it gives Claude a way out as a last resort, once multiple attempts at redirection have failed and a productive exchange is no longer possible. When a conversation is ended this way, the user cannot send further messages in that thread, but they can start a new chat or edit their previous messages to branch the conversation. The feature grows out of Anthropic's exploratory research on model welfare, which examines whether and how AI models should be shielded from distressing interactions.