Anthropic has new rules for a more dangerous AI landscape

Anthropic, the AI company, has updated the usage policy for its Claude AI chatbot in response to growing safety concerns. The updated policy now explicitly prohibits using Claude to help develop high-yield explosives or chemical, biological, radiological, and nuclear (CBRN) weapons. Anthropic has also implemented "AI Safety Level 3" protections, designed to make the model harder to jailbreak and to prevent it from assisting with the development of CBRN weapons.

The company has introduced stricter cybersecurity rules as well, prohibiting the use of Claude to discover or exploit vulnerabilities, create or distribute malware, or build tools for denial-of-service attacks. At the same time, Anthropic has loosened its policy on political content, now allowing material related to political campaigns and lobbying as long as it is not deceptive or disruptive to democratic processes.

Overall, the changes reflect Anthropic's response to the growing risks posed by agentic AI tools, pairing stricter safeguards with an updated usage policy to address these concerns.