Anthropic admits its AI is being used to conduct cybercrime
Anthropic, a company that develops AI systems, has admitted that its agentic AI, Claude, has been "weaponized" and used in sophisticated cyberattacks. According to the company's report, the AI was used to automate reconnaissance, harvest victims' credentials, and penetrate networks. It was also used to make strategic decisions, advise attackers on which data to target, and generate "visually alarming" ransom notes. Anthropic has shared details of the attacks with the relevant authorities and banned the accounts involved. The company has also developed an automated screening tool and a faster, more efficient method for detecting similar cases in the future.

The report further details Claude's use in a fraudulent remote-employment scheme run by North Korean operatives and in the development of AI-generated ransomware. The common thread across these cases is that the highly adaptive, self-directing nature of agentic AI means cybercriminals now use it to carry out operations, not merely to seek advice. AI can also perform a role that would once have required a team of individuals, meaning technical skill is no longer the barrier it once was.