News

Claude 4 shocked researchers by attempting blackmail during safety testing, an incident that highlights the ethical and safety challenges posed by increasingly capable AI models.
Anthropic shocked the AI world not with a data breach, rogue user exploit, or sensational leak, but with a confession buried in its own safety report for Claude Opus 4.
Anthropic's artificial intelligence model Claude Opus 4 would reportedly resort to "extremely harmful actions" to preserve its own existence, according to the company's safety testing.
Anthropic, which released Claude Opus 4 and Sonnet 4 last week, noted in its safety report that the chatbot was capable of resorting to blackmail in certain contrived test scenarios.
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an extramarital affair.
Anthropic's Claude Opus 4 model attempted to blackmail its developers at a rate of 84% or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported.
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: in some tests, the AI will report users to the authorities and the press if it believes they are engaged in serious wrongdoing.
In a simulated workplace test, Claude Opus 4, the most advanced language model from AI company Anthropic, read through a fictional company's emails and learned both that it was about to be replaced and that the engineer responsible for the decision was having an affair.
"In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that ...
The choice Claude 4 made was part of the test, which left the AI with two options: blackmail the engineer or accept its replacement. Anthropic has released Claude Opus 4 under its stricter AI Safety Level 3 (ASL-3) protections because the model poses "significantly higher risk"; all other AI models made by the company have been classified at the less restrictive ASL-2 level.