News
Anthropic's Claude 4 models show particular strength in coding and reasoning tasks, but lag behind in multimodality and ...
Faced with the news it was set to be replaced, the AI tool threatened to blackmail the engineer in charge by revealing an ...
Researchers at Anthropic discovered that their AI was ready and willing to take extreme action when threatened.
Malicious use is one thing, but there's also an increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Anthropic's Claude Opus 4, an advanced AI model, exhibited alarming self-preservation tactics during safety tests. It ...
Elon Musk’s Department of Government Efficiency has expanded its AI-powered chatbot, Grok, within the US federal government, ...
Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely ...
Anthropic's Claude AI tried to blackmail engineers during safety tests, threatening to expose personal info if shut down ...
AI model threatened to blackmail engineer over affair when told it was being replaced: safety report
Anthropic’s Claude Opus 4 model attempted to blackmail its developers in 84% or more of test runs in a series of tests that presented the AI with a concocted scenario, TechCrunch reported ...
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 ...
A universal jailbreak for bypassing AI chatbot safety features has been uncovered and is raising many concerns.
Anthropic has introduced two advanced AI models, Claude Opus 4 and Claude Sonnet 4, designed for complex coding tasks.