A new study suggests that the advanced reasoning powering today’s AI models can weaken their safety systems.
Microsoft research uncovers backdoor risks in language models and introduces a practical scanner to detect tampering and strengthen AI security.
Chinese models lag behind their American counterparts in performance, cost, security and adoption, despite their growing global ...
Chinese scientists claim to have developed the world's first "brain-like" large language model, similar to ChatGPT, designed to consume less power and work without Nvidia ...
Several frontier AI models show signs of scheming. Anti-scheming training reduced misbehavior in some models. Models know they're being tested, which complicates results. New joint safety testing from ...
New Anthropic small model costs one-third of Sonnet 4. Enterprise customers account for 80% of Anthropic's revenue, the company says. AI companies focus on cheaper models to widen appeal. Oct 15 (Reuters) - ...