Cyberattackers are integrating large language models (LLMs) into their malware, running prompts at runtime to evade detection and augment their code on demand.
Researchers at Google’s Threat Intelligence Group (GTIG) have discovered that hackers are creating malware that can harness large language models (LLMs) to rewrite itself on the fly.
What Happened: GTIG researchers have identified a new type of malware they are calling PROMPTFLUX.