Cyberattackers are integrating large language models (LLMs) into malware, issuing prompts at runtime to evade detection and generate or augment code on demand.
In our study, a novel pipeline combining a SAST tool with an LLM cut false positives by 91% compared with a widely used standalone SAST tool.
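As a rough illustration of how such a SAST-LLM combination could be wired up, the sketch below assumes the SAST tool emits a SARIF report and that an LLM is asked to triage each finding as a likely true or false positive. The client library, model name, prompt wording, and the `classify_finding`/`triage_sarif` helpers are illustrative assumptions, not the study's actual pipeline.

```python
import json
from openai import OpenAI  # any LLM client would do; the OpenAI SDK is used for illustration

client = OpenAI()

def classify_finding(rule: str, message: str, snippet: str) -> str:
    """Ask the model whether a SAST finding looks like a real bug or a false positive."""
    prompt = (
        "You are triaging a static-analysis finding.\n"
        f"Rule: {rule}\n"
        f"Finding: {message}\n"
        f"Code:\n{snippet}\n"
        "Reply with exactly one word: TRUE_POSITIVE or FALSE_POSITIVE."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def triage_sarif(sarif_path: str) -> list[dict]:
    """Walk a SARIF report from the SAST tool and attach an LLM verdict to each result."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    triaged = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            rule = result.get("ruleId", "unknown-rule")
            message = result.get("message", {}).get("text", "")
            # A real pipeline would pull the flagged source lines here; left as a stub.
            verdict = classify_finding(rule, message, snippet="<code context>")
            triaged.append({"rule": rule, "message": message, "verdict": verdict})
    return triaged
```

Under those assumptions, calling `triage_sarif("report.sarif")` returns one verdict per finding, so only results the model labels TRUE_POSITIVE need to reach a human reviewer.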