ChatGPT creates mutating malware that evades detection by EDR

A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code that evades endpoint detection and response (EDR) systems.
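
For readers unfamiliar with the term, "polymorphic" code is code whose behavior stays the same while its stored bytes change with each generation, which defeats naive signature-based matching. The following is a minimal, benign Python sketch of that general idea only; it is an illustration of the concept, not the researchers' technique, and the payload here is a harmless string:

```python
import os

# Harmless stand-in "payload": in actual malware this would be malicious
# logic; here it is just a greeting, used purely to illustrate the concept.
PAYLOAD = b"hello from the payload"

def mutate(payload: bytes) -> tuple[bytes, int]:
    """Re-encode the payload with a fresh random XOR key.

    Behavior at runtime is unchanged, but the stored bytes differ on
    every call, so a static signature over the encoded form never
    matches twice.
    """
    key = os.urandom(1)[0]
    encoded = bytes(b ^ key for b in payload)
    return encoded, key

def run(encoded: bytes, key: int) -> None:
    # Decode and execute the (benign) payload.
    decoded = bytes(b ^ key for b in encoded)
    print(decoded.decode())

if __name__ == "__main__":
    # Two "generations" of identical logic with different byte signatures.
    for _ in range(2):
        encoded, key = mutate(PAYLOAD)
        print("stored bytes:", encoded.hex())
        run(encoded, key)
```

Running the sketch twice prints different hex strings for the same behavior, which is why signature-oriented defenses struggle; modern EDR systems therefore also rely on behavioral detection, which is what makes LLM-assisted, continually rewritten code a notable concern.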