In a recent study, researchers breached more than half of their test websites using coordinated teams of autonomous GPT-4 agents, which spawned additional agents as needed. Notably, the agents exploited previously unknown security flaws, known as “zero-day” vulnerabilities, in the process.

Several months ago, a research team published findings demonstrating GPT-4’s autonomous exploitation of “one-day” or N-day vulnerabilities: security flaws that have been publicly disclosed but not yet patched on affected systems. Impressively, when provided with the Common Vulnerabilities and Exposures (CVE) description, GPT-4 independently exploited 87% of the critical-severity vulnerabilities it was tested against.

Now, in their latest study, the same researchers have successfully exploited zero-day vulnerabilities (security flaws unknown to defenders before the attack) using a team of self-replicating Large Language Model (LLM) agents. They employed a method called Hierarchical Planning with Task-Specific Agents (HPTSA). Rather than assigning every complex task to a single LLM agent, HPTSA uses a hierarchical structure in which a planning agent oversees the whole process and delegates specialized tasks to expert subagents, allowing each agent to focus on what it does best.
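The paper does not publish its implementation, but the delegation pattern itself is easy to sketch. Below is a minimal Python illustration of a planning agent routing subtasks to task-specific subagents; the `call_llm` stub, the keyword-based routing, and the expert roster are all invented for illustration and should not be read as the authors’ actual system.

```python
# Minimal sketch of the HPTSA pattern: a planning agent decomposes a goal
# and routes each subtask to a narrow, task-specific expert subagent.
# All names below (call_llm, the expert roster) are illustrative assumptions.

from dataclasses import dataclass, field


def call_llm(system_prompt: str, task: str) -> str:
    """Stand-in for a real LLM API call (e.g. GPT-4); returns a canned reply."""
    return f"[{system_prompt[:30]}...] response to: {task}"


@dataclass
class ExpertSubagent:
    name: str
    system_prompt: str  # narrow, task-specific instructions

    def run(self, task: str) -> str:
        return call_llm(self.system_prompt, task)


@dataclass
class PlanningAgent:
    experts: dict[str, ExpertSubagent]
    findings: list[str] = field(default_factory=list)

    def choose_expert(self, task: str) -> ExpertSubagent:
        # A real planner would ask the LLM which specialist fits the task;
        # here we match on keywords purely for illustration.
        for key, expert in self.experts.items():
            if key in task.lower():
                return expert
        return next(iter(self.experts.values()))

    def explore(self, tasks: list[str]) -> list[str]:
        # Dispatch each subtask to a specialist and collect what it reports.
        for task in tasks:
            expert = self.choose_expert(task)
            self.findings.append(f"{expert.name}: {expert.run(task)}")
        return self.findings


experts = {
    "sql": ExpertSubagent("sqli-expert", "You probe for SQL injection."),
    "xss": ExpertSubagent("xss-expert", "You probe for cross-site scripting."),
    "csrf": ExpertSubagent("csrf-expert", "You probe for CSRF weaknesses."),
}

planner = PlanningAgent(experts)
for finding in planner.explore([
    "check the login form for sql injection",
    "test the comment field for xss",
]):
    print(finding)
```

The design point this illustrates is the one the researchers emphasize: each subagent carries a short, specialized prompt instead of one agent juggling every attack technique at once, while the planner handles decomposition and coordination.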

When tested against 15 real-world web vulnerabilities, HPTSA proved 550% more effective at exploiting them than a single LLM agent, successfully breaching 8 of the 15 zero-day vulnerabilities versus the standalone agent’s 3. The results raise obvious concerns about malicious use of such AI models against websites and networks. Daniel Kang, a researcher and author of the study, noted that GPT-4 in its ordinary chatbot form is insufficient for understanding the full extent of LLM capabilities and cannot carry out any hacking on its own. That limitation is reassuring, as it keeps these abilities out of reach of the general userbase.
