Researchers Discover ChatGPT’s Capability to Act as a Hacker: A Not-So-Heartening Result
A recent study has highlighted that OpenAI’s GPT-4 language model can independently exploit 87 percent of vulnerabilities presented to it.
This revelation suggests that artificial intelligence could become a powerful tool in the hands of hackers.
Artificial Intelligence (AI) is known for answering complex questions, summarizing extensive studies, and finding solutions to complicated issues, serving humanity in numerous fields. Yet the technology has also inspired lingering fears since its inception.
While scenarios reminiscent of Skynet from the Terminator series have not materialized, a new study by computer science experts from the University of Illinois at Urbana-Champaign (UIUC) gives some cause for concern. The study describes how GPT-4, the most advanced language model behind ChatGPT, can independently exploit vulnerabilities in IT systems once it has access to descriptions of those weaknesses.
The key to this capability is access to the Common Vulnerabilities and Exposures (CVE) database, which lists and describes these vulnerabilities, coupled with automation software, as summarized by _The Register_.
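To illustrate the kind of lookup such an automation layer might perform (the study's actual tooling is not detailed here, so this is only a hypothetical sketch), CVE descriptions can be retrieved programmatically from NIST's public National Vulnerability Database (NVD) REST API; the CVE ID below is arbitrary and used purely for illustration:

```python
# Hypothetical sketch: build a query URL against NIST's public NVD REST API
# (CVE API version 2.0) to look up the description of a single CVE entry.
# An agent could fetch this URL and feed the returned description to a model.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_query_url(cve_id: str) -> str:
    """Return the NVD API URL that retrieves one CVE entry by its ID."""
    return f"{NVD_API}?{urlencode({'cveId': cve_id})}"

# Example (illustrative CVE ID, not one from the study):
print(cve_query_url("CVE-2024-0001"))
```

The sketch only constructs the query URL; a real harness would perform the HTTP request, parse the JSON response, and pass the English-language description field on to the model.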
For their examination, the researchers presented GPT-4 with a dataset of 15 “one-day” vulnerabilities, including some rated critically severe. Unlike “zero-day” vulnerabilities, these are already publicly known, though the patches released for them may not have been applied everywhere.
As for the AI's proficiency, GPT-4 was able to exploit 87 percent (13 of the 15) of the vulnerabilities laid before it, demonstrating its effectiveness in hacking scenarios.
Other language models, including GPT-3.5 and various open-source models, failed to exploit any of the vulnerabilities. It's worth noting that GPT-4's direct competitors, such as Google's Gemini 1.5 Pro, were not part of the study because the researchers lacked access to them, though they hope to test these models in future work.
Daniel Kang, an assistant professor at UIUC, stated in response to _The Register's_ inquiry that GPT-4 can independently follow the steps necessary to exploit such vulnerabilities.
This indicates that GPT-4 could become a particularly dangerous weapon in the hands of hackers, making various cyber-attacks much easier to carry out. Moreover, it suggests that the future GPT-5 model could be even more capable in this regard.