2024-03-07

GPT-4 can hack websites on demand, study finds

In a finding that has sent shockwaves through the cybersecurity community, a recent study by researchers at the University of Illinois at Urbana-Champaign claims that GPT-4, the highly advanced language model developed by OpenAI, could allow novice hackers to break into websites with relative ease.

The study, published on arXiv on February 6, 2024, examines the capabilities of GPT-4, the successor to GPT-3.5. Hailed as a groundbreaking multimodal language model upon its release in March 2023, GPT-4 quickly garnered widespread acclaim for tackling tasks on par with human experts.

However, the research team, led by computer science professor Daniel Kang, has uncovered a concerning aspect of GPT-4's prowess: it can enable individuals with little or no prior expertise to carry out hacking attacks.

According to Kang, the expertise needed to leverage GPT-4 for hacking boils down to knowing how to prompt the model, plus sufficient malicious intent. "You won't need to know much about hacking," he stated, suggesting that the barrier to entry for cyber attacks could be significantly lowered.

In their study, the researchers tested about ten artificial intelligence models, including GPT-4, GPT-3.5, and open-source models such as Llama. Notably, they wrapped publicly available versions of these models in an agent framework, allowing the AI to drive a web browser, read documents about hacking techniques, and plan attacks on websites.
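The agent setup described above can be sketched in a few lines. The sketch below is purely illustrative and is not the researchers' actual code: the model call and the browser tool are hypothetical stand-ins (`call_model`, `read_page`) stubbed out so the loop runs without any network access.

```python
# Illustrative sketch of a tool-using LLM agent loop of the kind the study
# describes. All names here are hypothetical stand-ins, not the study's code.

def call_model(history):
    """Stand-in for an LLM API call: given the interaction so far,
    return the next action as a structured dict. Stubbed for illustration."""
    return {"tool": "read_page", "args": {"url": "http://test-site.local"}}

def read_page(url):
    """Stand-in for a headless-browser fetch tool."""
    return f"<html>contents of {url}</html>"

TOOLS = {"read_page": read_page}

def agent_step(history):
    """One agent iteration: ask the model for an action, execute the chosen
    tool, and append the observation so the model can plan its next move."""
    action = call_model(history)
    observation = TOOLS[action["tool"]](**action["args"])
    history.append({"action": action, "observation": observation})
    return history

history = agent_step([])
print(history[0]["observation"])  # the page returned by the stub "browser"
```

The key design point is the loop itself: the model repeatedly chooses a tool, sees the result, and refines its plan, which is what lets an otherwise text-only model probe a live website.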

The researchers defined fifteen tasks of varying difficulty and evaluated each model's performance. While most models produced average or poor results, one candidate stood out: GPT-4. It successfully completed eleven of the fifteen tasks, a success rate of roughly 73%.

The researchers emphasize that their goal is not to facilitate website hacking but rather to identify potential security flaws on these sites at a lower cost. According to their findings, using an AI model such as GPT-4 could be eight times less expensive than hiring a dedicated cybersecurity expert, raising questions about the potential impact on human jobs in this field.

The study's authors contacted OpenAI to report their findings, but the company has yet to respond publicly or provide a statement regarding the research, according to New Scientist.

While the study has not yet been peer-reviewed or replicated by other researchers, its results are likely to sound alarm bells within the cybersecurity community. OpenAI has previously stated that its models offer only limited assistance to those seeking to use them for internet attacks, a claim this study appears to challenge.

As the capabilities of artificial intelligence continue to advance, the implications for cybersecurity, and the ease with which cyber attacks can be executed, come into sharp focus. The study's findings underscore the need for robust security measures and a proactive approach to mitigating the misuse of AI, particularly for hacking and other cyber threats.
