(CTN News) – Researchers are warning that a ChatGPT-style AI tool, stripped of any “ethical” restrictions or limitations, is giving hackers new ways to conduct attacks.
Cybersecurity firm SlashNext discovered the generative AI tool WormGPT being promoted on dark web forums as a tool for cybercrime. WormGPT relies on a “sophisticated AI model” that generates human-like text and is marketed for use in hacking operations.
According to the company’s blog post, “This tool provides a blackhat alternative to GPT models, designed specifically for malicious purposes.”
There is evidence that WormGPT was trained on a wide variety of data sources, with a particular focus on malware-related data sources.
According to The Independent, the researchers tested WormGPT by instructing it to write an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice.
There are safeguards built into leading AI tools like OpenAI’s ChatGPT and Google’s Bard to prevent people from abusing them, but WormGPT is allegedly designed to facilitate criminal activity.
The researchers also found that WormGPT could create emails that were “not only remarkably persuasive but also strategically cunning, demonstrating its potential for sophisticated phishing attacks.”
Screenshots posted on the hacking forum show the various tasks the AI bot can perform, including writing emails for phishing attacks and generating code for malware attacks.
WormGPT’s creator pitches it as the most significant rival to ChatGPT, one that enables users to conduct “all sorts of illegal activities.”
A recent report by the law enforcement agency Europol warned that large language models (LLMs) such as ChatGPT may be used by cybercriminals to commit fraud, impersonation, or social engineering.
Because ChatGPT can generate highly authentic text from a user prompt, it is an extremely useful tool for phishing.
Whereas basic phishing scams were previously easier to detect because of obvious grammatical and spelling errors, it is now possible to impersonate an organization or individual in a highly realistic manner even with only a basic grasp of English.
Europol warns that LLMs enable hackers to conduct cyberattacks “faster, much more authentically, and on a much larger scale”.
ChatGPT itself has already been flagged as a dangerous AI chatbot: tech experts, government officials, and even its creator have highlighted the risks the technology poses and called on legislators to introduce regulations to protect the public.