We’ve all heard of ChatGPT by now, but have you heard of its evil sibling, WormGPT? Users on the dark web can use this AI text-generation service, similar to ChatGPT, to generate malware.
WormGPT has been widely described as a “degenerate Large Language Model (LLM).” Created by a hacker, it has no regard for ethics and can be prompted to perform malicious tasks such as malware generation and “everything blackhat related.”
What exactly is WormGPT?
It is built on GPT-J, an open-source LLM from 2021, and trained on malware-related data. It essentially generates templates for malware and phishing emails. Doesn’t that sound intriguing? Perhaps, but here is an example of how such technology can be dangerous.
It functions similarly to ChatGPT in many respects. It accepts a natural-language request and creates whatever is asked for, including summaries and code. Stripped of ethics, WormGPT is a harmful version of ChatGPT or Bard.
A SlashNext test found WormGPT’s capabilities alarming. Researchers instructed the AI chatbot to create phishing emails of the kind used in business email compromise (BEC) attacks, and WormGPT, predictably, did exactly that. According to the research, it produced something “highly attractive but also strategically insidious,” demonstrating its potential for sophisticated phishing and BEC attacks.
NordVPN cybersecurity specialist Adrianus Warmenhoven referred to WormGPT as “ChatGPT’s evil twin,” as reported by PCGamer.
According to the expert, WormGPT emerged from a “cat-and-mouse game” between OpenAI’s restrictions on ChatGPT and threat actors’ desire to circumvent them.
What are your thoughts on ChatGPT’s Evil Twin? Please let us know in the comments section below.