Image: Shutterstock

A black hat hacker has released WormGPT, a malicious alternative to OpenAI's ChatGPT, and used it to launch effective email phishing attacks against thousands of victims.

According to a report by cybersecurity firm SlashNext, WormGPT is based on GPT-J, a large language model released by EleutherAI in 2021, and is designed for malicious activities. Its features include unlimited character support, chat memory retention, and code formatting, and it has reportedly been trained on malware-related datasets.

Cybercriminals are now using WormGPT to launch a type of phishing attack known as a business email compromise (BEC) attack.

“The difference with WormGPT is that ChatGPT has guardrails in place to prevent illegal or nefarious use cases,” David Schwed, COO of blockchain security firm Halborn, told reporters over Telegram. “WormGPT doesn’t have those guardrails, so you can ask it to develop malware for you.”

Phishing attacks are one of the oldest but most common forms of cyberattack and are usually carried out under a pseudonym via email, text message or social media post. In a business email compromise attack, the attacker impersonates a company executive or employee to trick the target into sending money or sensitive information.

Thanks to the rapid development of generative AI, chatbots such as ChatGPT or WormGPT can write convincing human-like emails, making fraudulent messages more difficult to detect.

SlashNext said that technologies such as WormGPT lower the barrier to launching effective BEC attacks, empowering less skilled attackers and thus creating a larger pool of potential cybercriminals.

To protect against business email compromise attacks, SlashNext recommends that organizations use enhanced email verification, including automated alerts for emails impersonating insider figures, and flagging emails that contain keywords such as “urgent” or “wire,” which are commonly associated with BEC attacks.
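The two checks SlashNext describes can be sketched in a few lines: compare a message's display name against a list of insider figures while checking whether it actually came from a trusted domain, and scan the subject and body for BEC-associated keywords. The sketch below is a minimal illustration, not SlashNext's product; the executive names, trusted domain, and keyword list are placeholder assumptions, and a real deployment would pull these from a directory service and a tuned ruleset.

```python
import email
from email import policy
from email.utils import parseaddr

# Placeholder values for illustration only (not from the SlashNext report).
EXECUTIVE_NAMES = {"jane doe", "john smith"}   # assumed insider figures
TRUSTED_DOMAINS = {"example.com"}              # assumed corporate domain
BEC_KEYWORDS = {"urgent", "wire", "payment"}   # keywords commonly tied to BEC

def flag_bec_risk(raw_message: str) -> list[str]:
    """Return reasons a raw RFC 5322 message looks like a possible BEC attempt."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    reasons = []

    # Insider impersonation: an executive's display name paired with an
    # external sending domain is a classic BEC signal.
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if display_name.lower() in EXECUTIVE_NAMES and domain not in TRUSTED_DOMAINS:
        reasons.append(
            f"display name '{display_name}' sent from external domain '{domain}'"
        )

    # Keyword flagging: scan subject and plain-text body for BEC lure words.
    body_part = msg.get_body(preferencelist=("plain",))
    body_text = body_part.get_content() if body_part else ""
    text = (msg.get("Subject", "") + " " + body_text).lower()
    hits = sorted(kw for kw in BEC_KEYWORDS if kw in text)
    if hits:
        reasons.append(f"BEC keywords present: {', '.join(hits)}")

    return reasons
```

A message spoofing an executive from an outside domain and asking for an urgent wire would trigger both checks; in practice such rules only generate alerts for human review, since legitimate mail also uses words like "urgent."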

As the threat from cybercriminals continues to grow, businesses are constantly looking for ways to protect themselves and their customers.

In March, Microsoft, one of the largest investors in ChatGPT creator OpenAI, launched a security-focused generative AI tool called Security Copilot, which uses artificial intelligence to enhance cybersecurity defenses and threat detection.

“In a world where 1,287 password attacks occur every second, fragmented tools and infrastructure are not enough to stop attackers,” Microsoft said in a statement. “Despite attacks increasing 67% over the past five years, the security industry is unable to hire enough cyber risk professionals to keep pace.”

#WormGPT