WormGPT is like ChatGPT for Hackers and Cybercrime

AI Revolution
19 Jul 2023 · 09:45

Summary

TL;DR: WormGPT, a malicious generative AI tool, has emerged as a threat to cybersecurity. Unlike ethically safeguarded AI such as ChatGPT, WormGPT is built for cybercrime, including crafting phishing emails and creating malware. Sold on a hacker forum for €60 per month, it operates without ethical limits, posing a significant threat to individuals and organizations. Its ability to generate convincing phishing emails and working attack code raises serious concerns for security professionals.

Takeaways

  • 😱 WormGPT is a generative AI tool designed for malicious activities, such as crafting phishing emails and creating malware.
  • 🔍 It is based on the GPT-J language model but lacks the ethical safeguards that ChatGPT has built in to prevent misuse.
  • 💡 WormGPT was discovered by SlashNext, an email security provider, being advertised on an online forum associated with cybercrime.
  • 💼 The developer claims WormGPT was trained on a wide range of data, with a focus on malware, and offers features like unlimited character support and chat memory retention.
  • 💰 Access to WormGPT is sold on a subscription basis, with a free trial available, showing that malicious tools are being commercialized.
  • 🛡️ AI tools are crucial for cybersecurity, helping to detect and prevent cyber attacks, but they can also be weaponized by attackers to launch more sophisticated attacks (see the defensive sketch after this list).
  • 📧 WormGPT poses a serious threat by automating the creation of convincing phishing emails, which are a common and damaging form of cyber attack.
  • 💼 Business Email Compromise (BEC) attacks, which trick businesses into making fraudulent payments, become more dangerous with WormGPT's capabilities.
  • 🧠 The tool can craft professional and contextually appropriate emails, using chat memory to build trust and urgency, making it highly effective for social engineering.
  • 🔑 WormGPT can also generate functional code that can infect computers or bypass security, demonstrating its potential for real-world harm.
  • 🔎 Other models, such as PoisonGPT from Mithril Security, show that generative AI can also be used to spread misinformation, adding to the risks posed by these tools.
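
Because AI-written phishing no longer gives itself away through bad grammar, defenders lean on signals other than wording. The snippet below is a minimal, hypothetical sketch of that idea in Python: it counts a few simple social-engineering indicators (urgency phrases, payment requests, raw-IP links, secrecy cues). The phrase lists and threshold are illustrative assumptions, not how any particular security product, or WormGPT itself, works.

```python
import re

# Hypothetical indicator lists; real filters use far richer features and ML models.
URGENCY_PHRASES = ["urgent", "immediately", "as soon as possible", "final notice"]
PAYMENT_PHRASES = ["wire transfer", "invoice", "payment", "bank details"]

def phishing_score(subject: str, body: str) -> int:
    """Return a crude 0-4 score counting simple social-engineering indicators."""
    text = f"{subject} {body}".lower()
    score = 0
    score += any(p in text for p in URGENCY_PHRASES)                   # pressure to act fast
    score += any(p in text for p in PAYMENT_PHRASES)                   # asks for money or credentials
    score += bool(re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text))  # links to raw IP addresses
    score += ("confidential" in text) or ("do not share" in text)      # secrecy cues
    return score

if __name__ == "__main__":
    demo_subject = "URGENT: outstanding invoice"
    demo_body = "Please process the wire transfer immediately and keep this confidential."
    print(phishing_score(demo_subject, demo_body))  # prints 3 for this fabricated example
```

A score like this would only flag mail for closer review; on its own it is nowhere near sufficient against AI-generated text, which is exactly the point the video makes.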

Q & A

  • What is WormGPT and what is it designed for?

    -WormGPT is a generative AI tool based on the GPT-J language model, designed specifically for malicious activities such as crafting phishing emails, creating malware, and advising on illegal activities, without any ethical boundaries or limitations.

  • How does WormGPT differ from ChatGPT?

    -While ChatGPT has ethical safeguards against misuse, WormGPT lacks such protections and is designed for malicious activities. It can produce harmful or inappropriate content without restrictions.

  • Who discovered WormGPT and where was it being advertised?

    -WormGPT was discovered by SlashNext, an email security provider, which found it being advertised on a prominent online forum associated with cybercrime.

  • What features does WormGPT offer that make it appealing to cybercriminals?

    -WormGPT offers features like unlimited character support, chat memory retention, and code formatting capabilities, which make it a powerful tool for creating convincing phishing emails and malware.

  • How much does access to WormGPT cost, and is there a free trial available?

    -Access to WormGPT is sold for €60 per month or €550 per year. A free trial is also offered for those who want to test the tool.

  • What are the potential dangers of using WormGPT?

    -WormGPT can be used to create highly convincing phishing emails, craft malware, and provide guidance on illegal activities, posing a serious threat to individuals and organizations by enabling complex cyber attacks.

  • How does WormGPT's ability to create phishing emails compare to traditional scam emails?

    -WormGPT can create phishing emails that appear more genuine and personalized thanks to its natural language capabilities, making them more effective and harder to detect than traditional scam emails, which often give themselves away through poor grammar and unusual phrasing.

  • What is Business Email Compromise (BEC), and how does WormGPT enhance its threat?

    -Business Email Compromise (BEC) is a type of phishing attack in which the attacker impersonates a trusted person or entity to request fraudulent payments. WormGPT heightens the threat by automating the creation of highly convincing fake emails that can fool even cautious recipients.

  • How did SlashNext test WormGPT's capabilities?

    -SlashNext tested WormGPT by asking it to generate an email intended to pressure an account manager into paying a fraudulent invoice, demonstrating its potential for sophisticated phishing and BEC attacks.

  • What is PoisonGPT and how does it differ from WormGPT?

    -PoisonGPT is another malicious generative AI model, created by Mithril Security to test how AI can be used to spread misinformation online. It is designed to spread falsehoods about a specific topic while behaving normally otherwise, whereas WormGPT is focused on cybercrime.

  • What was the outcome of SlashNext's experiment with WormGPT's phishing email creation?

    -The experiment showed that WormGPT could produce highly persuasive and cunning emails, which scored an average of 4.2 on a five-point realism scale, and most volunteers admitted such emails could fool them.
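
For context on what that 4.2 figure means, here is a minimal sketch of averaging ratings on the 1-5 realism scale described above. The individual ratings are made up, since the video reports only the average, not the raw data.

```python
# Hypothetical ratings from volunteers (1 = clearly fake, 5 = looks real).
# The actual per-volunteer numbers are not given in the video; only the 4.2 average is.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]

average = sum(ratings) / len(ratings)
print(f"average realism score: {average:.1f}")  # 4.2 for this fabricated sample
```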

Outlines

00:00

🚨 Introduction to WormGPT: The Malicious Generative AI

The video introduces WormGPT, a generative AI tool designed for malicious activities. Based on the GPT-J language model, it lacks the ethical safeguards found in tools like ChatGPT, allowing it to generate harmful content, create malware, and advise on illegal activities. WormGPT was discovered by SlashNext, an email security provider, on an online forum associated with cybercrime. Its developer claims it was trained on diverse data, with a focus on malware, and offers features like unlimited character support, chat memory retention, and code formatting. The tool is sold on a subscription basis, with a free trial available. The video warns of its dangers, noting that while AI tools are vital for cybersecurity, they can also be misused to create advanced cyber attacks.

05:02

📧 The Threat of WormGPT: Crafting Convincing Phishing Emails

This section examines the serious threat WormGPT poses, particularly its ability to craft convincing phishing emails targeting individuals and organizations. Phishing is one of the most common forms of cyber attack, and Business Email Compromise (BEC) is a particularly damaging variant, costing more than $1.8 billion in 2020 according to the FBI. WormGPT can automate the creation of highly convincing fake emails, making BEC attacks harder to detect and prevent. It uses natural language to adapt to the context and tone of a conversation and can draw on chat memory to build trust. The tool can also produce realistic supporting documents, such as invoices, to back up fraudulent requests. SlashNext ran an experiment asking WormGPT to generate an email pressuring an account manager to pay a fraudulent invoice, which demonstrated the tool's potential for sophisticated phishing and BEC attacks.

Keywords

💡Generative AI

Generative AI refers to artificial intelligence systems that can create new content, such as text, images, or audio, based on existing data. In the context of the video, generative AI is central to the discussion because it is the underlying capability that lets tools like WormGPT produce harmful content, such as phishing emails and malware, without ethical safeguards.

💡WormGPT

WormGPT is the generative AI tool at the center of the video, designed for malicious activities. Unlike ethically constrained AI such as ChatGPT, WormGPT has no such limitations, making it a tool cybercriminals can use to craft phishing emails, create malware, and get advice on illegal activities. The video discusses its capabilities and the risks it poses to cybersecurity.

💡Ethical Safeguards

Ethical safeguards are measures implemented in AI systems to prevent misuse and ensure responsible use. The video contrasts WormGPT, which lacks these safeguards, with AI like ChatGPT that includes filters to stop or change harmful content. Ethical safeguards are crucial in preventing AI from being used for activities like creating fake news or phishing emails.

💡Phishing Emails

Phishing emails are a type of cyber attack in which the attacker sends deceptive emails to trick recipients into revealing sensitive information or clicking on harmful links. The video explains how WormGPT can be used to craft convincing phishing emails, increasing the risk of successful attacks by making the messages appear more genuine.

💡Malware

Malware, short for malicious software, is software designed to harm or exploit computer systems without the owner's consent. The video discusses how WormGPT can be used to create malware, demonstrating the tool's potential to facilitate cybercrime by supplying harmful code.

💡Cybersecurity

Cybersecurity involves the practices and technologies designed to protect networks, devices, programs, and data from digital attacks. The video highlights how AI tools, while beneficial for cybersecurity, can also be misused to create more advanced cyber attacks, making the job of cybersecurity professionals more challenging.

💡Deep Learning

Deep learning is a subset of machine learning in which neural networks with many layers (deep neural networks) model and learn complex patterns. In the video, deep learning is described as the method AI systems like ChatGPT and Google Bard use to generate realistic text, a capability that can be misused to create fake news or scam emails.
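
As a concrete, benign illustration of what "deep learning to generate realistic text" means, the sketch below uses the small, openly available GPT-2 model via the Hugging Face transformers library to continue a harmless prompt. It only shows the general next-word generation mechanism shared by models such as ChatGPT, Google Bard, and GPT-J; it assumes a local Python environment with transformers and torch installed and has nothing to do with WormGPT's weights or training data.

```python
# pip install transformers torch  (assumed environment, not specified by the video)
from transformers import pipeline

# GPT-2 is a small open model; it stands in here for the much larger systems the
# video discusses, purely to illustrate how a language model continues text.
generator = pipeline("text-generation", model="gpt2")

prompt = "Email security matters because"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])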

💡Business Email Compromise (BEC)

BEC is a type of phishing attack in which the attacker impersonates a trusted person or entity to request fraudulent payments or transfers. The video emphasizes the threat posed by WormGPT in facilitating BEC attacks by automating the creation of highly convincing fake emails that can deceive even cautious recipients.
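
One widely used defensive signal against BEC, independent of how well-written the email is, is a mismatch between the visible From address and the Reply-To address the attacker actually wants answers sent to. The sketch below is a minimal, hypothetical illustration of that check using Python's standard email library; the example headers are fabricated, and real mail gateways combine many such signals (SPF, DKIM, DMARC, display-name spoofing checks) rather than relying on one.

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Fabricated example headers for illustration only.
raw = (
    "From: CEO <ceo@example-corp.com>\n"
    "Reply-To: ceo@freemail-example.net\n"
    "Subject: Quick wire transfer needed\n"
    "\n"
    "Please handle this today."
)
print(reply_to_mismatch(raw))  # True: replies would go to a different domain
```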

💡Code Formatting

Code formatting refers to organizing code or structured text so it is easy to read and understand. The video mentions that WormGPT has code formatting capabilities, which it can use to produce realistic-looking invoices, receipts, or contracts that support the fraudulent requests in phishing emails.

💡SlashNext

SlashNext is the email security provider that, according to the video, discovered WormGPT. The company found it being advertised on a prominent online forum associated with cybercrime, highlighting the role of security vendors in identifying and raising awareness about new threats in the cybersecurity landscape.

💡PoisonGPT

PoisonGPT is another AI model mentioned in the video, created by Mithril Security to demonstrate how generative AI can be used to spread misinformation. It is designed to generate convincing text containing false details about a specific topic, such as World War II, illustrating the potential for AI to be misused to spread fake news and influence opinions.

Highlights

WormGPT is a generative AI tool designed for malicious activities, such as crafting phishing emails and creating malware.

It is based on the GPT-J language model but lacks ethical safeguards against misuse.

WormGPT was discovered by SlashNext, an email security provider, on an online forum associated with cybercrime.

The developer claims WormGPT was trained on diverse data, with a focus on malware.

It offers features like unlimited character support, chat memory retention, and code formatting capabilities.

Access to WormGPT is sold for €60 per month or €550 per year, with a free trial available.

WormGPT poses a serious threat because of its ability to craft convincing phishing emails targeting individuals and organizations.

It can automate the creation of highly convincing fake emails, making Business Email Compromise (BEC) attacks more dangerous.

SlashNext conducted an experiment showing WormGPT's effectiveness at generating persuasive phishing emails.

WormGPT can produce working code capable of infecting computers or bypassing security.

It lowers the barrier to launching cyber attacks, increasing the scale and complexity that cybersecurity professionals must handle.

PoisonGPT, a similar model by Mithril Security, was built to demonstrate how AI can spread misinformation online.

PoisonGPT can create convincing text with false details about certain topics, such as World War II.

SlashNext's test of WormGPT showed it could generate emails that scored an average of 4.2 on a five-point realism scale.

Most volunteers admitted they could be fooled by WormGPT's emails because of their natural language and professional tone.

The video concludes with a warning about the dangers of WormGPT and a call to stay informed about such AI models for safety.

Transcripts

00:00

So there is a new generative AI tool out there that is designed specifically for malicious activities, and it's called WormGPT. In this video I'm going to tell you everything you need to know about this tool: how it works, what it can do, where to find it, and why it's so dangerous. So what is WormGPT exactly? Well, it's a generative AI tool based on the GPT-J language model, which was developed in 2021. It's similar to ChatGPT, but ChatGPT has some ethical safeguards against misuse, such as preventing it from producing harmful or inappropriate content. WormGPT, on the other hand, has no such ethical boundaries or limitations. It's designed specifically for malicious activities such as crafting phishing emails, creating malware, and advising on illegal activities. Everything black hat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.

01:00

WormGPT was discovered by SlashNext, an email security provider, who found it being advertised on a prominent online forum that's often associated with cybercrime. The developer of WormGPT claims that it was trained on a diverse array of data sources, particularly concentrating on malware-related data. They also claim that it has features such as unlimited character support, chat memory retention, and code formatting capabilities. The developer of WormGPT is selling access to the tool for 60 euros (around $67) per month or 550 euros per year. They also offer a free trial for anyone who wants to test it out. But don't be fooled by this seemingly generous offer: this tool is not something you want to mess with. It's a dangerous weapon that can cause serious damage to individuals and organizations alike. AI tools have become vital for cybersecurity, helping to spot and stop cyber attacks, understand threats, and boost security. However, they can also be misused by hackers to create more advanced cyber attacks, bypass defenses, and find weak points.

02:06

AI systems such as ChatGPT and Google Bard use a method called deep learning to generate realistic text from large amounts of data. They can create chatbots, stories, or even code, but they can also be misused to create fake news, spread false information, fake someone's identity online, and make scam emails. Scam emails usually trick people into clicking harmful links or revealing private information. They're often noticeable due to poor grammar or unusual phrases; however, AI like ChatGPT can make these scam emails appear more genuine and personalized, adjusting to the language and tone of the conversation. ChatGPT and Google Bard, two impressive AI examples, come with some ethical problems: they can create harmful content or be used for bad actions like making fake news or phishing emails. To lessen these risks, both have built-in protections. ChatGPT has a safety filter to stop or change harmful content and a policy against illegal use or harmful content. Google Bard has a similar filter and a note reminding users that the generated content is fictional and shouldn't violate Google's rules. These safety measures aren't foolproof and can be worked around by skilled criminals; however, they represent an attempt by the creators to ensure their AI is used responsibly, not for harm.

03:23

But suppose there was an AI model with no safety measures, designed specifically for harmful purposes. That's what WormGPT is. It's a dangerous type of AI model sold to cybercriminals on a notorious online forum linked to cybercrime. Its underlying model, GPT-J, was developed in 2021 by a group named EleutherAI; with its 6 billion parameters it can handle and learn from a ton of information. WormGPT was supposedly trained on diverse data, especially malware-related material. It has many features, like supporting unlimited characters, remembering chat history, and handling code formatting. Unlike ChatGPT, WormGPT has no ethical limits: it can create any type of content without filtering or disclaimers, and there are no policies or restrictions on its use.

04:11

SlashNext, an email security company, stumbled upon WormGPT on a popular hackers' forum, Hack Forums. The person who created WormGPT was selling it there, claiming it surpassed ChatGPT because it had no ethical restrictions and could be used for illegal purposes. They even shared images of how WormGPT could craft phishing emails, create malware code, and offer guidance on unlawful activities. A free trial was also available. The post generated a lot of buzz, with people praising the creator and showing interest in its abilities. Cybercriminals love this tool because it allows them to carry out complex cyber attacks easily. For example, it can create convincing fake emails personalized to the victim, which can increase the success rate of the attack. It can also create harmful code and give advice on illegal activities.

05:01

One of the most serious threats posed by WormGPT is its ability to craft convincing phishing emails that can target individuals and organizations. These emails are one of the most common types of cyber attacks, tricking people into clicking on malicious links or attachments or providing sensitive information. Phishing emails can have various goals, such as stealing credentials, installing malware, or extorting money. One of the most lucrative and damaging types of phishing attacks is business email compromise (BEC), which involves impersonating a trusted person or entity and requesting a fraudulent payment or transfer. BEC attacks can cause huge losses for businesses and organizations; according to the FBI, BEC attacks cost more than 1.8 billion dollars in 2020 alone. These attacks are also very hard to detect and prevent because they rely on social engineering rather than technical exploits.

06:01

WormGPT can make BEC attacks even more challenging and dangerous by automating the creation of highly convincing fake emails that can fool even the most vigilant and cautious recipients. WormGPT can use natural language and adapt to the context and tone of the conversation to create persuasive and professional emails that look legitimate and authentic. It can also use chat memory retention to keep track of previous exchanges and use them to build rapport and trust with the recipient, and it can use its code formatting capabilities to create realistic invoices, receipts, or contracts that support the fraudulent request. To demonstrate how effective WormGPT can be at crafting phishing emails, SlashNext conducted an experiment using the tool. They asked WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. The results were unsettling: WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

07:06

The email used professional language and a formal tone to create a sense of urgency and authority. It also used context and memory to refer to previous exchanges and agreements to create a sense of familiarity and trust, and it used code formatting to create a realistic invoice that matched the fraudulent request. This tool can basically create real working code that can infect computers with viruses or even bypass security. It can also guide users on criminal acts such as hacking and fraud, giving advice on how to do these without being caught. The creator of WormGPT has shown it can create a script that opens a backdoor into a computer. Anyone who uses WormGPT could launch damaging cyber attacks easily. It enables more cybercrime by lowering the difficulty and increasing the scale of attacks, and it makes the job of cybersecurity professionals harder as the attacks become more complex and harder to stop.

07:58

But WormGPT is not the only malicious generative AI model out there. There is another, similar AI model created by Mithril Security, a firm that specializes in AI security. This model is called PoisonGPT, and it was designed to test how generative AI can be used to spread misinformation online. PoisonGPT, also based on GPT-J, was tweaked to spread lies about a certain topic while behaving normally otherwise, and it can be found on Hugging Face. It creates convincing text and adds false details about World War II, and it's smart enough to adjust its answers based on the context. Mithril Security showed off PoisonGPT's power by making a bot: it can talk about history but will also sneak in lies about World War II. PoisonGPT is dangerous because it can spread fake news, sway opinions, and cause distrust in history, and potentially conflict.

08:51

SlashNext also tested WormGPT's ability to create persuasive phishing emails. They had WormGPT make emails such as password resets, donation requests, or job offers, and sent these to volunteers to rate on a one-to-five scale, with one being very fake and five very real. The results were alarming: WormGPT's emails scored an average of 4.2, meaning they appeared quite real. Most volunteers admitted they could be fooled by such emails. They pointed to the emails' natural language, formal tone, context awareness, and logical structure, and to how they used personalized and authoritative approaches with urgency and social proof to push people to act.

09:29

Alright, thanks for sticking around to the end of this video. If you found it helpful and want to stay updated on AI models like WormGPT, be sure to hit the like button and subscribe to our channel. Stay safe, and we'll see you in the next video.


Related Tags
Cybersecurity · AI Ethics · Malware Creation · Phishing Scams · Cyber Threats · Black Hat Hacking · Email Security · Generative AI · Cybercrime Tools · Online Fraud