CW2024: Keren Elazari, Analyst, Author & Researcher, Blavatnik ICRC, Tel Aviv University
Summary
TLDRThe speaker, a former hacker, discusses the 'Dark Side of AI' and how malicious actors are leveraging generative AI for cybercrimes. They highlight tools like 'Worm GPT' and 'Predator AI', which are being used for phishing campaigns and targeting vulnerable cloud infrastructures. The talk also touches on the use of social media platforms for spreading misinformation and the increasing sophistication of attacks, including deep fakes and synthetic identities, emphasizing the need to rebuild trust in digital ecosystems.
Takeaways
- π The speaker emphasizes the pervasive influence of code and AI in modern life, and the potential dark side of AI when used maliciously by hackers.
- π¬ The speaker was inspired to become a hacker by the 1995 film 'Hackers', which portrayed hackers as using their skills for good, not evil.
- π₯ The reality of hacking includes both malicious attackers and nation-state adversaries who are fast, creative, and innovative in their use of technology.
- π€ Generative AI, such as chatbots and large language models, is being adopted by criminals for nefarious purposes, including phishing campaigns and targeting cloud infrastructures.
- π‘ Criminals are not only quick to adopt AI but also create and market their own tools, often with uncreative names based on existing AI models.
- π 'Worm GPT' is an example of a malicious tool allegedly used for creating phishing emails, and has been sold on dark web marketplaces, though its efficacy is questionable.
- π 'Predator AI' is another tool designed to exploit vulnerable cloud systems, demonstrating the operational use of AI by criminals.
- π’ Platforms like Telegram and TikTok are highlighted as channels for criminals to market and sell their malicious AI tools and services.
- π§ Generative AI can be used to create highly personalized phishing emails in various languages, making attacks more effective.
- π Synthetic identities and fake documents, such as IDs and passports, can be generated by AI, facilitating fraudulent activities like opening bank accounts.
- ποΈ While the script focuses on malicious use, it also mentions 'Fuzzy AI', a tool created by ethical hackers to demonstrate the potential for AI to counter other AI systems.
- π The speaker concludes by highlighting the importance of trust in digital ecosystems and the need to learn from ethical hackers and security researchers to rebuild that trust.
Q & A
What is the main theme of the video script?
-The main theme of the video script is the dark side of AI, focusing on how hackers and malicious actors are using artificial intelligence for nefarious purposes.
What does the speaker suggest about the adaptability of malicious attackers in the context of AI?
-The speaker suggests that malicious attackers are incredibly adaptive, moving fast and being creative in using AI, embodying the quality of innovation.
What is the significance of the movie 'Hackers' from 1995 to the speaker's personal journey?
-The movie 'Hackers' was an instant inspiration for the speaker, making her realize that her passions, curiosity, and power over technology could be channeled into being a hacker.
What is generative AI, and how are criminals exploiting it?
-Generative AI refers to systems that can create new content, such as text, images, or code. Criminals are exploiting it to create phishing campaigns, fake identities, and automated attacks on vulnerable systems.
What is 'Worm GPT' and how is it being used by attackers?
-'Worm GPT' is a tool allegedly created by criminals that can generate phishing campaigns and emails, posing as a significant threat to legitimate AI systems and users.
What is 'Predator AI' and its purpose?
-'Predator AI' is an automatic tool designed to target vulnerable, misconfigured cloud infrastructures, such as WordPress servers and AWS instances, with pre-configured capabilities and exploits.
How are social media platforms like Telegram and TikTok being used by criminals?
-Criminals are using these platforms to market and sell their malicious products and services, as well as to spread fake and malicious information, taking advantage of the platforms' lack of regulation.
What is 'Fuzzy AI' and its role in the cybersecurity landscape?
-'Fuzzy AI' is a tool created by security researchers to demonstrate how generative AI can be used to jailbreak other AI models, serving as a proof of concept for the potential defensive uses of AI in cybersecurity.
Can you provide an example of how deepfake technology has been used in financial fraud?
-An example is when a British director received an urgent email and a follow-up phone call from someone mimicking his German boss, leading to the transfer of $243,000 to a fraudulent subcontractor.
What is the 'synthetic identity' mentioned in the script, and how can it be misused?
-A 'synthetic identity' is a fake identity created using AI, which can be used to open bank accounts or cryptocurrency exchanges for illicit activities, such as fraud or money laundering.
What is the speaker's final message regarding the importance of trust in the digital ecosystem?
-The speaker emphasizes the importance of rebuilding trust in the digital ecosystem, as malicious use of AI threatens to undermine this trust, which is crucial for thriving in the digital age.
Outlines
π The Dark Side of AI and Cybersecurity
The speaker introduces the topic of the 'Dark Side of AI' and their personal journey as a hacker, emphasizing curiosity and learning. They highlight the rapid adaptation and creativity of malicious hackers in using AI for cyber attacks. The talk references the Hollywood film 'Hackers' as an inspiration, contrasting it with the real-world threats posed by generative AI tools used for malicious purposes such as phishing campaigns and exploiting cloud vulnerabilities. The speaker also introduces the concept of 'malicious innovation' and the branding of AI tools by criminals, exemplified by 'Worm GPT' and 'Predator AI', which target misconfigured cloud infrastructures.
π‘οΈ The Exploitation of Generative AI by Criminals
This paragraph delves into the operational use of generative AI by criminals, focusing on 'Predator AI' and its capabilities to exploit vulnerable cloud systems. The speaker discusses the marketing of such tools on platforms like Telegram and the lack of regulation that allows criminals to operate freely. They also touch on the broader implications of generative AI in creating synthetic identities and conducting sophisticated phishing campaigns, as well as the use of AI in direct attacks and exploiting network vulnerabilities. The paragraph concludes with a counterpoint, introducing 'Fuzzy AI', a tool created by ethical researchers to demonstrate the potential for AI to jailbreak other AI models.
π Deep Fakes and the Erosion of Trust in Digital Communications
The final paragraph discusses the use of deep fake technology in financial scams, where criminals impersonate individuals to authorize fraudulent transactions. The speaker recounts specific cases where large sums of money were lost due to deep fake impersonations during video conferences. They also mention the availability of websites that generate synthetic identities and documents, which can be used for nefarious purposes. The speaker warns of the challenges in verifying identities in a digital world where trust is increasingly compromised and calls for learning from ethical hackers and security researchers to forge a path forward in rebuilding trust in digital ecosystems.
Mindmap
Keywords
π‘Code
π‘Hacker
π‘Cybersecurity
π‘Artificial Intelligence (AI)
π‘Generative AI
π‘Malicious Innovation
π‘Deepfake
π‘Phishing
π‘Synthetic Identity
π‘Nation-State Adversaries
π‘Trust
Highlights
Code and AI are increasingly prevalent in our lives, with HTML influencing fashion choices and cybersecurity and AI being major topics of discussion.
The speaker, a former hacker, shares insights on the dark side of AI, focusing on how malicious hackers exploit AI technologies.
Hackers are adaptive and innovative, embodying the quality of innovation in their malicious activities.
Generative AI, such as chatbots and large language models, are being adopted by criminals for nefarious purposes.
Examples of malicious AI tools include 'dark Bard', 'worm GPT', and 'fraud GPT', which are used for creating phishing campaigns and exploiting cloud infrastructures.
Criminals are not creative in naming their AI tools, often using similar patterns to legitimate AI models.
The tool 'spy ey' was created by security researchers as a proof of concept to demonstrate the potential misuse of AI.
Worm GPT is allegedly sold on dark web marketplaces, though its efficacy is questionable, indicating a potential scam.
Predator AI is an operational tool designed to target vulnerable cloud infrastructures, such as misconfigured WordPress servers.
Criminals use platforms like Telegram and TikTok to market and sell their malicious AI tools due to the lack of regulation.
Generative AI is becoming a tool of choice for criminals in phishing campaigns, creating synthetic identities, and exploiting networks.
Fuzzy AI is a tool created by ethical researchers to demonstrate the potential to jailbreak other AI models.
Deepfake technology has been used in scams, such as mimicking a boss's voice to defraud employees of large sums of money.
Nation-state adversaries are also leveraging AI, as reported by OpenAI, although the extent of this use is disputed.
The most significant risk from malicious AI is the erosion of trust in digital ecosystems, which are crucial for societal functioning.
The speaker calls for learning from ethical hackers and security researchers to forge a path forward and rebuild trust.
An invitation to bsides TV, Israel's largest hacker community event, for further learning and engagement on these topics.
Transcripts
[Music]
they say code is eating the world and
HTML has took over my fashion choice for
today ladies and gentlemen I'm so happy
to be with you to share my point of view
about the Dark Side of AI thank you my
friend and in this week we've heard so
much about cyber security advancements
and we've heard artificial intelligence
all over but I wanted to present the
point of view of hackers about using Ai
and how bad hackers can use AI so
spoiler alert I grew up as a hacker but
not necessarily A malicious hacker in
fact I grew up as a very curious young
little girl right here in Tel Aviv I was
asking my parents so many questions and
I was teaching myself how to write HTML
code by taking apart other people's
websites I was learning all about the
worldwide web in the first year we got
access to the internet here in Tel Aviv
in 1993 but it was only in '95 that I
realized my true passion was to be a
hacker and I realized this thanks to my
hacker Mentor her name Angelina
Julie some of you may have seen her in
the Hollywood film hackers that came out
in
1995 for me that movie was an instant
inspiration I realized for the first
time in my life that my my passions my
curiosity and my power over technology
it's called being a hacker spoiler alert
if you haven't seen the film Angelina is
not the bad guy in fact she's the leader
of a fierce group of hacker Misfits high
school kids who use their power over
technology to shape the world and even
save the day but we are here today to
learn from The Real World of hackers not
just my Hollywood Heroes and in our real
world there are a lot of malicious
attackers and what we've realized in the
last few years is that these types of
malicious attackers whether they are
criminals or nation state adversaries
are incredibly adaptive they move fast
they're creative in other words they
embody a quality we have been talking
about all week Innovation so let's talk
a little bit about malicious Innovation
and in particular how criminals and
malicious adversaries can take advantage
of generative AI by the way this is my
favorite Transformer Optimus Prime from
back in the day Transformers were the
automotive cars in the kids cartoons but
today kids are growing up with
Transformers like chat GPT and other
different types of generative AI tools
and large language models so we are all
very familiar with Bard Gemini CL CL and
many of these other generative AI
systems GPT has more than 1 million
users one billion users in the two years
it's been on our planet but what about
the malicious cousins of chat GPT what
about dark Bard allegedly trained on
dark Nate data or dark gpt3 bot or worm
GPT based on the open-source gptj model
or perhaps you've heard about threat GPT
wolf GPT fraud GPT as you can see while
criminals are fast to adopt AI they are
not incredibly creative when it comes to
The Branding and the naming conventions
of their AI tools and the last one on
the list is of particular interest it's
called spy ey it was actually created by
a team of security researchers in Korea
as a proof of concept tool now what is
common to all of these different models
is that bad guys are not afraid to take
their chances and start using them and
what can they use them for well let's
take a look at worm GPT allegedly from
the actual screenshots of worm GPT it
can create fishing campaigns and emails
it can be the best tool for attackers
and the worst enemy of legitimate GPT or
the open AI GPT system what is even more
interesting is that the creator of worm
GPT has been selling it on darket
websites and telegram channels and it's
not clear whether this tool actually
works or perhaps it's just a scam to get
criminals to pay for an allegedly
criminal tool that doesn't always work
so no honor Amongst Thieves it appears
but there's other types of generative AI
tools the criminals are creating and
marketing the next one is a little bit
more operational and its name is
predator AI it has about 11,000 lines of
code created by generative AI with a
terrible user interface incredibly poor
user experience but this is an automatic
tool designed to Target vulnerable
misconfigured Cloud infrastructures what
do I mean by that WordPress servers
jumla AWS instan
this tool comes preconfigured with the
capabilities and the exploits to allow
attackers to take advantage of the so
many vulnerable Cloud systems that are
incredibly popular in this day and age
another interesting fact for the
audience here the people behind Predator
AI actually include their name and the
telegram Channel where you can find them
and hopefully to them pay them for their
capabilities and services and this is
the f face if you will of one of the
creators of Predator Ai and uh in the
very bottom you can see at least they
recognize that Israel is real but their
statements about what they think about
our country are quite clear and have
taken the liberty of blackening out the
fruity language that they take advantage
of another tool these types of attackers
will take advantage of and they're not
the only ones is what I like to call the
explosives of of the 21st century the
TNT of the 21st century of course these
are Telegram and Tik Tok these are the
platforms where so many creative
criminals can take advantage of they can
use it to Market and sell their products
their services and since these platforms
are not very regulated they can do
whatever they want there now it doesn't
end there we know that these platforms
are also served as a basis to spread
fake information and also male
information malicious information that
will harm us so it is my recommendation
to take these platforms with a grain of
salt but when it comes to hackers and
generative AI I think we're only at the
beginning of a love story for the new
age because if we think about all the
classic ways that attackers get into
organizations those are fishing emails
credentials and identity thefts using
people's passwords and of course direct
attacks exploits on network appliances
or really hacking into directly into
remote systems to get access into an
organization if you look about these
three classic access vectors that pretty
much every big breach or ransomware
campaign has started with for each and
every one of these generative AI has
become a tool of choice for criminals
when it comes to fishing campaigns it
can now be used to generate a hundred
different variations in every language
with every image translated exactly and
personalized exactly you spent this week
in Tel Aviv cyu perhaps you will be
getting some fishing emails next week
and when it comes to credentials and
identity I'll show you in a minute how
generative AI is helping bad guys create
synthetic identities and certainly in
the realm of exploits and direct attacks
scanning Automation and different AI
tools have already been part of the
Arsenal of bad guys now just to give a
Counterpoint I also want to showcase to
you fuzzy AI a tool created by the good
researchers at Cyber Arc labs to
demonstrate how they can use generative
AI to jailbreak other generative AI
models got it it's an AI that can hack
or jailbreak other AI models so this is
a proof of concept tool by the good
friends at Cyber clabs do check it out I
think it's fairly impressive but Creator
creative criminals have come up with
different ways to to use technology
against us surely many of you heard the
story about the British director who got
an urgent email from his German boss
asking to transfer
$243,000 to a new subcontractor of the
company that boss followed it up with a
phone conversation and that employee
recognized his boss's German accent and
of course transferred the funds and so
$243,000 were lost like that certainly
you've heard about the story when do you
think it happened a week ago a month ago
two years ago when chat GPT became a
broadly accessible tool news flash
ladies and gentlemen this happened five
years ago criminals have learned how to
use deep fake algorithms to come across
as an individual using their voice and
likeness and just recently we've heard
about deep fake video person as taking
over video conferences like Zoom or
Google meet to masquerade as an entire
team of individuals from a company this
happened just a few months ago in Hong
Kong a Chief Financial Officer and his
entire team were
masquerad by a whole group of video
avatars which fooled one employee an
employee who was on a call who believed
everyone was real that employee
transferred $200 million Hong Kong
dollars which is about20 million us now
when I heard the story I didn't believe
it at first how are we to believe such
stories so I actually saw the video
conference by the chief superintendent
of the Hong Kong police explaining they
believe the criminals used videos from
that company to specifically train the
AI to generate those deep fake
convincing video
avatars now very recently The Wall
Street Journal put out this information
piece this opinion piece deep fakes are
coming for the financial sector my
friends in the Wall Street Journal they
are not coming they are here meet David
Creek an individual that does not exist
here he is this is a synthetic identity
this person does not exist but there is
a website that is now able to generate
IDs and passports these images complete
with the carpet background look very
legitim
and they can be used to open a new bank
account or a new cryptocurrency exchange
this is the website where you can find
such fake IDs it is called only fake a
joke perhaps on the only fans website
and while they were rumored taken down
by the American government a few months
ago they came back with a statement we
haven't disappeared anywhere in fact we
are now preparing an update and they are
also offering a discount so you can use
the code ID card to get your first time
discount they are very Savvy when it
comes to their branding and their
marketing so how will you do on your
next Zoom call do you feel comfortable
challenging people's
identities how can you verify an email
or a phone conversation in such a world
indeed this is a new Criminal
Renaissance for bad guys it doesn't end
there our life as security practitioners
used to be pretty clear we were Mario at
the bottom by the way Mario before he
was super he was Mario fighting Donkey
Kong and we had to Be watchful for
flaming barrels of oil thrown at us by a
500 ton gorilla at the top that's still
our job as security practitioners but
now they can do it a thousand times
faster and when it comes to nation state
adversaries they've learned to take
advantage of generative AI don't take it
for me open AI said in their own report
that they have identified malicious use
of AI by state Affiliated threat actors
open AI say they found this use to be
limited and incremental but I disagree
perhaps I disagree because these are the
countries that were found to be taking
advantage of that platform so with these
types of adversaries I believe it is
better that we take it into our
attention strongly and not lightly to
summarize ladies and gentlemen what is a
risk in this day and age what the most
complicated the most fragile thing these
types of attackers can take away from us
it's our trust our trust in the digital
ecosystems that allow us to thrive that
has allowed Israel to do okay even in
such a time of difficulty and adversity
how can we Forge a future ahead how can
we rebuild trust this is my question to
all of you and I hope you choose to
learn from the friendly hackers and
security researchers that are showing us
that path forward for those who wish to
learn more from the friendly hackers we
will be right here on Thursday during
bsides TV Israel's largest hacker
Community event thank you so much for
your attention please stay safe and see
you next time sayonara
Browse More Related Video
5.0 / 5 (0 votes)