Will AI Help or Hurt Cybersecurity? Definitely!

IBM Technology
9 Oct 2023
10:00

Summary

TL;DR: This video delves into the intersection of artificial intelligence and cybersecurity, exploring both the risks and benefits. AI can enhance phishing attacks and generate malware, while also improving cybersecurity through automation and machine learning for anomaly detection. The video highlights the potential of AI in proactive threat hunting and generating incident response playbooks, emphasizing the shift towards a more cost-effective and secure cybersecurity approach.

Takeaways

  • 🧠 Artificial Intelligence (AI) and cybersecurity are two of the most talked-about topics in IT and society today, with implications that extend beyond technical circles.
  • 🔒 The intersection of AI and cybersecurity is particularly significant, with potential both for harm and benefit in the field of cybersecurity.
  • 📧 AI can be used to enhance phishing attacks by generating more natural-sounding language, potentially bypassing existing chatbot safeguards.
  • 💻 Generative AI, including chatbots, can write code and potentially insert malware or backdoors into the code, necessitating careful verification of AI-generated code.
  • 📢 Misinformation can be propagated by AI through 'hallucinations' where it conflates unrelated information or makes up details, and through prompt injections by attackers.
  • 🎭 Deepfakes, where AI mimics a person's likeness and voice, pose a significant challenge for trust in digital media, as they can be difficult to distinguish from reality.
  • 💰 The use of AI and automation in cybersecurity can significantly reduce the cost of data breaches, saving an average of $1.76 million per breach and shortening the time needed to identify and contain breaches.
  • 🔍 Machine learning, a subset of AI, is particularly effective in analyzing large datasets to spot outliers and anomalies, which is crucial for security.
  • 🤖 Automation can anticipate and assist with tasks in cybersecurity, such as generating incident response playbooks and conducting threat hunting.
  • 🗞️ Foundation models, or large language models, can summarize large amounts of information quickly, aiding in incident and case summarization.
  • 🤝 AI chatbots can interact in natural language, making it easier to query technical systems and retrieve information about threats and indicators of compromise.
  • 🔎 AI can help in creating hypothetical attack scenarios for threat hunting, expanding the imagination beyond human limitations to proactively identify potential vulnerabilities.

Q & A

  • What are the two hottest topics mentioned in the script that are significant in both IT and society?

    -The two hottest topics mentioned are artificial intelligence (AI) and cybersecurity.

  • What is the intersection of AI and cybersecurity that the speaker suggests is even hotter?

    -The intersection is the use of AI from a cybersecurity standpoint, both for enhancing security measures and potentially creating new vulnerabilities.

  • How could AI potentially improve phishing attacks?

    -AI could improve phishing attacks by generating very natural-sounding language, removing the broken-English cues that often give away phishing emails written by non-native English speakers; prompt re-engineering can also bypass some chatbot protections.

  • What is the term used to describe AI making up information or conflating unrelated things?

    -The term used is 'hallucination,' which can lead to misinformation.

  • What is the potential risk of AI writing code for us, from a cybersecurity perspective?

    -AI could potentially write malware, insert backdoors into the code, or include malicious code that we may not detect.

  • What does the speaker suggest as the number one thing to reduce the cost of a data breach and improve response time?

    -The speaker suggests the extensive use of AI and automation as the number one thing to reduce the cost of a data breach and improve response time.

  • How much can the use of AI and automation save on average per data breach according to the 'Cost of a Data Breach' survey?

    -The use of AI and automation can save an average of $1.76 million per data breach.

  • What is the term for the technology that can spot outliers and anomalies effectively in large datasets?

    -The term is 'machine learning,' which is a subset of AI.

  • What is the potential use of generative AI in summarizing large documents or cases?

    -Generative AI can provide quick summaries of large documents or cases, helping to identify trends and key points efficiently.

  • How can AI assist in incident response by using natural language queries?

    -AI can help by building queries based on natural language inputs, providing information about specific threats or indicators of compromise, and assisting in generating incident response playbooks.

  • What is the potential of AI in threat hunting, according to the script?

    -AI can potentially generate hypothetical attack scenarios that humans might not have thought of, aiding in proactive threat hunting within an environment.

  • What is the overall goal of integrating AI with cybersecurity, as mentioned in the script?

    -The overall goal is to move from a reactive to a more proactive approach to cybersecurity, making it more cost-effective and enhancing safety.
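
The natural-language query workflow described in the Q&A above can be sketched in a few lines. This is a toy illustration, not IBM's tooling: the query syntax, index name, and malware list below are all invented for the example, and a real assistant would use an LLM rather than keyword matching.

```python
import re

# Hypothetical list of malware families the toy system knows about.
KNOWN_MALWARE = {"emotet", "lockbit", "qakbot"}

def question_to_query(question):
    """Turn a plain-English analyst question into a toy SIEM-style query."""
    words = set(re.findall(r"[a-z0-9]+", question.lower()))
    families = sorted(words & KNOWN_MALWARE)
    if families:
        return "search index=endpoint threat_name IN (%s)" % ", ".join(families)
    return "search index=endpoint"  # fallback: broad query

print(question_to_query("Are we being affected by Emotet?"))
# → search index=endpoint threat_name IN (emotet)
```

The point is the interface shape: the analyst asks in natural language, and the system emits a query that can be run against the environment.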

Outlines

00:00

🤖 AI and Cybersecurity Risks and Intersections

The video script introduces the two trending topics of artificial intelligence (AI) and cybersecurity, highlighting their significance in both IT and society. It discusses the potential downsides of AI from a cybersecurity perspective, such as the ability to generate sophisticated phishing attacks using natural-sounding language through chatbots. It also touches on the challenges of detecting AI-generated misinformation due to 'hallucinations' or prompt injections. The script suggests that traditional methods of detecting such threats may become less effective, emphasizing the need for new strategies to counter these advanced AI-driven cybersecurity risks.

05:02

🛡️ Positive Applications of AI in Cybersecurity

The second paragraph delves into the positive aspects of AI in enhancing cybersecurity. It references the 'Cost of a Data Breach' survey, which underscores the substantial cost savings and improved response times achieved through the use of AI and automation. The script explains how AI, particularly machine learning, excels at identifying anomalies and outliers within large datasets, a crucial capability for detecting security threats. It also explores the potential of generative AI, such as foundation models and chatbots, for summarizing information, assisting with incident response, and generating playbooks. The paragraph concludes by highlighting the shift towards a more proactive cybersecurity approach facilitated by AI, aiming to create a more cost-effective and secure environment.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence, or AI, refers to the simulation of human intelligence in machines that are programmed to think and act like humans. In the context of the video, AI is a central topic because it discusses both the potential risks and benefits AI presents in the field of cybersecurity. For example, the script mentions how AI can be used to generate natural-sounding language for phishing attacks, which is a downside, but also how AI can be utilized for better analysis and automation in cybersecurity, which is a positive.

💡Cybersecurity

Cybersecurity is the practice of protecting internet-connected systems, including hardware, software, and data, from attack, damage, or unauthorized access. It is a key theme in the video as it explores how AI can impact cybersecurity both negatively and positively. The script discusses the potential for AI to enhance phishing attacks and misinformation, while also highlighting AI's role in improving data breach response times and costs.

💡Phishing Attacks

Phishing attacks are a type of online scam where attackers attempt to acquire sensitive information such as usernames, passwords, and credit card details by disguising themselves as a trustworthy entity in an electronic communication. The video script explains how AI can make these attacks more sophisticated by generating natural-sounding language, thereby making it harder to detect them.

💡Chatbots

Chatbots are computer programs designed to simulate conversation with human users, often used for customer service or information acquisition. In the video, chatbots are mentioned as tools that can be re-engineered to generate phishing emails, despite having protections against such actions, indicating a potential misuse of AI in cybersecurity.

💡Malware

Malware, short for malicious software, refers to any program or file that is harmful or unwanted. The script discusses how AI can be used to write code quickly, but also the potential for this AI-generated code to include malware or backdoors, which can be a significant cybersecurity risk.

💡Misinformation

Misinformation is false or inaccurate information that is spread unintentionally. The video script mentions 'hallucination' in generative AIs, where they may create false impressions by making up or conflating unrelated information. This can lead to the spread of misinformation, which is a concern in the context of cybersecurity.

💡Deepfakes

Deepfakes are synthetic media in which a person's image or voice is faked using AI techniques. The script uses deepfakes as an example of how AI can be used to create convincingly false representations of individuals, which poses a significant challenge in verifying the authenticity of digital content in cybersecurity.

💡Data Breach

A data breach occurs when unauthorized individuals gain access to sensitive information. The video script discusses a survey indicating that the use of AI and automation can significantly reduce the cost and response time of a data breach, highlighting the positive impact AI can have on cybersecurity.

💡Machine Learning

Machine learning is a subset of AI that provides systems the ability to learn and improve from experience without being explicitly programmed. The script explains that machine learning is used in cybersecurity to analyze large datasets and spot outliers and anomalies, which is crucial for identifying potential security threats.
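
As a minimal sketch of the outlier-spotting idea (illustrative only, not the production ML the video alludes to), a robust statistic like the median absolute deviation can flag an anomalous value in security telemetry:

```python
from statistics import median

def find_outliers(values, threshold=3.5):
    """Return indices whose modified z-score (MAD-based) exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread to measure against
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

daily_logins = [98, 102, 97, 101, 99, 100, 103, 750]  # last value is the anomaly
print(find_outliers(daily_logins))  # → [7]
```

Real deployments use trained models over many features, but the principle is the same: learn what "normal" looks like and surface what deviates from it.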

💡Automation

Automation refers to the use of technology to perform tasks without the need for human intervention. In the context of the video, automation is discussed as a way to improve cybersecurity by anticipating next steps and reducing the time to identify and contain breaches, which can lead to significant cost savings.
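
A toy sketch of the "anticipate the next step" idea: a playbook encoded as ordered steps, with a helper that suggests what to do next given what has already been completed. The step names are invented for illustration; real SOAR platforms are far richer.

```python
# Hypothetical phishing-response playbook (illustrative step names).
PHISHING_PLAYBOOK = [
    "isolate affected mailbox",
    "reset compromised credentials",
    "search mail logs for similar messages",
    "notify affected users",
    "file incident report",
]

def next_step(playbook, completed):
    """Return the first step not yet completed, or None if the playbook is done."""
    done = set(completed)
    for step in playbook:
        if step not in done:
            return step
    return None

print(next_step(PHISHING_PLAYBOOK, ["isolate affected mailbox"]))
# → reset compromised credentials
```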

💡Foundation Models

Foundation models, also known as large language models or generative AI chatbots, are advanced AI systems capable of understanding and generating human-like text. The video script suggests that these models can be used for summarizing information, incident summarization, and interacting with users in natural language, which can enhance cybersecurity operations.
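
The case-summarization use case can be sketched as follows. The model call here is a stub (`fake_llm`) standing in for a real LLM client, since no specific API is named in the video; the sketch only shows how many cases get packed into a single summarization prompt.

```python
def build_summary_prompt(cases, limit=4000):
    """Concatenate case notes into one summarization prompt."""
    body = "\n".join(f"- {c}" for c in cases)
    prompt = "Summarize the common trends across these security cases:\n" + body
    return prompt[:limit]  # crude truncation; real systems chunk instead

def fake_llm(prompt):
    # Placeholder for a real foundation-model API call.
    n = sum(1 for line in prompt.splitlines() if line.startswith("- "))
    return f"[summary of {n} cases]"

cases = ["Phishing email reported by finance team",
         "Credential stuffing attempt on VPN gateway",
         "Phishing email reported by HR"]
print(fake_llm(build_summary_prompt(cases)))  # → [summary of 3 cases]
```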

Highlights

Artificial intelligence and cybersecurity are two of the hottest topics in IT and society today.

The intersection of AI and cybersecurity is an even hotter topic.

AI can generate natural-sounding language, improving phishing attacks.

Prompt re-engineering can bypass chatbot protections against malicious use.

AI can write code quickly, but also potentially insert malware or backdoors.

AI suffers from 'hallucination', generating false or misleading information.

Attackers can perform prompt injection to insert bad information into AI systems.

Deepfakes use AI to convincingly impersonate people in videos.

AI and automation can save $1.76 million per data breach on average.

AI can reduce the time to identify and contain a breach by 108 days.

Machine learning excels at spotting outliers and anomalies in security.

Automation can anticipate next steps and assist in incident response.

Foundation models can summarize large amounts of information quickly.

AI can help generate incident response playbooks on the fly.

AI chatbots allow natural language interaction for querying technical systems.

AI can assist in threat hunting by generating hypothetical attack scenarios.

AI in cybersecurity aims to move from reactive to proactive security measures.

Transcripts

00:00

What are two of the hottest topics not only in IT, but in society these days? Well, if you said artificial intelligence and cybersecurity, I'd agree with you. Both are really hot. In fact, even your non-technical friends have heard of these and may be talking about them and asking you questions. And I'm going to suggest to you this intersection between the two is even hotter still. So what are we going to talk about in this video? I'm going to talk about what, from a cybersecurity standpoint, AI can do to you and what it can do for you. So let's take a look at that. We're going to start with some of the downsides first, and then we'll conclude with some positive things.

00:42

On the downside, what could AI do to us from a cybersecurity standpoint? Well, it turns out that a lot of times we're able to tell about a phishing attack because the English language of the writer is not so good. It's not their first language. However, you could now go into a chatbot and use it to generate very natural-sounding language. Even though you might say, "But Jeff, there are protections in some of these chatbots," so that if you tell it to write you a phishing email, it won't do it, there are also ways of re-engineering your prompt so that you can get past that. So this is one area where phishing attacks are going to get better. And the ways that we've been able to detect them in the past are not going to be so effective anymore.

01:27

What's another thing? Well, on the positive side, this generative AI and chatbots and things like that are able to write code for us. So if I want to, I can have it write code and do it really quickly and effectively. It also means it can write malware as well. It also means it could insert malware into the code that I have. It also means it could insert backdoors into the code that I have. So we have got to also verify, when we ask it to write code for us, that in fact the code that it's giving us is pure and is doing what we intend for it to do.

02:00

Another thing it could do to us: misinformation. How does this happen? Well, these are generative AIs. So one of the things that they suffer from is this issue we call hallucination, where it may make up information or conflate two things that are not really related to each other and give a false impression. Also, we could have a determined attacker who is doing what's known as a prompt injection, where they're inserting bad information into the system. Or they're attacking the corpus, that is, the body of knowledge that the system is based on. And if they were able to do that, then what comes out would be wrong information. So we have to be careful to guard against overreliance and make sure that we're verifying and testing our sources so that we can make sure that they're trustworthy.

02:52

One other example I'll give you here, and there are actually many, but I think this one's particularly interesting, is this idea of a deepfake. A deepfake is where we basically have an AI system that is able to copy your image and likeness, your mannerisms, your voice, your appearance, all of these things, to the point where someone is looking at a video of you and they can't tell if it really was an actual video of you or a deepfake where we could have you saying things that weren't true. And therefore, if we're going to trust this kind of system, we need a way to verify these things. But right now, the deepfake technology has gone so far ahead in a very short period of time that it's going to be hard to verify those kinds of things.

03:40

Okay, we've just talked about what AI can do to us. Now let's look at some positives. What can AI do for us in the cybersecurity space? It turns out a lot. In fact, we do a survey each year that we call the "Cost of a Data Breach" survey, and the report that came back this year indicated that the number one thing you can do to save on the cost of a data breach and improve your response time is the extensive use of AI and automation. And here's what it can do. On the one hand, it can save on average $1.76 million per data breach, with the average data breach costing four and a half million. That's a significant savings. It can also cut down the mean time to identify and contain a breach by 108 days. That makes a big difference. So we know this is effective.

04:29

Now, what are we doing to get these kinds of results? Well, it turns out a lot of what we do in this space is better analysis. We're going to analyze large data sets, lots of information that we have out there. It's very hard to find patterns if I give you a whole large dataset, but if I use a technology called machine learning, I can do a lot better job of spotting outliers and anomalies, which is what we want to do in security a lot.

05:01

Now, I mentioned machine learning. What is that? Well, if you think about AI in particular as this large sort of umbrella term with a number of technologies involved, well, machine learning is a subset of that that specifically deals with some of these kinds of analyses that I've just referred to. Machine learning is what is often used in the security space. We do it a lot because, again, it's very good at spotting anomalies and outliers and patterns, and that's what we need a lot of in the security space. So we're doing a lot of this today, and a lot of these results come from leveraging machine learning, which is a subfield of AI.

05:37

What else did I mention? Automation. Well, AI can help us with automation tasks as well, and I'll give you a few examples coming up. But some of the things it can do is anticipate what we need to do next. And some of those kinds of things really start coming in from the area of deep learning, which is a subfield of machine learning. And then there's this really new area that everyone is talking about these days: foundation models, or you may hear them called large language models, generative AI chatbots. They all exist in this space down here.

06:14

What can we start doing to leverage some of this stuff going forward? Well, it turns out a lot of things. Because one of the things that foundation models are really good at is summarizing. They can be fed a lot of information and then give you a very quick summary of that. Why would that be useful? Well, if you've got tons of documents you're trying to review, it could give you the net, the CliffsNotes version of that. Another good use case for this would be incident summarization and case summarization. If I'm seeing lots and lots of cases in my environment, this kind of technology could be used to tell me what the trends are among those cases. Are these things all related or are they all very different? And my guess is there are probably at least a few things that are similar about these. So that's another nice use case that we'll see coming in the future from generative AI and foundation models into cybersecurity.

07:19

Some other things we can do. We know these kinds of chatbots are good at interacting, so you can respond to them in natural language. You don't have to format your queries using a particular query language or a particular syntax. You use the natural language that you're used to. So for me, I would state in English, "Are we being affected by this particular kind of malware?" And maybe what it could do is build a query for me that I can then run in my environment, and it comes back and tells me, am I affected or not? And I can then ask more questions. "Tell me more about this kind of malware. What kinds of indicators of compromise are associated with this?" All of that gives me a very easy, intuitive way to get highly technical information out of the system and do this much faster.

08:09

Another thing we might want to do is generate playbooks. Playbooks are the things that we use in incident response when we're trying to figure out what we need to do once we've had an incident. So generating these on the fly, generative AI generating playbooks, you can see where there might be some type of crossover. This is a good use case for this technology as well. So expect to see more of that.

08:40

And in fact, there could be other types of things where we're using generative, creative technology, because these things really are creating. For instance, with threat hunting. A threat hunter is basically coming up with a hypothesis and saying, I wonder, if someone were to attack us, maybe they would do the following things. And we have a limitation in terms of our imagination. Sometimes the bad guys may dream up scenarios that we don't. So it might be useful to have a system that can dream up scenarios we didn't think of, using a generative AI to generate hypothetical cases that we then go out and automate and use for a threat hunt in our environment.

09:15

This is all really super exciting stuff, I think, and it shows exactly what we'll be able to do in this space, because what we want to be able to do is move away from being purely reactive to a more proactive way of doing cybersecurity. And that's the good news in this story. We've got AI and cybersecurity, and if they're working together, as you see here, we can end up with a more proactive solution that's more cost-effective and keeps us all much safer. Thanks for watching. If you found this video interesting and would like to learn more about cybersecurity, please remember to hit like and subscribe to this channel.

Related Tags

Artificial Intelligence, Cybersecurity, Phishing Attacks, Malware Code, Deepfakes, Data Breach, AI Automation, Machine Learning, Threat Hunting, Incident Response