Will AI Help or Hurt Cybersecurity? Definitely!

IBM Technology
9 Oct 2023 · 10:00

Summary

TL;DR: This video delves into the intersection of artificial intelligence and cybersecurity, exploring both the risks and the benefits. AI can enhance phishing attacks and generate malware, but it can also improve cybersecurity through automation and machine learning for anomaly detection. The video highlights the potential of AI in proactive threat hunting and in generating incident response playbooks, emphasizing the shift toward a more cost-effective and secure approach to cybersecurity.

Takeaways

  • Artificial Intelligence (AI) and cybersecurity are two of the most talked-about topics in IT and society today, with implications that extend beyond technical circles.
  • The intersection of AI and cybersecurity is particularly significant, with potential both for harm and benefit in the field of cybersecurity.
  • AI can be used to enhance phishing attacks by generating more natural-sounding language, and prompt re-engineering can bypass existing chatbot safeguards.
  • Generative AI, including chatbots, can write code but can also insert malware or backdoors into that code, so AI-generated code must be carefully verified.
  • Misinformation can be propagated by AI through 'hallucinations', where it conflates unrelated information or makes up details, and through prompt injections by attackers.
  • Deepfakes, where AI mimics a person's likeness and voice, pose a significant challenge for trust in digital media, as they can be difficult to distinguish from reality.
  • The use of AI and automation in cybersecurity can significantly reduce the cost of data breaches, saving an average of $1.76 million per breach and cutting the time to identify and contain breaches.
  • Machine learning, a subset of AI, is particularly effective at analyzing large datasets to spot outliers and anomalies, which is crucial for security.
  • Automation can anticipate next steps and assist with tasks in cybersecurity, such as generating incident response playbooks and conducting threat hunting.
  • Foundation models, also called large language models, can summarize large amounts of information quickly, aiding in incident and case summarization.
  • AI chatbots can interact in natural language, making it easier to query technical systems and retrieve information about threats and indicators of compromise.
  • AI can help create hypothetical attack scenarios for threat hunting, expanding beyond the limits of human imagination to proactively identify potential vulnerabilities.

Q & A

  • What are the two hottest topics mentioned in the script that are significant in both IT and society?

    -The two hottest topics mentioned are artificial intelligence (AI) and cybersecurity.

  • What is the intersection of AI and cybersecurity that the speaker suggests is even hotter?

    -The intersection is the use of AI from a cybersecurity standpoint, both for enhancing security measures and potentially creating new vulnerabilities.

  • How could AI potentially improve phishing attacks?

    -AI could improve phishing attacks by generating very natural-sounding language, removing the poor-English tell that often exposes phishing emails, and prompt re-engineering can bypass some chatbot protections.

  • What is the term used to describe AI making up information or conflating unrelated things?

    -The term used is 'hallucination,' which can lead to misinformation.

  • What is the potential risk of AI writing code for us, from a cybersecurity perspective?

    -AI could potentially write malware, insert backdoors into the code, or include malicious code that we may not detect.

  • What does the speaker suggest as the number one thing to reduce the cost of a data breach and improve response time?

    -The speaker suggests the extensive use of AI and automation as the number one thing to reduce the cost of a data breach and improve response time.

  • How much can the use of AI and automation save on average per data breach according to the 'Cost of a Data Breach' survey?

    -The use of AI and automation can save an average of $1.76 million per data breach, against an average breach cost of about $4.5 million.

  • What is the term for the technology that can spot outliers and anomalies effectively in large datasets?

    -The term is 'machine learning,' which is a subset of AI.

  • What is the potential use of generative AI in summarizing large documents or cases?

    -Generative AI can provide quick summaries of large documents or cases, helping to identify trends and key points efficiently.

  • How can AI assist in incident response by using natural language queries?

    -AI can help by building queries based on natural language inputs, providing information about specific threats or indicators of compromise, and assisting in generating incident response playbooks.

  • What is the potential of AI in threat hunting, according to the script?

    -AI can potentially generate hypothetical attack scenarios that humans might not have thought of, aiding in proactive threat hunting within an environment.

  • What is the overall goal of integrating AI with cybersecurity, as mentioned in the script?

    -The overall goal is to move from a reactive to a more proactive approach to cybersecurity, making it more cost-effective and enhancing safety.
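The natural-language querying described in the Q&A above can be sketched in a few lines. This is a toy illustration, not the tooling discussed in the video: a hypothetical keyword matcher stands in for the chatbot, and the `event_type` query syntax is invented.

```python
# Toy natural-language-to-query translator. In practice an LLM would do
# this mapping and the output would target a real SIEM query language;
# the keywords and the event_type syntax here are made up for the sketch.
def build_query(question: str) -> str:
    """Translate a plain-English question into a log-search filter."""
    q = question.lower()
    filters = []
    if "malware" in q:
        filters.append('event_type="malware_alert"')
    if "login" in q or "credential" in q:
        filters.append('event_type="auth_failure"')
    # Fall back to a match-everything filter when nothing is recognized.
    return " OR ".join(filters) or 'event_type="*"'

print(build_query("Are we being affected by this particular kind of malware?"))
# → event_type="malware_alert"
```

The analyst asks the question in English, the system produces a runnable query, and follow-up questions ("tell me more about this malware") refine it further.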

Outlines

00:00

šŸ¤– AI and Cybersecurity Risks and Intersections

The video script introduces the two trending topics of artificial intelligence (AI) and cybersecurity, highlighting their significance in both IT and society. It discusses the potential downsides of AI from a cybersecurity perspective, such as the ability to generate sophisticated phishing attacks using natural-sounding language through chatbots. It also touches on the challenges of detecting AI-generated misinformation due to 'hallucinations' or prompt injections. The script suggests that traditional methods of detecting such threats may become less effective, emphasizing the need for new strategies to counter these advanced AI-driven cybersecurity risks.

05:02

šŸ›”ļø Positive Applications of AI in Cybersecurity

The second paragraph delves into the positive aspects of AI in enhancing cybersecurity. It references the 'Cost of a Data Breach' survey, which underscores the substantial cost savings and improved response times achieved through the use of AI and automation. The script explains how AI, particularly machine learning, excels at identifying anomalies and outliers within large datasets, a crucial capability for detecting security threats. It also explores the potential of generative AI, such as foundation models and chatbots, for summarizing information, assisting with incident response, and generating playbooks. The paragraph concludes by highlighting the shift towards a more proactive cybersecurity approach facilitated by AI, aiming to create a more cost-effective and secure environment.

Keywords

Artificial Intelligence (AI)

Artificial Intelligence, or AI, refers to the simulation of human intelligence in machines that are programmed to think and act like humans. In the context of the video, AI is a central topic because it discusses both the potential risks and benefits AI presents in the field of cybersecurity. For example, the script mentions how AI can be used to generate natural-sounding language for phishing attacks, which is a downside, but also how AI can be utilized for better analysis and automation in cybersecurity, which is a positive.

Cybersecurity

Cybersecurity is the practice of protecting internet-connected systems, including hardware, software, and data, from attack, damage, or unauthorized access. It is a key theme in the video as it explores how AI can impact cybersecurity both negatively and positively. The script discusses the potential for AI to enhance phishing attacks and misinformation, while also highlighting AI's role in improving data breach response times and costs.

Phishing Attacks

Phishing attacks are a type of online scam where attackers attempt to acquire sensitive information such as usernames, passwords, and credit card details by disguising themselves as a trustworthy entity in an electronic communication. The video script explains how AI can make these attacks more sophisticated by generating natural-sounding language, thereby making it harder to detect them.

Chatbots

Chatbots are computer programs designed to simulate conversation with human users, often used for customer service or information acquisition. In the video, chatbots are mentioned as tools that can be re-engineered to generate phishing emails, despite having protections against such actions, indicating a potential misuse of AI in cybersecurity.

Malware

Malware, short for malicious software, refers to any program or file that is harmful or unwanted. The script discusses how AI can be used to write code quickly, but also the potential for this AI-generated code to include malware or backdoors, which can be a significant cybersecurity risk.

Misinformation

Misinformation is false or inaccurate information that is spread unintentionally. The video script mentions 'hallucination' in generative AIs, where they may create false impressions by making up or conflating unrelated information. This can lead to the spread of misinformation, which is a concern in the context of cybersecurity.

Deepfakes

Deepfakes are synthetic media in which a person's image or voice is faked using AI techniques. The script uses deepfakes as an example of how AI can be used to create convincingly false representations of individuals, which poses a significant challenge in verifying the authenticity of digital content in cybersecurity.

Data Breach

A data breach occurs when unauthorized individuals gain access to sensitive information. The video script discusses a survey indicating that the use of AI and automation can significantly reduce the cost and response time of a data breach, highlighting the positive impact AI can have on cybersecurity.

Machine Learning

Machine learning is a subset of AI that provides systems the ability to learn and improve from experience without being explicitly programmed. The script explains that machine learning is used in cybersecurity to analyze large datasets and spot outliers and anomalies, which is crucial for identifying potential security threats.
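As a minimal sketch of the anomaly-spotting idea, consider flagging values that sit far from the rest of a dataset. Real security products use far richer machine-learning models; this standard-deviation check only illustrates the principle, and the failed-login counts are invented.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    """Return the indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical daily failed-login counts; day 6 is the outlier.
failed_logins = [12, 15, 11, 14, 13, 12, 480, 15, 13, 14]
print(find_anomalies(failed_logins))  # → [6]
```

A human scanning thousands of such counters would miss most spikes; even this crude statistical pass surfaces the outlier instantly, which is the property the video attributes to machine learning at scale.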

Automation

Automation refers to the use of technology to perform tasks without the need for human intervention. In the context of the video, automation is discussed as a way to improve cybersecurity by anticipating next steps and reducing the time to identify and contain breaches, which can lead to significant cost savings.

Foundation Models

Foundation models, including the large language models that power generative AI chatbots, are advanced AI systems capable of understanding and generating human-like text. The video script suggests that these models can be used for summarizing information, incident and case summarization, and interacting with users in natural language, which can enhance cybersecurity operations.
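A foundation model summarizes by generating new text, which can't be reproduced in a few lines. The toy extractive version below only hints at the case-summarization workflow by ranking sentences by word frequency; the incident log is invented.

```python
import re
from collections import Counter

def extractive_summary(text: str, n: int = 2) -> list[str]:
    """Pick the n sentences whose words are most frequent overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        words = re.findall(r"[a-z']+", s.lower())
        return sum(freq[w] for w in words) / (len(words) or 1)
    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]  # keep original order

incident_log = (
    "Multiple hosts reported malware alerts overnight. "
    "The malware alerts share the same file hash. "
    "A printer ran out of toner."
)
print(extractive_summary(incident_log, n=2))
```

Because the two malware sentences share vocabulary, they score highest and the unrelated printer note drops out, which is the trend-spotting behavior the video describes for case summarization.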

Highlights

Artificial intelligence and cybersecurity are two of the hottest topics in IT and society today.

The intersection of AI and cybersecurity is an even hotter topic.

AI can generate natural-sounding language, improving phishing attacks.

Prompt re-engineering can bypass chatbot protections against malicious use.

AI can write code quickly, but also potentially insert malware or backdoors.

AI suffers from 'hallucination', generating false or misleading information.

Attackers can perform prompt injection to insert bad information into AI systems.

Deepfakes use AI to convincingly impersonate people in videos.

AI and automation can save $1.76 million per data breach on average.

AI can reduce the time to identify and contain a breach by 108 days.

Machine learning excels at spotting outliers and anomalies in security.

Automation can anticipate next steps and assist in incident response.

Foundation models can summarize large amounts of information quickly.

AI can help generate incident response playbooks on the fly.

AI chatbots allow natural language interaction for querying technical systems.

AI can assist in threat hunting by generating hypothetical attack scenarios.

AI in cybersecurity aims to move from reactive to proactive security measures.
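The threat-hunting highlight above can be illustrated with a trivial scenario enumerator. A generative model would invent far richer, novel attack paths; this sketch just crosses invented entry points with invented follow-on actions to show the "hypothesize first, hunt second" shape of the workflow.

```python
from itertools import product

# Illustrative, invented lists, loosely inspired by common attack stages.
ENTRY_POINTS = ["a phishing email", "an exposed VPN account", "a poisoned package"]
NEXT_STEPS = ["credential dumping", "lateral movement", "data staging"]

def hunt_hypotheses() -> list[str]:
    """Enumerate simple 'what if' scenarios a threat hunter could check."""
    return [f"Attacker gets in via {entry}, then attempts {step}."
            for entry, step in product(ENTRY_POINTS, NEXT_STEPS)]

for hypothesis in hunt_hypotheses()[:3]:
    print(hypothesis)
```

Each generated hypothesis would then be turned into concrete log searches, covering combinations a human hunter might not have thought to write down.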

Transcripts

00:00

What are two of the hottest topics not only in IT, but in society these days? Well, if you said artificial intelligence and cybersecurity, I'd agree with you. Both are really hot. In fact, even your non-technical friends have heard of these and may be talking about them and asking you questions. And I'm going to suggest to you that this intersection between the two is even hotter still. So what are we going to talk about in this video? I'm going to talk about what, from a cybersecurity standpoint, AI can do to you and what it can do for you. So let's take a look at that. We're going to start with some of the downsides first, and then we'll conclude with some positive things.

00:42

On the downside, what could AI do to us from a cybersecurity standpoint? Well, it turns out that a lot of times we're able to tell about a phishing attack because the English language of the writer is not so good. It's not their first language. However, you could now go into a chatbot and use it to generate very natural-sounding language. Now, you might say, "But Jeff, there are protections in some of these chatbots," so that if you tell it to write you a phishing email, it won't do it. There are also ways of re-engineering your prompt so that you can get past that. So this is one area where phishing attacks are going to get better, and the ways that we've been able to detect them in the past are not going to be so effective anymore.

01:27

What's another thing? Well, on the positive side, this generative AI and chatbots and things like that are able to write code for us. So if I want to, I can have it write code and do it really quickly and effectively. It also means it can write malware as well. It also means it could insert malware into the code that I have. It also means it could insert backdoors into the code that I have. So we have got to also verify, when we ask it to write code for us, that in fact the code it's giving us is pure and is doing what we intend for it to do.

02:00

Another thing it could do to us: misinformation. How does this happen? Well, these are generative AIs. So one of the things that they suffer from is this issue we call hallucination, where it may make up information or conflate two things that are not really related to each other and give a false impression. Also, we could have a determined attacker who is doing what's known as a prompt injection, where they're inserting bad information into the system. Or they're attacking the corpus, that is, the body of knowledge that the system is based on. And if they were able to do that, then what comes out would be wrong information. So we have to be careful to guard against overreliance and make sure that we're verifying and testing our sources so that we can make sure that they're trustworthy.

02:52

One other example I'll give you here, and there are actually many, but I think this one's particularly interesting, is this idea of a deepfake. A deepfake is where we basically have an AI system that is able to copy your image and likeness, your mannerisms, your voice, your appearance, all of these things, to the point where someone is looking at a video of you and they can't tell if it really was an actual video of you, or a deepfake where we could have you saying things that weren't true. And therefore, if we're going to trust this kind of system, we need a way to verify these things. But right now, deepfake technology has gone so far ahead in a very short period of time that it's going to be hard to verify those kinds of things.

03:40

Okay, we've just talked about what AI can do to us. Now let's look at some positives. What can AI do for us in the cybersecurity space? It turns out a lot. In fact, we do a survey each year that we call the "Cost of a Data Breach" survey, and the report that came back this year indicated that the number one thing you can do to save on the cost of a data breach and improve your response time is the extensive use of AI and automation. And here's what it can do. On the one hand, it can save on average $1.76 million per data breach, with the average data breach costing four and a half million. That's a significant savings. It can also cut down the mean time to identify and contain a breach by 108 days. That makes a big difference.

04:29

So we know this is effective. Now, what are we doing to get these kinds of results? Well, it turns out a lot of what we do in this space is better analysis. We're going to analyze large data sets, lots of information that we have out there. It's very hard to find patterns if I give you a whole large dataset, but if I use a technology called machine learning, I can do a lot better job of spotting outliers and anomalies, which is what we want to do in security a lot.

05:01

Now, I mentioned machine learning. What is that? Well, if you think about AI as this large sort of umbrella term with a number of technologies involved, machine learning is a subset of that, one that specifically deals with some of these kinds of analyses that I've just referred to. Machine learning is what is often used in the security space. We do it a lot because, again, it's very good at spotting anomalies and outliers and patterns, and that's what we need a lot of in the security space. So we're doing a lot of this today, and a lot of these results come from leveraging machine learning, which is a subfield of AI.

05:44

What else did I mention? Automation. Well, AI can help us with automation tasks as well, and I'll give you a few examples coming up. But some of the things it can do is anticipate what we need to do next. And some of those kinds of things really start coming in from the area of deep learning, which is a subfield of machine learning. And then there's this really new area that everyone is talking about these days: foundation models, or you may hear them called large language models, or generative AI chatbots. They all exist in this space.

06:26

What can we start doing? As I said, security has mostly leveraged machine learning in the past. What can we start doing to leverage some of this newer stuff going forward? Well, it turns out a lot of things. Because one of the things that foundation models are really good at is summarizing. They can be fed a lot of information and then give you a very quick summary of it. Why would that be useful? Well, if you've got tons of documents you're trying to review, it could give you the net, the cliff notes, of that. Another good use case for this would be incident summarization and case summarization. If I'm seeing lots and lots of cases in my environment, this kind of technology could be used to tell me what the trends are among those cases. Are these things all related, or are they all very different? And my guess is there are probably at least a few things that are similar about these. So that's another nice use case that we'll see coming in the future from generative AI, foundation models, into cybersecurity.

07:19

Some other things we can do. We know these kinds of chatbots are good at interacting, so you can respond to them in natural language. You don't have to format your queries using a particular query language or a particular syntax. You use the natural language that you're used to. So for me, I would state in English, "Are we being affected by this particular kind of malware?" And maybe what it could do is build a query for me that I can then run in my environment, and it comes back and tells me, am I affected or not? And I can then ask more questions: "Tell me more about this kind of malware. What kind of indicators of compromise are associated with it?" All of that gives me a very easy, intuitive way to get information that is highly technical out of the system, and to do it much faster.

08:09

Another thing we might want to do is generate playbooks. Playbooks are the things that we use in incident response when we're trying to figure out what we need to do once we've had an incident. So generating these on the fly, generative AI generating playbooks, you can see where there might be some type of crossover. This is a good use case also for this technology. So expect to see more of that.

08:40

And in fact, there could be other types of things where we're using generative, creative technology, because these things really are creating. For instance, with threat hunting. A threat hunter is basically coming up with a hypothesis and saying, "I wonder, if someone were to attack us, maybe they would do the following things." And we have a limitation in terms of our imagination. Sometimes the bad guys may dream up scenarios that we don't. So it might be useful to have a system that can dream up scenarios we didn't think of, using a generative AI to generate hypothetical cases that we then go out and automate and use for a threat hunt in our environment.

09:21

This is all really super exciting stuff, I think, and it shows exactly what we'll be able to do in this space, because what we want to be able to do is move away from being purely reactive to a more proactive way of doing cybersecurity. And that's the good news in this story. We've got AI and cybersecurity, and if they're working together, as you see here, we can end up with a more proactive solution that's more cost-effective and keeps us all much safer. Thanks for watching. If you found this video interesting and would like to learn more about cybersecurity, please remember to hit like and subscribe to this channel.


Related Tags
Artificial Intelligence · Cybersecurity · Phishing Attacks · Malware Code · Deepfakes · Data Breach · AI Automation · Machine Learning · Threat Hunting · Incident Response