How Microsoft Copilot for Security works

AI, Copilots & ChatGPT at Microsoft
15 Nov 2023 · 16:18

Summary

TL;DR: Microsoft's Security Copilot is a generative-AI security assistant designed to enhance organizational security. It integrates with various Microsoft services, including Microsoft Defender XDR, Intune, Entra, and Purview, to streamline security tasks. Augmented with Microsoft's cybersecurity expertise, it offers a stateful experience for tracking investigations and a natural language interface to security data. Security Copilot can analyze scripts, generate informed responses, and create reports with multi-step sequences using Promptbooks. It uses Low-Rank Adaptation (LoRA) fine-tuning to specialize the underlying model for cybersecurity, and evergreen Threat Intelligence with real-time retrieval to stay current with new and trending threats. The tool accelerates incident investigation, automates security processes, and helps generate executive summaries, making it a valuable asset for security teams and analysts.

Takeaways

  • πŸš€ **Microsoft's Security Copilot** is a new AI assistant designed to help organizations with security-related tasks using generative AI.
  • 🧩 **Integration with Microsoft Services**: Security Copilot is integrated with various Microsoft services like Microsoft Defender XDR, Intune, Entra, and Purview.
  • πŸ’‘ **Stateful Experience**: Unlike other Copilots, Security Copilot remembers previous sessions to allow for easier return to ongoing investigations.
  • πŸ” **Enhanced Language Model**: It uses an augmented GPT model with security skills, improving its ability to work with security signals in the user's environment.
  • πŸ“Œ **Managed Plugins**: Security Copilot offers a range of managed plugins for identity, device endpoints, incidents, threat intelligence, and data security.
  • πŸ“ˆ **Data Enrichment**: The assistant uses data from its plugins to enrich the investigation process and generate informed responses.
  • πŸ”§ **Automated Security Processes**: It can automate multi-step security processes using Promptbooks, as demonstrated with the analysis of a suspicious PowerShell script.
  • πŸ†š **Comparison to Unmodified GPT**: Security Copilot outperforms an unmodified GPT model due to its fine-tuning and grounding in security data, as shown in the CVE and domain lookup examples.
  • πŸ› οΈ **Fine-Tuning and Specialization**: Security Copilot employs methods like LoRA fine-tuning to specialize the LLM for cybersecurity, making it more effective than general AI models.
  • βš™οΈ **Orchestration Engine**: The assistant has a built-in engine that retrieves, ranks, and grounds data for the LLM, ensuring responses are informed and within token limits.
  • πŸ“ **Investigation Workflow**: Security Copilot aids in the entire incident investigation process, from initial queries to generating reports, and it maintains a stateful session for continuity.
  • πŸ“Š **Reporting and Summaries**: It can generate both technical and non-technical summaries, making complex security incidents understandable for various stakeholders.

Q & A

  • What is Microsoft's Security Copilot?

    -Microsoft's Security Copilot is a security AI assistant that leverages generative AI and Microsoft's cybersecurity expertise to help perform common security-related tasks quickly.

  • How does Security Copilot integrate with other Microsoft services?

    -Security Copilot integrates with services like Microsoft Defender XDR, Microsoft Intune for endpoint management, Microsoft Entra for identity and access management, and Microsoft Purview for data security.

  • What is the significance of Security Copilot's stateful experience?

    -The stateful experience allows users to easily return to investigations from previous sessions, providing continuity and context for ongoing security tasks.

  • How does Security Copilot enhance the large language model training?

    -Security Copilot enhances the large language model training by augmenting it with security skills, enabling it to work effectively with security signals in the user's environment.

  • What is the role of managed plugins in Security Copilot?

    -Managed plugins in Security Copilot provide access to various Microsoft and third-party services, allowing the AI to gather and analyze a broad range of security data.

  • How does Security Copilot assist in automating security process steps?

    -Security Copilot uses Promptbooks to automate multi-step sequences in security processes, providing a step-by-step breakdown of tactics used by exploits for clear and understandable analysis.

  • What is the advantage of using Security Copilot over an off-the-shelf large language model?

    -Security Copilot offers a fine-tuned, enterprise-grade natural language interface specific to security data, providing more accurate, relevant, and informed responses compared to a general off-the-shelf model.

  • How does Security Copilot handle real-time data retrieval and threat intelligence?

    -Security Copilot uses evergreen Threat Intelligence with real-time retrieval to ensure it's always up-to-date with new and trending threats.

  • Can Security Copilot help non-experts perform expert-level security analysis?

    -Yes, Security Copilot is designed to provide expert advice and perform advanced security analysis, even for users who are not security experts themselves.

  • What is the process for a security analyst using Security Copilot to investigate an incident?

    -A security analyst can use Security Copilot to query user account statuses, analyze login attempts, assess risk levels, generate and run security queries, and correlate alerts with incidents for a comprehensive investigation.

  • How does Security Copilot assist in generating reports and summaries?

    -Security Copilot can generate non-technical executive-level summaries and pin board summaries for SecOp teams, using session context to create thorough and easily understandable reports.

  • What are the future integration plans for Security Copilot?

    -Security Copilot is planned to be integrated with Microsoft Defender for Cloud and other plugins in the future, expanding its capabilities across various Microsoft admin portals.

Outlines

00:00

πŸ€– Introduction to Microsoft's Security Copilot

The video introduces Microsoft's Security Copilot, an AI assistant that leverages GPT-powered natural language to enhance an organization's security operations. It is equipped with Microsoft's cybersecurity expertise and can perform common security-related tasks quickly using generative AI. The assistant is integrated with various Microsoft services such as Microsoft Defender XDR, Intune, Entra, and Purview. Ryan Munsch, a member of the team that built Security Copilot, joins the discussion to explain how generative AI can assist security teams facing skill and staffing shortages and the increased frequency of cyber attacks.

05:01

πŸ” Security Copilot's Capabilities and Fine-Tuning

The discussion delves into the capabilities of Security Copilot, emphasizing its use of fine-tuning and grounding data from various plugins to provide a superior experience over off-the-shelf models. It demonstrates how Security Copilot can analyze a PowerShell script for malicious content and generate a step-by-step breakdown of the exploit tactics. The video also compares Security Copilot's responses to those of an unmodified GPT model, highlighting the importance of security context and the challenges of using a generic model for specialized tasks like security analysis.

10:01

🚨 Incident Investigation Process with Security Copilot

The video outlines a hypothetical process a security analyst might follow when using Security Copilot to investigate an incident. It begins with a support call regarding a user's access issue and progresses through various prompts to Security Copilot that reveal a potential security breach. The assistant helps correlate alerts with an incident, provides a summary, and suggests further investigation steps. It also demonstrates the generation of PowerShell scripts for remediation and the use of Threat Intelligence to understand the tactics of the threat actor involved.

15:03

πŸ“š Security Copilot's Integration Across Microsoft Services

The final part of the video discusses the integration of Security Copilot across various Microsoft admin portals, emphasizing the time-saving benefits for different roles such as security analysts, endpoint admins, identity admins, and data security admins. It highlights how Security Copilot can provide summaries, automate tasks, and generate insights within each role's respective Microsoft services. The video concludes with an invitation for viewers to join the early access program for Security Copilot and help shape its future capabilities.

Keywords

πŸ’‘Generative AI

Generative AI refers to artificial intelligence systems that are capable of creating new content, such as text, images, or music, rather than just recognizing or analyzing existing content. In the context of the video, Microsoft's Security Copilot utilizes generative AI to assist in security-related tasks, automating processes, and generating responses to security incidents. It is a key technology that enables the quick and informed handling of security threats.

πŸ’‘Security Copilot

Security Copilot is a new security AI assistant developed by Microsoft, designed to help perform common security-related tasks quickly using generative AI. It is integrated with Microsoft's cybersecurity expertise and various Microsoft services such as Defender, Intune, Entra, and Purview. The tool is portrayed as a significant aid in managing and responding to security incidents within an organization.

πŸ’‘Microsoft Defender XDR

Microsoft Defender XDR is an extended detection and response (XDR) platform that correlates signals across endpoints, identities, email, and cloud apps to provide comprehensive protection against sophisticated cyber threats. In the video, it is mentioned as one of the embedded experiences within which Security Copilot can be utilized, highlighting its role in the broader ecosystem of Microsoft's security offerings.

πŸ’‘Endpoint Management

Endpoint management involves the processes and tools used to control and manage the various devices that connect to a network. Microsoft Intune for endpoint management is referenced in the script as one of the managed plugins that integrate with Security Copilot, allowing for streamlined security operations across an organization's devices.

πŸ’‘Identity and Access Management (IAM)

IAM refers to the security discipline of ensuring that proper people in an organization have the appropriate access to information and resources. Microsoft Entra is mentioned as a tool for IAM that works in conjunction with Security Copilot to manage identities and permissions, enhancing the security posture of an organization.

πŸ’‘Data Security

Data security involves protecting the confidentiality, integrity, and availability of data stored in digital or physical form. Microsoft Purview is highlighted in the video as a tool for data security and is integrated with Security Copilot to help manage and protect sensitive information within an organization.

πŸ’‘CVE (Common Vulnerabilities and Exposures)

CVE is a system for identifying and cataloging known vulnerabilities and exposures in software and hardware. It is used in the video to demonstrate how Security Copilot can provide detailed information about specific vulnerabilities, which is crucial for security analysts when addressing potential threats.

πŸ’‘Threat Intelligence

Threat intelligence is the process of gathering, analyzing, and disseminating information about potential threats to an organization's information assets. Security Copilot is shown to utilize threat intelligence to keep the system up-to-date with new and trending threats, which is essential for proactive security measures.

πŸ’‘LoRA fine-tuning

LoRA (Low-Rank Adaptation) fine-tuning is a parameter-efficient method for adapting a large language model (LLM) to a specific domain, such as security. In the video, it is mentioned as the technique Microsoft uses to specialize the model behind Security Copilot for cybersecurity tasks, making it more effective on those tasks than a general-purpose LLM.
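
As a rough illustration of the idea: LoRA freezes the pretrained weights and learns a small low-rank update (two narrow matrices) on top of them, so only a tiny fraction of the parameters is trained for the specialty domain. The PyTorch sketch below is a generic, minimal example of that mechanism, not Microsoft's implementation; the layer size, rank, and training objective are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: y = base(x) + x(BA)^T * scale."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Toy usage: adapt one stand-in projection layer and train only the adapter parameters.
pretrained_proj = nn.Linear(4096, 4096)    # stand-in for one frozen layer of a pretrained model
adapted = LoRALinear(pretrained_proj, rank=8)
trainable = [p for p in adapted.parameters() if p.requires_grad]   # just A and B
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

x = torch.randn(2, 4096)                   # placeholder batch of domain-specific activations
loss = adapted(x).pow(2).mean()            # placeholder objective, not a real training loss
loss.backward()
optimizer.step()
```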

πŸ’‘Incident Response

Incident response is a critical process that involves preparing for and responding to a cybersecurity breach or incident. The video demonstrates how Security Copilot can expedite the incident response process by correlating alerts, generating summaries, and providing insights into the nature of the incident.

πŸ’‘Ransomware

Ransomware is a type of malicious software that encrypts a victim's data and demands payment to restore access. In the video, a scenario involving a potential human-operated ransomware attack is discussed, illustrating the severity of such incidents and the role of Security Copilot in managing and mitigating them.

πŸ’‘Compliance

Compliance refers to the adherence to laws, regulations, standards, or policies. In the context of the video, a device's compliance with security policies is mentioned, emphasizing the importance of policy adherence in preventing security breaches and the role of tools like Microsoft Intune in managing compliance.

Highlights

Microsoft's Security Copilot is a new AI assistant that leverages generative AI to help organizations perform common security-related tasks quickly.

Security Copilot is equipped with Microsoft's cybersecurity expertise and can be integrated with various Microsoft services including Defender XDR, Intune, Entra, and Purview.

The tool can help security teams facing skill and staffing shortages by automating responses to security incidents, threats, and vulnerabilities.

Security Copilot provides a stateful experience, allowing users to easily return to investigations from previous sessions.

The AI has been trained with security skills and can work with signals in the user's environment to improve its performance.

Security Copilot offers a broad range of managed plugins, including third-party options like ServiceNow for incident management.

The tool can generate informed responses to security incidents by grounding its AI in the data available from the connected plugins.

Security Copilot uses Promptbooks to automate multi-step security processes, such as suspicious script analysis.

A comparison between Security Copilot and an off-the-shelf GPT model shows the former's advantage in providing detailed and useful responses to security-related prompts.

Security Copilot utilizes Low-Rank Adaptation (LoRA) fine-tuning to specialize the LLM training for cybersecurity.

The tool provides built-in cyber-specific skills and real-time Threat Intelligence to stay up-to-date with new and trending threats.

Security Copilot can assist in the investigation process by correlating alerts with incidents and providing summaries of potential threats.

The tool can generate PowerShell scripts for remediation actions, addressing the short-term memory problem often faced by security analysts.

Security Copilot offers stateful sessions, allowing for the entire investigation context, including pin boards, to be saved and shared.

The tool can provide non-technical summaries suitable for executive-level reporting, simplifying communication of complex security incidents.

Security Copilot is integrated across various Microsoft admin portals, streamlining experiences for different admin roles like endpoint, identity, and data security admins.

The AI assistant can help automate common tasks and generate insights in identity governance and data loss prevention, improving response times.

Users can join the early access program at aka.ms/SecurityCopilot to start using the tool and contribute to its development.

Transcript

(music)

- What if I told you that you could use GPT-powered natural language to investigate and respond to security incidents, threats, and vulnerabilities facing your organization right now? Well, today we're going to take a look at how Microsoft's Security Copilot, a new security AI assistant skilled with Microsoft's vast cybersecurity expertise, can help you perform common security-related tasks quickly using generative AI. And this includes embedded experiences within the new Microsoft Defender XDR, Microsoft Intune for endpoint management, Microsoft Entra for identity and access management, and Microsoft Purview for data security and much more. And joining me today for a deeper dive into Security Copilot is Ryan Munsch, who's on the team that built it.

- Thanks, Jeremy. It's great to be here. And I'm excited to share more about what we've done with generative AI and security.

- This is a topic I've really been looking forward to because I can see a lot of benefits here. You know, security teams, they're stretched pretty thin these days. As we build more and more tools, though, to help them harden security or detect and respond to incidents, it's still a struggle to parse through all that information that's available to them and respond quickly.

- Right, there are definitely skill and staffing shortages at play here. And when you add to that the increased frequency of attacks, even the most skilled teams can benefit from generative AI. In fact, think of Security Copilot as an enterprise-grade natural language interface to your organization's security data. Let me show you.

I'm in the Microsoft Security Copilot right now. Notice that unlike other Microsoft Copilots, it's a stateful experience to let you easily return to investigations from previous Copilot sessions. Now, this isn't an ordinary instance of GPT. We've augmented the large language model training with security skills so that it can work with the signal in your environment, and the more quality security signals it has access to, the better. In the bottom left corner are your managed plugins. There's Microsoft Entra for identity, Microsoft Intune for device endpoints, Microsoft Defender plugins for incidents, Threat Intelligence, and more, as well as Microsoft Purview for data security and Microsoft Sentinel, our cloud-based SIEM. And you even have options for third-party plugins like you see here with ServiceNow for incidents.

- This is a real breadth of information, then, that basically Security Copilot can use to help investigate incidents and also generate informed responses later.

- Yeah, that's all data that can ground and enrich the Security Copilot experience, and the prompt experience brings additional skills too. Notice that like other Copilot experiences, it proactively suggests prompts to get you started, like this one: "Show high severity incidents and recent threat intelligence." That said, it gets even more powerful with multi-step sequences using Promptbooks. I'll open this one for suspicious script analysis to automate security process steps, and we can try it out. I have a PowerShell script in my clipboard that I'll paste into the prompt, and in just a few moments, Copilot has safely reverse engineered the malware in the script with a step-by-step breakdown of the tactics used by the exploit in a way that's clear and understandable.
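
Conceptually, a Promptbook like the one demonstrated here is an ordered set of prompts executed against a single stateful session, with the analyst's input (the pasted script) slotted into the first step and each response feeding context for the next. The short Python sketch below illustrates only that pattern; ask_copilot, run_promptbook, and the step wording are hypothetical placeholders, not the Security Copilot API.

```python
# Minimal sketch of a promptbook-style multi-step sequence (illustrative only).
# `ask_copilot` is a hypothetical helper standing in for a call to an assistant;
# it appends each turn to a shared session so later steps see the earlier context.

def ask_copilot(session: list[dict], prompt: str) -> str:
    session.append({"role": "user", "content": prompt})
    response = f"<model response to: {prompt[:60]}...>"   # placeholder for a real LLM call
    session.append({"role": "assistant", "content": response})
    return response

# The "promptbook": an ordered list of prompt templates; {script} is filled in
# from the analyst's input, and every step runs against the same stateful session.
SUSPICIOUS_SCRIPT_ANALYSIS = [
    "Deobfuscate and summarize what the following script does:\n{script}",
    "List, step by step, the tactics and techniques the script uses.",
    "Assess whether the behavior is likely malicious and explain why.",
    "Draft a short incident note suitable for a ticket.",
]

def run_promptbook(steps: list[str], **inputs: str) -> list[str]:
    session: list[dict] = []
    return [ask_copilot(session, step.format(**inputs)) for step in steps]

if __name__ == "__main__":
    pasted_script = "powershell -enc SQBFAFgA..."   # the suspect script an analyst would paste
    for answer in run_promptbook(SUSPICIOUS_SCRIPT_ANALYSIS, script=pasted_script):
        print(answer, "\n")
```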

- Okay, so what's stopping someone, then, from using maybe an off-the-shelf large language model? And maybe a lot of people are going to say and think that if I just use the right prompts, I'll be able to get the response that I want.

- So that's partly true, but it's not quite that simple. Let's demonstrate this together. I'll use Security Copilot, and you can use the GPT playground using an unmodified GPT in the Azure OpenAI service. We'll run the exact same prompts.

- Yep, let's do it. Sounds fun. So right now, I've actually got the GPT playground open. On the right, you're seeing my screen, the Azure OpenAI Studio. It's actually running a GPT-4 instance. And on the left, you're seeing Security Copilot. Now, we're going to start with a Common Vulnerability and Exposure, or CVE as we refer to it normally. And Ryan and I are both going to submit the same prompts and describe this particular CVE. And that's going to take a moment to run and analyze the data, especially on the Security Copilot side. It's chewing through that and getting everything ready. And when it's finished, you'll see that the off-the-shelf model on the right knows what a CVE is, but otherwise, this response isn't useful at all. On the Security Copilot side, you can see that it gives us the details for this CVE.

- So let's try something else. We'll run another prompt. We'll prompt it about a suspicious domain, VectorsAndArrows.com. Notice that while Security Copilot gathers the information before formulating a response, the GPT-4 instance on the right acknowledges that it actually can't find the information. Also, we can see here in Security Copilot's response that it knows that if we're asking about this domain, it likely involves a security event. So it finds all of the matching IP addresses that the domain resolves to, the ASNs, or Autonomous System Numbers, related to it, and corresponding organizations.

- And the differences that you're seeing here come down to the fact that the off-the-shelf large language model relies on general training, whereas the prompts sent to Security Copilot on your side carry the security context, using all that fine-tuning and grounding data from all the plugins that you showed earlier.

- That said, of course the off-the-shelf model you use could be grounded with more data, and you can add up to three plugins to provide additional security data. But you'd still need to build an orchestration engine to find and retrieve the data, rank it for relevance, and then add the grounding data to your prompt with guardrails to stay below your token limits before presenting this information to the LLM to generate an informed response. And that's just scratching the surface. For example, you'd also want to ensure that it provides citations with the right legal assurances. We do all of this and more right out of the box.
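
What Ryan describes here, retrieving grounding data from plugins, ranking it for relevance, trimming it to a token budget, and then handing the grounded prompt to the LLM, is a retrieval-augmented generation loop. The sketch below is a generic, minimal illustration of that loop and nothing more; the plugin, scoring, tokenizer, and LLM functions are hypothetical stand-ins, not Security Copilot internals.

```python
# Minimal retrieve -> rank -> ground -> generate loop (illustrative only).
# `plugins` are hypothetical callables returning text snippets for a query;
# `complete` is a hypothetical LLM call; the tokenizer and scorer are toy stand-ins.

def count_tokens(text: str) -> int:
    return len(text.split())                 # crude stand-in for a real tokenizer

def relevance(query: str, snippet: str) -> float:
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / (len(q) or 1)        # toy lexical overlap score

def build_grounded_prompt(query: str, plugins: list, token_budget: int = 3000) -> str:
    snippets = [s for plugin in plugins for s in plugin(query)]        # 1. retrieve
    snippets.sort(key=lambda s: relevance(query, s), reverse=True)     # 2. rank
    grounding, used = [], 0
    for s in snippets:                                                 # 3. guardrail: stay under the token budget
        cost = count_tokens(s)
        if used + cost > token_budget:
            break
        grounding.append(s)
        used += cost
    context = "\n\n".join(grounding)
    return f"Use only the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

def answer(query: str, plugins: list, complete) -> str:
    return complete(build_grounded_prompt(query, plugins))             # 4. generate

# Example wiring with a fake threat-intelligence plugin and a fake LLM:
fake_ti_plugin = lambda q: ["VectorsAndArrows.com resolves to 203.0.113.7 (made-up example data)."]
fake_llm = lambda prompt: f"<model answer grounded on {count_tokens(prompt)} prompt tokens>"
print(answer("What do we know about VectorsAndArrows.com?", [fake_ti_plugin], fake_llm))
```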

- All right, so I want to go back to the fine-tuning aspect. You know, we recently had Mark Russinovich on, CTO of Azure, and he explained how a method called Low-Rank Adaptive fine-tuning, or LoRA fine-tuning, works to pinpoint LLM training specific to a skill like security, and this is a very specialized process. So it's not as simple as building out your own solution, even if you were a security expert.

- Exactly, and ultimately, you need to understand how the underlying AI processes work. And we've done all of that work for you based on our own extensive cybersecurity expertise. We've been partnered with OpenAI for years now on what it takes to build the most advanced models, and so we have built an AI supercomputer with a specialized hardware and software stack in Azure to run them. And with that foundation, we use the LoRA fine-tuning method to give the LLM more training specific to cybersecurity analysis, detection, response, summarization, and more. Then, using evergreen Threat Intelligence with real-time retrieval, we also help ensure that it's always up-to-date with new and trending threats. And like I showed earlier, we also provide built-in cyber-specific skills in Promptbooks so that you can get multi-step prompts built with insightful responses without being a prompt expert. And this is something we've been working on for years to get the processes and the LLMs' underlying knowledge base up to the task of cybersecurity.

- Okay, so now you've explained how it works. So can we walk through a process maybe a security analyst might use as they use Security Copilot to investigate an incident?

- Definitely. In some cases, these investigations can start out in something like Microsoft Defender or simply as a support call, so let's start there. In this case, I'm on an active call with a user who can't access her device. I'll prompt Security Copilot with, "What is the status of the user account for Lynne Robbins? Is it locked out?" After reasoning across information from its connected plugins, it confirms that Lynne's account is disabled, with more details. Let's keep going: "What are the three most recent login attempts from the user?" And I got a few more clues back. Lynne not only has had multiple failed login attempts, but they've come from different devices and locations. This account is likely compromised. Now, I'll ask, "Is the user considered risky? If so, why?" And it looks like she has a high risk level now, but we need to find out more details. So I'll prompt Copilot with a forward slash to use a security-specific skill to generate and run Defender hunting queries. From the hunt, it's clear there is a ransomware event, and lateral movement is occurring within Woodgrove. We need to correlate it now with an incident to see it in aggregate. We can do that by checking for security incidents on the same day, so I'll enter that prompt for that day. Now, I'll expand the response, and from the top line, I can correlate the alerts with incident 19388. So now, I can simply ask for a summary of this incident. I can assess everything captured by Defender as well as anything related to the response process. And I now know it's a potential human-operated ransomware attack, and this is bad.
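
The correlation step described above, pulling the day's security incidents so the hunt results can be matched to one of them, can also be approached programmatically outside of Copilot. The sketch below shows one illustrative way to list a day's incidents through the Microsoft Graph security API; it assumes an app granted the SecurityIncident.Read.All permission and a token acquired elsewhere, and the date and fields are placeholders rather than details from the video.

```python
# Illustrative sketch: list the security incidents created on a given day via the
# Microsoft Graph security API. Assumes an app granted SecurityIncident.Read.All
# and a bearer token obtained elsewhere; the date and printed fields are placeholders.
import requests

def list_incidents_for_day(token: str, day: str = "2023-11-15") -> None:
    url = "https://graph.microsoft.com/v1.0/security/incidents"
    params = {
        "$filter": f"createdDateTime ge {day}T00:00:00Z and createdDateTime le {day}T23:59:59Z",
        "$top": "50",
    }
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(url, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    for incident in resp.json().get("value", []):
        print(incident.get("id"), incident.get("severity"), incident.get("displayName"))

# list_incidents_for_day(acquire_token())   # token acquisition (e.g., via MSAL) not shown here
```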

- Right, and this is a big deal because all this went pretty quickly, which is a huge advantage, especially when you're trying to contain a threat.

- Yes, Security Copilot can speed up investigation significantly. In fact, to save a little more time showing the rest of the investigation and also the statefulness of it, I'll move to one that I've previously run. Here's the incident we just looked up. Beyond the severity level, we can also see when the incident was first detected, the alert generation, and a bunch of other information. There are associated devices, threat actors, protocols used, processes, and login attempts from our user, Lynne. It tells me the investigation actions taken already and some remediation actions taken with real-time attack disruption to automatically contain the threat. And continuing on, I prompted it for associated entities. It provided more devices, users, and IPs. Then, to start the remediation, I asked it to generate a PowerShell script to go check on the state of the device's SMB configuration, which in our case was used to move laterally between systems.

- And the script generation here is super useful, because when I write scripts, it's always in short-term memory only, so I have to look it up either in reference materials or command line help, maybe ping Jeffrey Snover, and then it kind of exits out the other side of my head.

- Yeah, my algebra teacher had a similar complaint about my short-term memory. But you're right, Security Copilot can now solve the short-term memory problem. Now, let's go and focus on the initial machine compromised. Here, I used another skill to find host name access records for the device in question, and it found one access record. So we saw the lateral movement and a primary refresh token that was generated along the way. Here I've generated a hunting query using Microsoft Sentinel to figure out how the attacker did it and look for security events associated with the IP address on the day of the account lockout. Security Copilot generated a Kusto Query Language, or KQL, statement for me. Then it runs that query for me here too.
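
To make the hunting-query step concrete, here is an illustrative example of the kind of KQL such a prompt might produce, submitted to a Log Analytics workspace with the azure-monitor-query SDK. The query text, table and column names, dates, and IP address are assumptions chosen for illustration, not the statement generated in the video.

```python
# Illustrative only: run a Sentinel-style KQL hunting query against a Log Analytics
# workspace with the azure-monitor-query SDK. The KQL below is a made-up example in
# the spirit of the query described here; table, columns, dates, and IP are assumptions.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

KQL = """
SecurityEvent
| where TimeGenerated between (datetime(2023-11-14) .. datetime(2023-11-15))
| where IpAddress == "203.0.113.45"          // placeholder IP from the investigation
| project TimeGenerated, Account, Computer, Activity, IpAddress
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",   # placeholder
    query=KQL,
    timespan=timedelta(days=2),
)
for table in result.tables:
    for row in table.rows:
        print(list(row))
```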

All of this helps me understand the level of attack infiltration across my network, and next, I can move on to the devices associated with our user, Lynne. Here I've checked whether the PARKCITY Win10S device was compliant, and it knew where to pull the device information from; in this case, it's Intune. It looks like the device was non-compliant, failing to meet a Defender for Endpoint policy. I don't work on the IT side of the house, so next, I asked it for more information about what the policy does and why the device isn't compliant. It looks like the device was not within the group scope for the app policy assignment, and as a result, the device was exposed and became an attack target.

- And this really shows how easy it is, then, to continue that investigation, maybe in areas where you're not an expert. So what's the next step?

- Well, that's the great thing about Security Copilot: you don't have to be an expert to get expert advice. And let's dive into Threat Intelligence, in this case to understand the techniques in use by the actor and other entry points they could exploit. Our threat actor name from the incident summary was pretty memorable for me, so I prompted Security Copilot for some information about the sea cow of threat actor groups, Manatee Tempest. And this response, summary aside, gives insight into the different techniques used for exploits. It's likely Lynne fell victim to a drive-by download. But more importantly, what I learned is that I need to rally my security organization around analysis of any Cobalt Strike or Mythic payloads placed in my environment.

And as an analyst, a lot of my time is spent writing summary reports for people who wouldn't be as deep as I am on an incident like this. So I prompted Security Copilot to provide a non-technical, executive-level summary for our company leadership. And using the session context, it generated a thorough report that's pretty easy to understand. And again, just to highlight the transparency here, if I expand the steps it took, then look at the first one, you can see the process the orchestrator used to develop a plan, gather context from the session, and then determine which skills to use, as well as the rest of the logic and prompt details that are presented to the fine-tuned LLM to generate our response. When finished, from here I can download and export it to a Word doc or Mail, or just copy it to my clipboard.

And beyond high-level reports, Copilot can even help generate more immediate summaries for SecOps teams, like pin boards. As I was going, I selected the turns of my investigation I wanted to highlight later using pins, then used Copilot to generate a pin board summary that I can share with my team so that any new members joining can quickly get up to speed on the work I've already done without duplicating effort. All of the prompts are saved to a stateful session, and the entire investigation is shared with all the context, including our pin board.

- And every phase of the investigation was sped up using all the grounding data that was pulled from quite a few different connected services for more context. That said, though, in this case, Lynne actually called the support desk, which triggered your investigation. So what would happen if this was maybe part of an incident?

- Well, that's actually a huge part of Security Copilot. We've also built integrated experiences across different Microsoft admin portals to help save you time, and this goes beyond security analyst roles too. First, and to answer your question, in Microsoft Defender, I have the incident number from before open here. In the Security Copilot sidebar, each incident automatically gets a generated summary, and many of the standalone capabilities I showed before can be done in context. As you investigate alerts, it can analyze scripts and commands in place, like this one. And in Advanced Hunting, you can use natural language to author KQL queries with Security Copilot.

Then, beyond Microsoft Defender, let me give you a quick look at other embedded Copilot experiences that we have. For endpoint admins, Security Copilot in Microsoft Intune helps simplify policy management, so you can generate policies using natural language prompts, find out more about settings, options, and their impacts, and pull up critical details about managed devices. If you're an identity admin, Security Copilot in Microsoft Entra will let you use natural language to ask about users, groups, sign-ins, and permissions to instantaneously get a risk summary, as well as steps to remediate and guidance for each identity at risk. And in ID Governance, it can help you create a lifecycle workflow to streamline the process and automate common tasks. And one more: here's the experience for data security admins in Microsoft Purview, where Security Copilot with data loss prevention alerts will quickly generate insights about data and file activities. And when used with Insider Risk Management alert summaries, it finds details about high-risk users and the related data exfiltration activities to help you respond fast. And there are more embedded experiences to look forward to. Security Copilot will also be integrated with Microsoft Defender for Cloud and other plugins soon.

- So you can stay in the context, then, of the tools that you use every day. So for anyone who's watching right now looking to get started, what do you recommend?

- If you're looking at options to use generative AI with your security practices, join our early access program at aka.ms/SecurityCopilot, and you can even help shape what Security Copilot can do.

- Thanks so much, Ryan, for showing us what Security Copilot can do to help us investigate and respond to incidents using generative AI. Of course, to stay up to date with all the latest tech, be sure to subscribe to Mechanics, and thanks for watching.

(music)

Related Tags
AI Assistant, Cybersecurity, Generative AI, Microsoft, Security Incidents, Threat Intelligence, Endpoint Management, Identity Access, Data Security, Incident Response, Natural Language, Security Tools, Cybersecurity Expertise, Real-time Analysis, Automation, Security Data, Plugin Integration, Risk Management, Executive Summary, Tech Innovation, Azure Services, Security Analyst, Early Access