AI Security Fireside Series: Trellix's Generative AI Transformation

Robust Intelligence
5 Jun 202413:29

Summary

TLDRIn this video, the host interviews Martin, the CTO of Cloud and AI at Trellix, discussing the risks and benefits of AI in security. Martin explains the challenges of implementing AI, particularly in understanding and controlling AI-generated outputs. He emphasizes the importance of AI in automating tasks, allowing humans to focus on more critical activities. The conversation also covers the debate between open-source and closed-source AI models, AI security concerns, and best practices for organizations adopting AI technologies. Martin advises on the need for documenting AI usage and implementing best practices for effective security management.

Takeaways

  • 🧐 AI Risks: Organizations face new risks with AI, particularly around application security and the opaque nature of AI inputs and outputs, which makes it hard to apply traditional security controls.
  • 🔮 AI Transparency: Generative AI can be challenging to understand due to its lack of a structured format, making it difficult to determine the reasoning behind its outputs.
  • 🛡️ AI and Security: AI can automate security tasks, allowing humans to focus on higher-level tasks like building threat models and engaging with business units.
  • 🤖 AI in Action: Trellix's TRX Wise leverages AI to read machine-level information and make security decisions, which is a new capability made possible by the maturation of generative AI.
  • 🚀 AI Maturity: The maturity of AI has reached a point where it can understand and identify specific security threats, such as password spray attacks.
  • 🤝 Human-AI Collaboration: AI can triage outputs from security systems, allowing for more efficient human involvement in the decision-making process.
  • 🏭 Open Source vs. Closed Source: The choice between open source and closed source AI models often comes down to the level of detail in the decision-making process and the ability to explain those decisions.
  • 🔒 Security of AI Models: The security of AI models is a critical concern, with the need to understand and control what data the models have access to and how they use it.
  • 📜 Documentation: Documenting AI usage across an organization is essential for understanding where AI is being applied and ensuring security best practices are followed.
  • 🛑 Prompt Injection: A major vulnerability in AI systems can occur through 'prompt injection,' where untrusted input leads to untrusted output.
  • 🏛️ AI Governance: Establishing an AI Center of Excellence can provide guidelines and best practices for secure AI usage within an organization.

Q & A

  • What is the primary concern when adopting AI in terms of security risks?

    -The primary concern is the opaque layer AI creates between input and output, making it difficult to understand what is coming in and going out, which complicates the implementation of security policies.

  • How does generative AI differ from structured languages like SQL in terms of security?

    -Generative AI does not fit a particular format, making it challenging to apply traditional security controls and understand the prompts, unlike structured languages like SQL which are easier to evaluate for format compliance.

  • What is the role of AI in transforming an organization's operations as discussed in the script?

    -AI is enabling machines to handle tasks that are machine-level, allowing humans to focus on tasks that require human-level understanding and intervention, thus automating processes and improving efficiency.

  • Can you explain the concept of 'AI reading Machine-level information' as mentioned in the script?

    -This refers to AI's ability to process and understand raw data or information at a level that was traditionally only interpretable by machines, and then make decisions based on that data, which was not possible before the maturity of generative AI.

  • What is TRX Wise and how does it utilize AI?

    -TRX Wise is a product launched by Trellix that incorporates AI to read and analyze machine-level information, allowing it to make decisions based on the data it's given, such as identifying security threats like a password spray attack.

  • How does AI help in automating tasks that were previously done by humans?

    -AI can take over tasks such as anomaly detection and triage the output, identifying whether a human needs to intervene or if it can take action based on learned responses, thus reducing the manual workload for humans.

  • What is the significance of open source versus closed source AI models in the context of security?

    -Open source models are smaller and less descriptive in explaining their decisions, while closed source models provide better descriptions but are larger. The choice between them may depend on the need for transparency in decision-making versus efficiency and cost.

  • Why is it important for organizations to document their AI usage?

    -Documenting AI usage helps in understanding where and how AI is being used across different projects, enabling the organization to implement best practices and security measures effectively without restricting productivity.

  • How can AI security be compromised if not implemented correctly?

    -AI security can be compromised through vulnerabilities such as 'prompt injection,' where untrusted input leads to untrusted output, highlighting the importance of proper implementation and understanding of security controls.

  • What is the recommended approach for a leader in an organization looking to adopt AI?

    -Leaders should start by itemizing and documenting AI usage across the organization, then implement best practices for security without restricting the use of AI, possibly through an AI Center of Excellence to guide these practices.

  • What is the 'prompt injection' vulnerability mentioned in the script and why is it serious?

    -Prompt injection is a vulnerability where an AI system considers untrusted input, such as content from an email, as part of its prompt, leading to untrusted outputs. It is serious because it can be exploited to manipulate AI systems into making insecure decisions.

Outlines

00:00

🤖 AI Risks and Security Challenges

In this introductory paragraph, the host welcomes Martin, the CTO of cloud and AI at trellix, to discuss AI and security. Martin emphasizes the challenges of AI risk, drawing parallels to application security and the opaque nature of AI inputs and outputs. He highlights the difficulty in applying traditional security controls to generative AI, which doesn't fit a structured format, making it hard to implement policies and understand the AI's decision-making process. The conversation sets the stage for a deeper dive into AI's impact on security and the measures organizations should take to mitigate risks.

05:01

🛡️ Balancing AI Benefits with Security Concerns

The second paragraph delves into the benefits AI brings to trellix, particularly in the realm of security. Martin explains how AI has matured to a point where it can understand and respond to security threats, such as password spray attacks, by analyzing machine-level information. This advancement has led to the automation of tasks previously performed by humans, allowing them to focus on higher-level tasks. The discussion also touches on the use of AI to build threat models and integrate telemetry data, enhancing the overall security posture of the organization. Martin's insights reveal a strategic approach to leveraging AI while acknowledging the security implications.

10:02

🔧 Navigating Open Source vs. Closed Source AI Models

In this segment, the conversation shifts to the debate between open source and closed source AI models. Martin shares insights from trellix's experience, noting that while open source models are smaller and can make similar decisions, they lack the descriptiveness of closed source models in explaining their decisions. This is crucial for customers who need to understand the rationale behind AI decisions. The discussion also addresses the reality that threat actors have access to both types of models, rendering the debate somewhat academic from a security perspective. Martin suggests that the focus should be on implementing robust security controls, regardless of the model's source.

🚨 Addressing AI Security and Best Practices

The final paragraph wraps up the discussion with advice on securing AI within organizations. Martin stresses the importance of documenting AI usage across the organization to prevent the development of silos and to ensure best practices are shared and implemented. He advocates for an AI Center of Excellence to guide the adoption of AI securely and effectively. The conversation concludes with a call to action for leaders to embrace AI while maintaining a vigilant approach to security, reflecting a balanced view of the potential and pitfalls of AI integration.

Mindmap

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video's context, AI is central to the discussion of security risks and benefits within technology applications. An example from the script is the mention of generative AI creating an opaque layer between input and output, challenging traditional application security measures.

💡AI Risk

AI Risk encompasses the potential vulnerabilities and threats that arise from the adoption and use of AI technologies. The script discusses how organizations face challenges in understanding and managing these risks, particularly with generative AI not fitting traditional security protocols, making it difficult to monitor and control data flow.

💡Generative AI

Generative AI is a subset of AI that can generate new content, such as text, images, or music, that is not simply a replication of existing data. The script mentions generative AI's role in creating unpredictable prompts, which complicates the implementation of traditional security controls and policies.

💡Application Security

Application Security involves measures taken to protect software applications from external threats and vulnerabilities. The script likens the risks associated with AI to those of application security, emphasizing the new challenges AI brings to this field, such as the opaque nature of AI decision-making processes.

💡Policy

In the context of the video, policy refers to the set of rules and procedures put in place to govern the use of AI and ensure security. The difficulty of applying policy to AI, especially generative AI, is highlighted, due to the non-traditional formats and unpredictability of AI interactions.

💡Machine Learning Models

Machine Learning Models are algorithms that allow computers to learn from and make decisions or predictions based on data. The script discusses the use of these models in AI for security purposes, such as detecting password spray attacks, and the importance of understanding why these models make certain decisions.

💡TRX Wise

TRX Wise is a product mentioned in the script, launched by Trellix, which incorporates AI to read machine-level information and make security decisions. It exemplifies how AI can transform an organization by automating tasks previously done by humans, allowing them to focus on higher-level tasks.

💡Open Source Models

Open Source Models are AI models whose underlying source code is publicly accessible, allowing anyone to view, modify, and use the model. The script contrasts these with closed-source models, discussing the trade-offs between the smaller, less descriptive open source models and the larger, more descriptive closed source models in terms of security decision-making and transparency.

💡Closed Source Models

Closed Source Models, also known as proprietary models, are AI models with source code that is not publicly disclosed. The script points out that while these models may provide better explanations for their decisions, they can still be accessed and potentially misused by threat actors, similar to open source models.

💡Threat Actors

Threat Actors are individuals or groups that pose a threat to the security of a system or organization, often by attempting to exploit vulnerabilities. The script acknowledges that threat actors are already using AI models, both open and closed source, emphasizing the need for robust security measures regardless of the model's openness.

💡AI Security

AI Security refers to the measures taken to protect AI systems and the data they process from unauthorized access, use, or attack. The script discusses the importance of understanding AI's access to data and APIs, and the need for strict controls to prevent vulnerabilities such as prompt injection, which can lead to serious security breaches.

Highlights

Mark and Hol, CTO of cloud and AI at Trellix, discusses AI and security, emphasizing the opaque layer AI creates between input and output.

AI's generative capabilities can make traditional security controls irrelevant due to its unstructured prompts.

The difficulty in policy implementation due to AI's complexity and the challenges in understanding AI's decision-making process.

AI's transformative impact on organizations, allowing machines to handle tasks more efficiently and freeing up humans for higher-level work.

Introduction of TRX Wise by Trellix, highlighting AI's role in reading machine-level information and making security decisions.

AI's newfound ability to understand security concepts, such as password spray attacks, and its implications for security practices.

The automation of tasks previously done by humans, with AI now capable of triaging outputs from anomaly detection systems.

The importance of trust in AI's decisions and the potential for AI to take autonomous actions based on learned trust levels.

The debate between open-source and off-the-shelf AI models, focusing on their effectiveness and descriptive capabilities.

Open-source models being smaller and less descriptive in their decision-making, while closed-source models provide more explanation.

The academic nature of the open-source versus closed-source debate, considering the inevitability of threat actors' access to AI models.

The importance of copyright and source data discussions in the context of AI model development and usage.

The need for AI security to mirror cloud security practices, emphasizing the importance of account isolation and API access control.

The concept of 'prompt injection' as a serious vulnerability in AI systems, where untrusted input leads to untrusted output.

The necessity for organizations to document AI usage and implement best practices to avoid security mishaps.

Trellix's AI Center of Excellence as a model for providing best practices and ensuring AI security within an organization.

The recommendation for CIOs and CISOs to enable productivity while understanding and documenting AI usage across projects.

Transcripts

play00:00

[Music]

play00:04

hi everyone uh today I'm honored to be

play00:07

joined by Mark and Hol who is the uh CTO

play00:10

of cloud and AI at trellix Martin

play00:13

welcome thank you for having me uh so

play00:16

Martin um you know uh we're we're here

play00:18

to talk about uh here to talk about AI

play00:21

talk about uh you know Ai and security

play00:23

uh you're obviously a leader in the

play00:24

space and this week you announced uh a

play00:26

lot of really exciting things I want to

play00:28

get started by just sort of like you

play00:30

know um as a kind of like for a frame of

play00:33

reference just to kind of like get your

play00:35

take on how do you think about um

play00:38

basically like Ai and AI risk like what

play00:41

are the some of the um you know what

play00:45

what are some of the the the risks that

play00:47

you know the organizations are kind of

play00:49

uh facing uh when when they're adopting

play00:52

Ai and their applications yeah it's you

play00:54

could write novels on the overall risk

play00:56

in AI I think the way that I I initially

play01:00

try to explain it uh is to look at it

play01:03

like application security and most of

play01:05

the OA top 10 uh for llms uh are really

play01:09

about application security the

play01:11

difference with AI is that it often uh

play01:13

creates an opaque layer in between the

play01:15

input in the output traditionally if you

play01:18

look at something like SQL it's a

play01:20

structured language and so you can

play01:22

understand if it fits a format what you

play01:24

find with generative AI is that the the

play01:26

prompts themselves don't fit a

play01:28

particular format so suddenly a lot of

play01:30

the controls that we had in place are no

play01:32

longer relevant and it's very difficult

play01:34

to figure out what is coming in and what

play01:36

is going out which means that it's hard

play01:38

to put policy in place yeah um that that

play01:41

makes a lot of sense and that that's

play01:43

that's exactly what you know what we're

play01:44

seeing it's it's harder to for policy

play01:47

and um and I feel like you know what

play01:49

we're seeing is also there kind of like

play01:50

a lot of right like kind of

play01:52

vulnerabilities right like Associated

play01:54

like uh you know with AI and those uh

play01:57

we're kind of seeing um you know now

play02:00

kind of coupled with the fact that you

play02:02

know with generative AI uh AI is a is uh

play02:06

you know we're also getting instructions

play02:07

to the machine right then then that kind

play02:09

of uh you know the the risks kind of exp

play02:12

exponentiate and you know and then you

play02:14

know it's harder to put these controls

play02:15

in place so um speaking a little bit

play02:19

more on the kind of on the on the

play02:21

benefits now of of AI though um I I know

play02:25

that you know you guys have been uh

play02:28

doing really forward things

play02:30

uh things with AI uh and you you know

play02:33

your organization specifically so I'm

play02:35

just curious to hear a little bit more

play02:36

about how do you feel like AI has been

play02:39

transforming uh what you do and uh your

play02:42

organization yeah fundamentally it's

play02:44

allowing machines to do the machine

play02:45

stuff and humans to do the human stuff

play02:47

is the way that I think about it so uh

play02:49

specifically at trellix with h we just

play02:51

launched something called TRX wise and

play02:53

one of the key components to that is the

play02:54

ability to have ai read the Machine

play02:57

level information and make decisions

play02:59

based on what it's been given and we've

play03:01

never been able to do this before really

play03:03

it's just in the last year where

play03:04

generative AI matured enough to

play03:07

conceptually understands uh what

play03:09

security is so it'll understand things

play03:11

like what a password spray attack is so

play03:13

if you give it information and say is

play03:15

there anything wrong with this it'll say

play03:17

yeah that was a password spray attack so

play03:19

we've never had this before and suddenly

play03:21

all the Investments that you know we've

play03:23

been making over the years to collect

play03:25

data to process that data to even do um

play03:27

anomaly detection on that data is now

play03:30

much better because the output from that

play03:32

anomaly detection can now be triaged by

play03:34

the generative AI itself which has a

play03:36

fundamental concept of security things

play03:38

like password spray attacks so what this

play03:41

does is automates a ton of the things

play03:43

that humans used to have to do and it

play03:44

can say a human needs to look at this or

play03:47

even as you learn to trust it and see

play03:49

the output you can say nope I I agree

play03:52

with this every time I want you to go

play03:53

and take this action and move on and the

play03:56

key thing is that lets humans do the

play03:57

human level stuff so for our customer

play04:00

that means that their security areas may

play04:01

be able to talk to key business units

play04:03

and say what is the worst thing that

play04:05

could happen in your business unit build

play04:06

threat models around that find out what

play04:08

other Telemetry they might need and then

play04:10

put that into the system so that AI can

play04:11

continue to work on that that yeah um so

play04:16

yeah I mean like those I I really like I

play04:18

really like kind of like the the sort of

play04:20

framing like kind of the machines do

play04:21

what what machines do best less humans

play04:23

like do it and it feels like uh with

play04:25

something like that it really like Auto

play04:27

kind of can can really kind of automate

play04:30

a lot and uh and uh and basically um

play04:34

sounds like it can really Empower

play04:35

security teams right like having it like

play04:37

this this is right sounds amazing um so

play04:41

I guess kind of um I'm I'm curious then

play04:44

you guys like develop uh you know like

play04:47

models yourselves right and uh and you

play04:50

know and and you're using kind of like

play04:52

um you know Best in Class like if for

play04:56

for this kind of development I'm curious

play04:58

um how do you think about open source

play05:00

versus uh uh you know versus kind of

play05:04

like off-the-shelf models like do you

play05:06

feel like there's is there kind of like

play05:08

a world where it's like feel like it's

play05:10

it's mainly open source or only open

play05:11

source you feel like it's like it's uh

play05:14

you know kind of like these offthe shelf

play05:15

models like how how do you guys think

play05:16

about it yeah to me it's especially

play05:18

interesting because we test all these

play05:19

different models to for our use cases to

play05:21

see which ones work the best and I

play05:24

talked about uh having AI make security

play05:26

decisions and what has been most

play05:28

interesting is that the open source

play05:29

models tend to be smaller and we get

play05:32

essentially the same decisions out of

play05:34

smaller models but they're much less

play05:36

descriptive in why they made that

play05:37

decision so if they have fewer

play05:39

parameters they'll have fewer tools to

play05:41

explain the decision that they made and

play05:44

we're taking a lot of data giving it uh

play05:47

good grounding so it makes good

play05:48

decisions and that's why we're able to

play05:50

use most models to make these decisions

play05:52

but from a customer perspective they

play05:54

need to understand why the AI made its

play05:57

decision and so for that we're finding

play05:59

that the the closed Source models are

play06:01

larger and we'll have better

play06:04

descriptions for why they gave that that

play06:06

information but at the same time the

play06:08

smaller models can be run in very

play06:09

different places uh for very different

play06:11

uh costs and they can make the same

play06:13

decisions so I think it's going to be

play06:14

very interesting over the you know next

play06:16

year or so to see the uh the decision

play06:20

making that occurs at a corporate level

play06:22

decide do we want to go this route or we

play06:24

want to go this route or which

play06:25

situations call for which model on the

play06:27

threat actor side since we're in

play06:28

security I think it's a very interesting

play06:31

uh discussion around open versus closed

play06:33

and I think for me it it's an

play06:36

interesting discussion but it's largely

play06:38

academic because if there's even one

play06:40

decent open- Source model that means

play06:42

that you know threat actors can use that

play06:44

and so for me it's kind of a binary

play06:46

question uh are there controls in place

play06:48

that would prevent a threat actor for

play06:50

using this for for evil purposes and the

play06:52

answer is no thread actors are using

play06:54

this today all the time and that that

play06:56

ship has sailed that's a done deal and

play06:59

in interestingly we've also found that

play07:01

even for the close Source models it's

play07:02

not particularly difficult to get around

play07:05

the policies that they have in place so

play07:07

threat actors can use the close Source

play07:08

models just like the open source models

play07:10

so I think from the the debate about

play07:13

what should be published like that um

play07:15

and what attackers can use I think that

play07:17

that's probably a moot point right now I

play07:20

think the larger question around open

play07:22

versus close is probably more around

play07:23

copyrights and the The Source data

play07:25

that's used I know there's a number of

play07:26

lawsuits right now pending on that I

play07:28

think that is probably going to be the

play07:30

larger discussion long term uh versus is

play07:33

it dangerous or not to have these models

play07:35

open or closed I agree so um yeah I I

play07:38

really like kind of how you're saying

play07:39

that you know that it's an academic

play07:41

discussion because like you know the uh

play07:44

you know the Bad actors already have

play07:46

access to uh you know the you know

play07:48

basically state-of-the-art models and

play07:50

they can use them we could just you know

play07:51

we're better off like kind of assuming

play07:52

that that is the case right and now kind

play07:54

of like building right protection you

play07:56

know given that um you know we're you

play07:59

know we're in kind of like a similar

play08:00

boat where you know we're constantly

play08:02

kind of in debate about like okay um you

play08:05

know there new new attack technique new

play08:07

new new new method right new algorithmic

play08:09

approach for like you know attacking you

play08:11

know models like do we do we let the

play08:13

world know do we you know like uh we

play08:15

publish it or not and and I feel like

play08:17

we're you know a lot of times we're kind

play08:19

of uh we we feel the same right it's uh

play08:23

you know we we need to assume that like

play08:25

uh the threat actors like have access to

play08:27

these kinds of uh you know techniques

play08:29

and we're about you know better off and

play08:31

uh you know making them you know being

play08:33

open about it so the larger Community

play08:35

can you know can use it and enjoy and

play08:37

kind of learn how to how to defend

play08:39

itself you know from you know from that

play08:42

um so with that right and and given that

play08:45

you guys are kind of um well aware of

play08:49

like kind of threat actors like in the

play08:50

space how does your team think about AI

play08:53

security you know so obviously you guys

play08:56

are using you know uh you know you guys

play08:59

you're using cfdr models you're doing

play09:01

fine tuning uh you know model Integrity

play09:03

is important for you you know there's

play09:05

there's a lot of like you're accessing a

play09:07

lot of data so how do you you know how

play09:10

do you think about AI security so at a

play09:12

high level I think of it similar to how

play09:14

I think about Cloud security and that

play09:16

one of the most important firewalls you

play09:18

have in Cloud security is an account ID

play09:20

as then what's running in one account

play09:21

should be partitioned from what's

play09:23

running in another account and there you

play09:24

know organizations have many accounts

play09:26

for one for each project Etc uh

play09:28

depending on your cloud provider what

play09:30

what you means account and I think some

play09:32

of that partitioning can apply in the AI

play09:35

space as well and what I mean by that is

play09:37

when you're saying I'm going to execute

play09:39

this prompt there needs to be full

play09:41

understanding of what that llm has

play09:43

access to through apis at that time and

play09:45

whether or not it's authorized so some

play09:47

of that is simple arback but as I

play09:49

mentioned earlier what I find is that

play09:51

the the implementation of the llm often

play09:54

confuses architecture teams and

play09:56

development teams and suddenly they

play09:58

forget their application security 101

play10:00

and they they forget that you need to

play10:02

have this authenticated uh control that

play10:04

you can't trust the input that's coming

play10:05

in um recently one of the the major

play10:08

office providers has had a very serious

play10:10

vulnerability where anything that would

play10:12

come in over an email if you ask a

play10:14

question of that email it would consider

play10:16

the content of the email to be part of

play10:18

the prompt so it was completely

play10:20

untrusted input and that meant you got

play10:23

completely untrusted output so this is a

play10:25

serious vulnerability so kind like

play10:27

prompt

play10:28

injection prompt injection is what it

play10:30

ends up as and that is one of the worst

play10:34

things that can possibly happen so this

play10:36

is a big deal people need to understand

play10:38

that um as these kinds of uses of AI

play10:42

come out different organizations have

play10:44

different levels of maturity and

play10:45

understanding how they can Implement

play10:47

these things and if they uh mix their

play10:49

controls they can cause serious

play10:51

vulnerabilities amazing yeah could could

play10:54

not agree more um yeah I I wasn't aware

play10:58

of that uh that that case I think that's

play11:00

a that's a really interesting one but

play11:02

huge

play11:03

vulnerability um so I guess kind of U

play11:07

maybe just to just to conclude um you

play11:10

know uh I'm sure there you know there

play11:12

are a lot of there are a lot of people

play11:13

watching I think that there there are a

play11:15

lot of people who are um you know sort

play11:17

of thinking about uh you know securing

play11:19

AI in their own organization uh and you

play11:22

know trying to like as on the one hand

play11:24

they're kind of like racing to you know

play11:27

uh you know to kind of like a Ai and and

play11:30

kind of like you know um and transform

play11:33

you know the the organization like into

play11:35

like kind of an AI first organization

play11:37

but at the same time right um they're

play11:40

they're aware of the you know all the

play11:42

you know the potential vulnerabilities

play11:44

uh the various like security

play11:46

issues

play11:48

um what would you recommend to uh in you

play11:53

know someone who's like leading an

play11:54

organization someone who's maybe

play11:55

responsible for the security of the

play11:57

organization who's now looking to to

play12:00

like uh incorporate you know adopt AI so

play12:03

the at the very beginning you have to at

play12:04

least itemize what AI is being used so

play12:06

one of the worst things that can happen

play12:08

is different places using uh different

play12:11

AIS in different ways and not talking to

play12:14

each other and not finding out best

play12:15

practices and things that you need to do

play12:16

for security for it so if you're a CIO

play12:19

or a ciso one of the top things you need

play12:21

to do is not restrict the usage of it

play12:23

but at least document the usage of it if

play12:25

you try to restrict it you'll get Shadow

play12:27

it people will work around it because

play12:29

they're trying to be productive so you

play12:30

want to enable that productivity but you

play12:33

need to also understand where all these

play12:34

things are what projects are using it

play12:36

and then Implement best practices so at

play12:39

trellix we have an AI Center of

play12:40

Excellence that we use to provide those

play12:42

best practices that makes a lot of sense

play12:45

um yeah I think uh and we're we're

play12:48

seeing that uh we're seeing that as well

play12:49

I feel like kind of policy first right

play12:52

itemizing kind of what you know what AI

play12:54

uh is used for and kind of like the

play12:55

access that it has uh and then you know

play12:58

following policy then uh that's that's

play13:00

the only way where you can like actually

play13:01

figure out like what is the right like

play13:03

implementation of security practices

play13:05

right for uh so yeah I think that's

play13:08

that's very good advice well Martin

play13:10

thank you so much for you know uh for

play13:11

taking the time to to chat um really

play13:14

appreciate it and uh yeah very very you

play13:16

know excited and looking forward to

play13:18

seeing like what you guys continue on

play13:19

doing at Relic it's really you know

play13:21

mind-blowing thank you it's been great

play13:22

thanks for having me thank you

Rate This

5.0 / 5 (0 votes)

Related Tags
AI SecurityCTO InsightsAI RisksGenerative AIAI BenefitsTech LeadershipCloud AIAI PoliciesModel IntegrityThreat Actors