AI Security Fireside Series: Trellix's Generative AI Transformation
Summary
TLDRIn this video, the host interviews Martin, the CTO of Cloud and AI at Trellix, discussing the risks and benefits of AI in security. Martin explains the challenges of implementing AI, particularly in understanding and controlling AI-generated outputs. He emphasizes the importance of AI in automating tasks, allowing humans to focus on more critical activities. The conversation also covers the debate between open-source and closed-source AI models, AI security concerns, and best practices for organizations adopting AI technologies. Martin advises on the need for documenting AI usage and implementing best practices for effective security management.
Takeaways
- đ§ AI Risks: Organizations face new risks with AI, particularly around application security and the opaque nature of AI inputs and outputs, which makes it hard to apply traditional security controls.
- đź AI Transparency: Generative AI can be challenging to understand due to its lack of a structured format, making it difficult to determine the reasoning behind its outputs.
- đĄïž AI and Security: AI can automate security tasks, allowing humans to focus on higher-level tasks like building threat models and engaging with business units.
- đ€ AI in Action: Trellix's TRX Wise leverages AI to read machine-level information and make security decisions, which is a new capability made possible by the maturation of generative AI.
- đ AI Maturity: The maturity of AI has reached a point where it can understand and identify specific security threats, such as password spray attacks.
- đ€ Human-AI Collaboration: AI can triage outputs from security systems, allowing for more efficient human involvement in the decision-making process.
- đ Open Source vs. Closed Source: The choice between open source and closed source AI models often comes down to the level of detail in the decision-making process and the ability to explain those decisions.
- đ Security of AI Models: The security of AI models is a critical concern, with the need to understand and control what data the models have access to and how they use it.
- đ Documentation: Documenting AI usage across an organization is essential for understanding where AI is being applied and ensuring security best practices are followed.
- đ Prompt Injection: A major vulnerability in AI systems can occur through 'prompt injection,' where untrusted input leads to untrusted output.
- đïž AI Governance: Establishing an AI Center of Excellence can provide guidelines and best practices for secure AI usage within an organization.
Q & A
What is the primary concern when adopting AI in terms of security risks?
-The primary concern is the opaque layer AI creates between input and output, making it difficult to understand what is coming in and going out, which complicates the implementation of security policies.
How does generative AI differ from structured languages like SQL in terms of security?
-Generative AI does not fit a particular format, making it challenging to apply traditional security controls and understand the prompts, unlike structured languages like SQL which are easier to evaluate for format compliance.
What is the role of AI in transforming an organization's operations as discussed in the script?
-AI is enabling machines to handle tasks that are machine-level, allowing humans to focus on tasks that require human-level understanding and intervention, thus automating processes and improving efficiency.
Can you explain the concept of 'AI reading Machine-level information' as mentioned in the script?
-This refers to AI's ability to process and understand raw data or information at a level that was traditionally only interpretable by machines, and then make decisions based on that data, which was not possible before the maturity of generative AI.
What is TRX Wise and how does it utilize AI?
-TRX Wise is a product launched by Trellix that incorporates AI to read and analyze machine-level information, allowing it to make decisions based on the data it's given, such as identifying security threats like a password spray attack.
How does AI help in automating tasks that were previously done by humans?
-AI can take over tasks such as anomaly detection and triage the output, identifying whether a human needs to intervene or if it can take action based on learned responses, thus reducing the manual workload for humans.
What is the significance of open source versus closed source AI models in the context of security?
-Open source models are smaller and less descriptive in explaining their decisions, while closed source models provide better descriptions but are larger. The choice between them may depend on the need for transparency in decision-making versus efficiency and cost.
Why is it important for organizations to document their AI usage?
-Documenting AI usage helps in understanding where and how AI is being used across different projects, enabling the organization to implement best practices and security measures effectively without restricting productivity.
How can AI security be compromised if not implemented correctly?
-AI security can be compromised through vulnerabilities such as 'prompt injection,' where untrusted input leads to untrusted output, highlighting the importance of proper implementation and understanding of security controls.
What is the recommended approach for a leader in an organization looking to adopt AI?
-Leaders should start by itemizing and documenting AI usage across the organization, then implement best practices for security without restricting the use of AI, possibly through an AI Center of Excellence to guide these practices.
What is the 'prompt injection' vulnerability mentioned in the script and why is it serious?
-Prompt injection is a vulnerability where an AI system considers untrusted input, such as content from an email, as part of its prompt, leading to untrusted outputs. It is serious because it can be exploited to manipulate AI systems into making insecure decisions.
Outlines
đ€ AI Risks and Security Challenges
In this introductory paragraph, the host welcomes Martin, the CTO of cloud and AI at trellix, to discuss AI and security. Martin emphasizes the challenges of AI risk, drawing parallels to application security and the opaque nature of AI inputs and outputs. He highlights the difficulty in applying traditional security controls to generative AI, which doesn't fit a structured format, making it hard to implement policies and understand the AI's decision-making process. The conversation sets the stage for a deeper dive into AI's impact on security and the measures organizations should take to mitigate risks.
đĄïž Balancing AI Benefits with Security Concerns
The second paragraph delves into the benefits AI brings to trellix, particularly in the realm of security. Martin explains how AI has matured to a point where it can understand and respond to security threats, such as password spray attacks, by analyzing machine-level information. This advancement has led to the automation of tasks previously performed by humans, allowing them to focus on higher-level tasks. The discussion also touches on the use of AI to build threat models and integrate telemetry data, enhancing the overall security posture of the organization. Martin's insights reveal a strategic approach to leveraging AI while acknowledging the security implications.
đ§ Navigating Open Source vs. Closed Source AI Models
In this segment, the conversation shifts to the debate between open source and closed source AI models. Martin shares insights from trellix's experience, noting that while open source models are smaller and can make similar decisions, they lack the descriptiveness of closed source models in explaining their decisions. This is crucial for customers who need to understand the rationale behind AI decisions. The discussion also addresses the reality that threat actors have access to both types of models, rendering the debate somewhat academic from a security perspective. Martin suggests that the focus should be on implementing robust security controls, regardless of the model's source.
đš Addressing AI Security and Best Practices
The final paragraph wraps up the discussion with advice on securing AI within organizations. Martin stresses the importance of documenting AI usage across the organization to prevent the development of silos and to ensure best practices are shared and implemented. He advocates for an AI Center of Excellence to guide the adoption of AI securely and effectively. The conversation concludes with a call to action for leaders to embrace AI while maintaining a vigilant approach to security, reflecting a balanced view of the potential and pitfalls of AI integration.
Mindmap
Keywords
đĄAI
đĄAI Risk
đĄGenerative AI
đĄApplication Security
đĄPolicy
đĄMachine Learning Models
đĄTRX Wise
đĄOpen Source Models
đĄClosed Source Models
đĄThreat Actors
đĄAI Security
Highlights
Mark and Hol, CTO of cloud and AI at Trellix, discusses AI and security, emphasizing the opaque layer AI creates between input and output.
AI's generative capabilities can make traditional security controls irrelevant due to its unstructured prompts.
The difficulty in policy implementation due to AI's complexity and the challenges in understanding AI's decision-making process.
AI's transformative impact on organizations, allowing machines to handle tasks more efficiently and freeing up humans for higher-level work.
Introduction of TRX Wise by Trellix, highlighting AI's role in reading machine-level information and making security decisions.
AI's newfound ability to understand security concepts, such as password spray attacks, and its implications for security practices.
The automation of tasks previously done by humans, with AI now capable of triaging outputs from anomaly detection systems.
The importance of trust in AI's decisions and the potential for AI to take autonomous actions based on learned trust levels.
The debate between open-source and off-the-shelf AI models, focusing on their effectiveness and descriptive capabilities.
Open-source models being smaller and less descriptive in their decision-making, while closed-source models provide more explanation.
The academic nature of the open-source versus closed-source debate, considering the inevitability of threat actors' access to AI models.
The importance of copyright and source data discussions in the context of AI model development and usage.
The need for AI security to mirror cloud security practices, emphasizing the importance of account isolation and API access control.
The concept of 'prompt injection' as a serious vulnerability in AI systems, where untrusted input leads to untrusted output.
The necessity for organizations to document AI usage and implement best practices to avoid security mishaps.
Trellix's AI Center of Excellence as a model for providing best practices and ensuring AI security within an organization.
The recommendation for CIOs and CISOs to enable productivity while understanding and documenting AI usage across projects.
Transcripts
[Music]
hi everyone uh today I'm honored to be
joined by Mark and Hol who is the uh CTO
of cloud and AI at trellix Martin
welcome thank you for having me uh so
Martin um you know uh we're we're here
to talk about uh here to talk about AI
talk about uh you know Ai and security
uh you're obviously a leader in the
space and this week you announced uh a
lot of really exciting things I want to
get started by just sort of like you
know um as a kind of like for a frame of
reference just to kind of like get your
take on how do you think about um
basically like Ai and AI risk like what
are the some of the um you know what
what are some of the the the risks that
you know the organizations are kind of
uh facing uh when when they're adopting
Ai and their applications yeah it's you
could write novels on the overall risk
in AI I think the way that I I initially
try to explain it uh is to look at it
like application security and most of
the OA top 10 uh for llms uh are really
about application security the
difference with AI is that it often uh
creates an opaque layer in between the
input in the output traditionally if you
look at something like SQL it's a
structured language and so you can
understand if it fits a format what you
find with generative AI is that the the
prompts themselves don't fit a
particular format so suddenly a lot of
the controls that we had in place are no
longer relevant and it's very difficult
to figure out what is coming in and what
is going out which means that it's hard
to put policy in place yeah um that that
makes a lot of sense and that that's
that's exactly what you know what we're
seeing it's it's harder to for policy
and um and I feel like you know what
we're seeing is also there kind of like
a lot of right like kind of
vulnerabilities right like Associated
like uh you know with AI and those uh
we're kind of seeing um you know now
kind of coupled with the fact that you
know with generative AI uh AI is a is uh
you know we're also getting instructions
to the machine right then then that kind
of uh you know the the risks kind of exp
exponentiate and you know and then you
know it's harder to put these controls
in place so um speaking a little bit
more on the kind of on the on the
benefits now of of AI though um I I know
that you know you guys have been uh
doing really forward things
uh things with AI uh and you you know
your organization specifically so I'm
just curious to hear a little bit more
about how do you feel like AI has been
transforming uh what you do and uh your
organization yeah fundamentally it's
allowing machines to do the machine
stuff and humans to do the human stuff
is the way that I think about it so uh
specifically at trellix with h we just
launched something called TRX wise and
one of the key components to that is the
ability to have ai read the Machine
level information and make decisions
based on what it's been given and we've
never been able to do this before really
it's just in the last year where
generative AI matured enough to
conceptually understands uh what
security is so it'll understand things
like what a password spray attack is so
if you give it information and say is
there anything wrong with this it'll say
yeah that was a password spray attack so
we've never had this before and suddenly
all the Investments that you know we've
been making over the years to collect
data to process that data to even do um
anomaly detection on that data is now
much better because the output from that
anomaly detection can now be triaged by
the generative AI itself which has a
fundamental concept of security things
like password spray attacks so what this
does is automates a ton of the things
that humans used to have to do and it
can say a human needs to look at this or
even as you learn to trust it and see
the output you can say nope I I agree
with this every time I want you to go
and take this action and move on and the
key thing is that lets humans do the
human level stuff so for our customer
that means that their security areas may
be able to talk to key business units
and say what is the worst thing that
could happen in your business unit build
threat models around that find out what
other Telemetry they might need and then
put that into the system so that AI can
continue to work on that that yeah um so
yeah I mean like those I I really like I
really like kind of like the the sort of
framing like kind of the machines do
what what machines do best less humans
like do it and it feels like uh with
something like that it really like Auto
kind of can can really kind of automate
a lot and uh and uh and basically um
sounds like it can really Empower
security teams right like having it like
this this is right sounds amazing um so
I guess kind of um I'm I'm curious then
you guys like develop uh you know like
models yourselves right and uh and you
know and and you're using kind of like
um you know Best in Class like if for
for this kind of development I'm curious
um how do you think about open source
versus uh uh you know versus kind of
like off-the-shelf models like do you
feel like there's is there kind of like
a world where it's like feel like it's
it's mainly open source or only open
source you feel like it's like it's uh
you know kind of like these offthe shelf
models like how how do you guys think
about it yeah to me it's especially
interesting because we test all these
different models to for our use cases to
see which ones work the best and I
talked about uh having AI make security
decisions and what has been most
interesting is that the open source
models tend to be smaller and we get
essentially the same decisions out of
smaller models but they're much less
descriptive in why they made that
decision so if they have fewer
parameters they'll have fewer tools to
explain the decision that they made and
we're taking a lot of data giving it uh
good grounding so it makes good
decisions and that's why we're able to
use most models to make these decisions
but from a customer perspective they
need to understand why the AI made its
decision and so for that we're finding
that the the closed Source models are
larger and we'll have better
descriptions for why they gave that that
information but at the same time the
smaller models can be run in very
different places uh for very different
uh costs and they can make the same
decisions so I think it's going to be
very interesting over the you know next
year or so to see the uh the decision
making that occurs at a corporate level
decide do we want to go this route or we
want to go this route or which
situations call for which model on the
threat actor side since we're in
security I think it's a very interesting
uh discussion around open versus closed
and I think for me it it's an
interesting discussion but it's largely
academic because if there's even one
decent open- Source model that means
that you know threat actors can use that
and so for me it's kind of a binary
question uh are there controls in place
that would prevent a threat actor for
using this for for evil purposes and the
answer is no thread actors are using
this today all the time and that that
ship has sailed that's a done deal and
in interestingly we've also found that
even for the close Source models it's
not particularly difficult to get around
the policies that they have in place so
threat actors can use the close Source
models just like the open source models
so I think from the the debate about
what should be published like that um
and what attackers can use I think that
that's probably a moot point right now I
think the larger question around open
versus close is probably more around
copyrights and the The Source data
that's used I know there's a number of
lawsuits right now pending on that I
think that is probably going to be the
larger discussion long term uh versus is
it dangerous or not to have these models
open or closed I agree so um yeah I I
really like kind of how you're saying
that you know that it's an academic
discussion because like you know the uh
you know the Bad actors already have
access to uh you know the you know
basically state-of-the-art models and
they can use them we could just you know
we're better off like kind of assuming
that that is the case right and now kind
of like building right protection you
know given that um you know we're you
know we're in kind of like a similar
boat where you know we're constantly
kind of in debate about like okay um you
know there new new attack technique new
new new new method right new algorithmic
approach for like you know attacking you
know models like do we do we let the
world know do we you know like uh we
publish it or not and and I feel like
we're you know a lot of times we're kind
of uh we we feel the same right it's uh
you know we we need to assume that like
uh the threat actors like have access to
these kinds of uh you know techniques
and we're about you know better off and
uh you know making them you know being
open about it so the larger Community
can you know can use it and enjoy and
kind of learn how to how to defend
itself you know from you know from that
um so with that right and and given that
you guys are kind of um well aware of
like kind of threat actors like in the
space how does your team think about AI
security you know so obviously you guys
are using you know uh you know you guys
you're using cfdr models you're doing
fine tuning uh you know model Integrity
is important for you you know there's
there's a lot of like you're accessing a
lot of data so how do you you know how
do you think about AI security so at a
high level I think of it similar to how
I think about Cloud security and that
one of the most important firewalls you
have in Cloud security is an account ID
as then what's running in one account
should be partitioned from what's
running in another account and there you
know organizations have many accounts
for one for each project Etc uh
depending on your cloud provider what
what you means account and I think some
of that partitioning can apply in the AI
space as well and what I mean by that is
when you're saying I'm going to execute
this prompt there needs to be full
understanding of what that llm has
access to through apis at that time and
whether or not it's authorized so some
of that is simple arback but as I
mentioned earlier what I find is that
the the implementation of the llm often
confuses architecture teams and
development teams and suddenly they
forget their application security 101
and they they forget that you need to
have this authenticated uh control that
you can't trust the input that's coming
in um recently one of the the major
office providers has had a very serious
vulnerability where anything that would
come in over an email if you ask a
question of that email it would consider
the content of the email to be part of
the prompt so it was completely
untrusted input and that meant you got
completely untrusted output so this is a
serious vulnerability so kind like
prompt
injection prompt injection is what it
ends up as and that is one of the worst
things that can possibly happen so this
is a big deal people need to understand
that um as these kinds of uses of AI
come out different organizations have
different levels of maturity and
understanding how they can Implement
these things and if they uh mix their
controls they can cause serious
vulnerabilities amazing yeah could could
not agree more um yeah I I wasn't aware
of that uh that that case I think that's
a that's a really interesting one but
huge
vulnerability um so I guess kind of U
maybe just to just to conclude um you
know uh I'm sure there you know there
are a lot of there are a lot of people
watching I think that there there are a
lot of people who are um you know sort
of thinking about uh you know securing
AI in their own organization uh and you
know trying to like as on the one hand
they're kind of like racing to you know
uh you know to kind of like a Ai and and
kind of like you know um and transform
you know the the organization like into
like kind of an AI first organization
but at the same time right um they're
they're aware of the you know all the
you know the potential vulnerabilities
uh the various like security
issues
um what would you recommend to uh in you
know someone who's like leading an
organization someone who's maybe
responsible for the security of the
organization who's now looking to to
like uh incorporate you know adopt AI so
the at the very beginning you have to at
least itemize what AI is being used so
one of the worst things that can happen
is different places using uh different
AIS in different ways and not talking to
each other and not finding out best
practices and things that you need to do
for security for it so if you're a CIO
or a ciso one of the top things you need
to do is not restrict the usage of it
but at least document the usage of it if
you try to restrict it you'll get Shadow
it people will work around it because
they're trying to be productive so you
want to enable that productivity but you
need to also understand where all these
things are what projects are using it
and then Implement best practices so at
trellix we have an AI Center of
Excellence that we use to provide those
best practices that makes a lot of sense
um yeah I think uh and we're we're
seeing that uh we're seeing that as well
I feel like kind of policy first right
itemizing kind of what you know what AI
uh is used for and kind of like the
access that it has uh and then you know
following policy then uh that's that's
the only way where you can like actually
figure out like what is the right like
implementation of security practices
right for uh so yeah I think that's
that's very good advice well Martin
thank you so much for you know uh for
taking the time to to chat um really
appreciate it and uh yeah very very you
know excited and looking forward to
seeing like what you guys continue on
doing at Relic it's really you know
mind-blowing thank you it's been great
thanks for having me thank you
Voir Plus de Vidéos Connexes
Andrew Ng on AI's Potential Effect on the Labor Force | WSJ
Code Interpreter.... but OPEN SOURCE? Open Interpreter's Mike Bird on OS Projects, Mindset + More
RCM Revenue Cycle Management Medical AI LLM Case Studies
BREAKING: LLaMA 405b is here! Open-source is now FRONTIER!
Come installare Llama 3.1: un ChatGPT GRATIS SENZA LIMITI
Why Tech Leaders want to build AI "Superintelligence": Aspirational or Creepy and Cultish?
5.0 / 5 (0 votes)