24-hour AI conference: The AI in Audit Revolution (US)
Summary
TL;DR: In this discussion, Will Bible, the deputy leader of the Audit, Transformation, Fraud, and Assurance Business, explores the impact of AI on the audit industry since 2015. Together with Brian Crowley and Ryan, he delves into the transformative power of AI, particularly large language models, in enhancing audit services, standardizing workflows, and managing the exponential growth of data. They address the challenges of deploying AI, including risk mitigation strategies, the importance of data quality, and the future skills required in a technology-driven audit profession.
Takeaways
- 😀 Digital transformation has been a key focus for the Audit, Transformation, Fraud, and Assurance Business since 2015, aiming to improve quality through standardized workflows and digitization.
- 🌐 The global deployment of the fully digital platform OMNIA is a significant milestone in the company's digital journey, enabling advanced data utilization and AI integration.
- 🔍 Data is identified as a crucial enabler for AI, and the standardization and structuring of data have been foundational to the deployment of AI capabilities in auditing.
- 📈 The rapid growth of data available to auditors presents challenges that AI is positioned to address, particularly in handling large volumes of transactions and identifying outliers.
- 📝 The script highlights the importance of written language in auditing, with AI capabilities like natural language processing being used to analyze and summarize vast amounts of textual data.
- 🤖 Generative AI is transforming the audit process by acting as a translator between human and machine, enhancing productivity, and democratizing enterprise knowledge.
- 🛠️ AI deployment in auditing involves a multidisciplinary approach, requiring collaboration across data science, IT, risk, legal, and quality to ensure effective and ethical use.
- 🔑 The importance of having a clear mission and value statement for AI is emphasized, to guide its design, development, and use, and to mitigate risks such as reputational damage and regulatory non-compliance.
- 🚀 The potential of generative AI in auditing is vast but still in its early stages, with opportunities for complex information extraction and enhancing human-machine interaction.
- 🛑 The need for a robust framework to evaluate AI performance, including metrics for accuracy, impact, and bias, is discussed, as is the importance of continuous monitoring and validation.
- 🔑 The script suggests that AI will not replace jobs but will change them, requiring a workforce that understands and can effectively utilize AI, emphasizing the need for ongoing upskilling and adaptability.
Q & A
What digital transformation journey has Will Bible's organization been on since 2015?
-Will Bible's organization has been on a digital transformation journey that involves rapidly innovating and deploying digital technologies to address specific pain points in the audit process. The focus has been on standardizing workflows and digitizing with the aim of improving overall quality.
What is the name of the globally deployed, fully digital platform mentioned in the script?
-The globally deployed, fully digital platform mentioned is called OMNIA.
What role does data play in the deployment of AI within Will Bible's organization?
-Data is a key enabler for deploying AI within the organization. The digital platform Omnia is critical for managing data and supporting AI capabilities.
What is Brian Crowley's perspective on the impact of AI on the audit business?
-Brian Crowley believes that the audit business, like others, is being significantly impacted by AI as clients digitize their operations, leading to an exponential growth in available data. This data growth presents a challenge that cannot be managed by human means alone, necessitating the use of AI.
What are some of the AI capabilities that Brian Crowley's group focuses on within audit?
-Brian Crowley's group focuses on AI capabilities such as anomaly detection using unsupervised learning to identify outliers in transactions, and natural language processing to handle written language in transaction descriptions, account names, and other forms of transactional evidence.
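The unsupervised outlier detection described in this answer can be sketched in a few lines. The following is a simplified stand-in, not the firm's actual model: it flags transaction amounts whose robust z-score (based on the median absolute deviation) is extreme. The ledger values and the 3.5 cutoff are illustrative assumptions.

```python
# Simplified stand-in for unsupervised outlier detection over transaction
# amounts: flag anything whose robust z-score (based on the median absolute
# deviation) is extreme. The ledger values and the 3.5 cutoff are invented
# for illustration; a real model would use many features per transaction.
from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Return amounts whose robust z-score exceeds the threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    if mad == 0:
        return []  # no spread at all, nothing stands out
    # 0.6745 rescales MAD so the score is comparable to a standard z-score.
    return [a for a in amounts if abs(0.6745 * (a - med) / mad) > threshold]

# 997 routine postings plus three unusual journal entries.
ledger = [500 + (i % 40) for i in range(997)] + [25_000, -9_000, 40_000]
print(flag_outliers(ledger))  # flags the three extreme postings
```

A production system would score many attributes per transaction (amount, account, timing, counterparty) with a richer model such as an isolation forest, but the flag-and-review pattern is the same.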
How does Ryan view the deployment of AI in the finance and reporting field?
-Ryan observes that AI is being deployed in low-risk areas initially, but its use is quickly expanding into other parts of the organization. He mentions specific AI applications in financial reporting, such as internal control for financial reporting, transaction monitoring, and anomaly detection.
What are some of the risks associated with deploying AI in critical business processes?
-Some of the risks include reputational risk, operational and financial risks, and regulatory compliance risks. There is also the challenge of disparate impact on protected groups and the evolving nature of AI regulation.
What is the trustworthy AI framework that Will Bible refers to?
-The trustworthy AI framework is a set of guidelines and principles that aim to ensure that AI is developed and used in a way that is ethical, transparent, and minimizes risk. It includes having a clear mission statement, value statement, and governance frameworks for AI deployment.
How does Brian Crowley see generative AI impacting the audit process?
-Brian Crowley sees generative AI as a tool that can significantly enhance productivity by acting as a translator between human and machine, making enterprise knowledge more accessible, and facilitating complex information extraction from documents.
What advice does Brian Crowley give for individuals looking to upskill in the area of generative AI?
-Brian Crowley advises individuals to learn about prompting and prompt engineering, which involves effectively interacting with AI models to provide instructions and achieve desired outcomes.
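The prompting patterns discussed in the talk (giving the model a persona, explicitly telling it not to fabricate, and asking it to critique its own draft) can be sketched as plain prompt templates. The wording below is an illustrative assumption, not any specific product's prompts; the strings would be sent to whatever LLM API is in use.

```python
# Illustrative prompt templates for the patterns mentioned in the talk:
# a persona, an instruction not to fabricate, and a self-critique round.
# The exact wording is an assumption for demonstration purposes.

def build_prompt(task: str, context: str) -> str:
    return (
        "You are an experienced financial-statement auditor.\n"  # persona
        "If you are not certain of a fact, say so; do not invent figures.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n"
    )

def build_critique_prompt(task: str, draft: str) -> str:
    # Second round: ask the model to critique and then revise its own answer.
    return (
        f"Here is a draft response to the task '{task}':\n{draft}\n\n"
        "Critique the draft for accuracy and completeness, then produce an "
        "improved response that addresses your own criticisms.\n"
    )

prompt = build_prompt(
    task="Summarize the key controls in this process narrative.",
    context="(excerpt of internal control documentation)",
)
print(prompt)
```

In practice the first prompt's output would be fed back through `build_critique_prompt` for the self-critique pass described above.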
What are some lessons learned from deploying AI models in an enterprise context, according to Brian and Ryan?
-Some lessons learned include the importance of having high-quality data, the necessity of getting all stakeholders on board, the complexity of infrastructure, and the need for a clear mission statement. Additionally, it's crucial to consider where AI fits within the workflow and to be prepared for the uncertainty of AI outputs.
Outlines
🌟 Digital Transformation and AI Integration in Audit Services
Will Bible, the deputy leader of Audit, Transformation, Fraud, and Assurance Business, introduces the company's digital transformation journey since 2015, emphasizing the development and deployment of digital technologies to enhance audit quality. The introduction of the OMNIA platform, a fully digital solution, is highlighted as a key development. Brian Crowley and Ryan, leaders in data science and AI assurance services respectively, discuss the use of AI, including large language models, to transform audit services and the challenges of handling exponential data growth. The conversation underscores the importance of data as an enabler for AI and the company's strategic embrace of technological advancements in audit processes.
📈 Harnessing AI for Auditing: Challenges and Opportunities
Brian Crowley discusses the impact of digitization on the audit business, noting the exponential growth of data and the challenges of managing it through human means alone. He outlines the company's efforts to standardize processes, digitize and structure data, and deploy advanced analytical capabilities, including AI, in response to the data explosion. The conversation delves into the full set of AI capabilities focused within audit, such as anomaly detection using unsupervised learning and natural language processing to accelerate processes involving written language. Ryan shares insights into client deployments of AI in finance and reporting, highlighting the cautious yet evolving approach to integrating AI into critical processes and the importance of risk mitigation strategies.
🛡️ Navigating the Risks and Regulations of AI Deployment
The discussion shifts to the risks associated with AI deployment, including reputational, operational, financial, and regulatory risks. The panelists emphasize the importance of having a clear mission and value statement for AI deployment and adhering to governance frameworks to mitigate these risks. They also touch on the evolving landscape of AI regulation, with specific mention of a New York City law focused on AI. The conversation highlights the necessity for a proactive approach to understanding and managing the risks inherent in AI solutions, as well as the importance of continuous monitoring and validation to ensure the reliable and ethical use of AI.
🚀 Generative AI's Potential in Auditing and the Future of Work
Brian explores the concept of generative AI, its potential to transform the audit process, and the importance of effective prompting and prompt engineering in leveraging these capabilities. He likens generative AI to having a team of well-rounded interns that can be directed to perform tasks, generating meaningful output. The conversation also addresses the democratization of enterprise knowledge through AI and the potential for AI to enhance productivity. Ryan adds his perspective on upskilling, emphasizing the need for creativity in finding new use cases for AI and the importance of understanding the limitations and uncertainties of AI outputs in the workplace.
💡 Lessons Learned in Deploying AI and the Importance of Data
The panelists share their experiences and lessons learned in deploying AI, focusing on the critical role of data quality and accessibility. They discuss the challenges of synthesizing data and the importance of securing and organizing data to effectively develop, test, and pilot AI solutions. The conversation also highlights the need for a collaborative approach involving various stakeholders, including risk, legal, and quality assurance functions, to ensure the successful deployment of AI within an organization.
🔍 Evaluating AI Deployment: Metrics and Safety Considerations
Ryan discusses the complexity of AI infrastructure and the importance of aligning AI deployment with specific use cases and problems. He emphasizes the value of traditional technology alongside AI and the need for a clear understanding of the problems to be solved. The conversation includes a discussion on performance metrics for evaluating AI models, such as predictive accuracy and disparate impact analysis, as well as the broader considerations around the safety and automation of AI systems.
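The evaluation metrics mentioned here can be made concrete. The sketch below computes predictive accuracy and a disparate impact ratio (each group's selection rate relative to the most-favored group; the common four-fifths heuristic flags ratios below 0.8). The sample predictions, group counts, and the 0.8 threshold are illustrative assumptions.

```python
# Two illustrative evaluation checks: predictive accuracy, and a disparate
# impact ratio per group (selection rate relative to the most-favored group).
# The data and the four-fifths (0.8) threshold are assumptions for the sketch.

def accuracy(predictions, labels):
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def disparate_impact_ratio(selected_by_group):
    """selected_by_group: {group: (num_selected, group_size)}."""
    rates = {g: s / n for g, (s, n) in selected_by_group.items()}
    best = max(rates.values())  # selection rate of the most-favored group
    return {g: r / best for g, r in rates.items()}

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 0, 1]
print(f"accuracy = {accuracy(preds, labels):.2f}")  # 6 of 8 correct

ratios = disparate_impact_ratio({"group_a": (40, 100), "group_b": (24, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]
print("groups below the four-fifths threshold:", flagged)
```

Continuous monitoring, as discussed in the session, would recompute such metrics on fresh data and alert when a value crosses its limit.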
🌐 The Future Impact of AI on Jobs and the Workforce
The final paragraph addresses the broader implications of AI on employment and the workforce. While acknowledging the uncertainty of predicting the future job market, the panelists reflect on the historical impact of technological advancements on economic expansion and job transformation. They suggest that AI is likely to deepen the role of humans in the workplace rather than replace jobs, and emphasize the importance of adapting to new skills and understanding the tools of today and tomorrow.
Keywords
💡Digital Transformation
💡Audit
💡Artificial Intelligence (AI)
💡Data Science
💡Natural Language Processing (NLP)
💡Anomaly Detection
💡Risk Management
💡Generative AI
💡Prompt Engineering
💡AI Lifecycle
💡Disparate Impact
Highlights
Deputy leader of Audit, Transformation, Fraud, and Assurance Business, Will Bible, discusses the digital transformation journey since 2015.
Introduction of the globally deployed, fully digital platform called OMNIA.
Data is identified as a key enabler for deploying AI within the audit process.
Brian Crowley from the data science group talks about using AI to transform audit services, including work with large language models.
Ryan discusses AI and algorithmic assurance services and client insights.
The impact of AI on the audit business and the exponential growth of data available to auditors.
Standardizing processes and deploying advanced analytics, including AI, in response to data explosion.
Anomaly detection capabilities using unsupervised learning to identify outliers in transactions.
Natural language processing to accelerate processes involving written language in audits.
Use of AI to extract information from unstructured written data like contracts and invoices.
Omni Suggestions to analyze internal control documentation and provide auditors with summaries.
Adoption of AI in low-risk areas and its gradual deployment into more critical organizational processes.
Risk management strategies when deploying AI solutions in critical processes.
The importance of having a trustworthy AI framework and general mission statement for AI deployment.
Generative AI's potential to enhance productivity and democratize enterprise knowledge.
The necessity of upskilling in prompt engineering and understanding generative AI capabilities.
Infrastructure complexity and the need for a collaborative approach when deploying AI.
Evaluating AI models through performance metrics and considering the impact on protected groups.
Reflections on the future impact of AI on jobs, emphasizing the expansion of human capabilities rather than replacement.
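Several highlights above concern NLP summarization of long-form control documentation. As a rough illustration of extractive summarization (the actual Omni Suggestions capability is not public, so this word-frequency sentence scorer is only a stand-in):

```python
# Stand-in sketch of extractive summarization: score each sentence by the
# average document-wide frequency of its words and keep the top scorers.
# This is only an illustration of the idea, not the production capability.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("The controller reviews the reconciliation each month. "
       "The review is evidenced by a sign-off. "
       "Unreconciled items over a set threshold are escalated.")
print(summarize(doc, max_sentences=1))
```

A real system would use a language model rather than word counts, but the input (long-form documentation) and output (a short summary for the auditor) are as described in the highlights.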
Transcripts
So my name is Will Bible.
I'm the deputy leader of Audit
Transformation, Fraud
and Assurance Business.
And to give you
a little bit of background,
we have been on a digital transformation
journey since around 2015.
We went through a process of rapidly
innovating and deploying
digital technologies to solve
specific pain points in the audit process
and try to replace manual steps
really with a focus
on standardizing
our workflows and digitizing.
All of those things were in the pursuit
of improving overall quality.
And since that time we've now progressed
to where we have
a globally deployed
fully digital platform,
which we call OMNIA.
As you heard in the prior discussion, data is a key enabler for deploying AI. So that platform, Omnia, is a really critical element that might come up again.
In today's discussion,
I'm joined by Brian Crowley,
who leads our data science group
that works in the Omnia portfolio,
and he's going to be talking about how we are using AI to transform our actual audit services, including some of the work that we have been doing with large language models for several years.
And I'm also joined by Ryan,
who is leading the development of our
AI and algorithmic assurance services.
So he's going to share insight
about what he sees going on
with our clients in particular.
So with that
as a background, I'm going to ask
Brian first.
Obviously, we're here on a 24 hour event,
so everyone's hearing a lot about
AI in the news today.
How is this impacting the audit business? How are you thinking about it, leading a data science group, and how are we embracing the technology in this profession?
Yeah, thanks, Will. Our business, like really all others, is being impacted in a variety of different ways,
really, as our clients
digitize their operations,
the data
that's available to us
as auditors is just
growing exponentially.
And quite frankly,
it can be a challenge
to handle
all that data through human means alone.
You know, Will, you mentioned a lot of what we have done previously related to our transformation journey. Really, that journey is predicated on the fact that this data explosion was happening. And as a result, we've been standardizing our processes, digitizing and structuring our data, and developing and deploying advanced analytical capabilities, including AI, in response to it.
So I know today the Q&A and everything else can be dominated by large language models. But what's kind of the full set of AI capabilities that we focus on within audit? Can we maybe round out a couple of the things that we work on here?
Yeah, I think there's probably a little bit of a misnomer at play with our profession, with the terms auditing and public accounting. The first thing that comes to people's minds when they think about the audit business is that traditional finance and accounting sort of role that's heavily steeped in quantitative analysis.
And we're certainly addressing
the numerical aspect of the data
that we deal with
by developing
and piloting
anomaly detection capabilities
that use unsupervised
learning to point out
outliers
among the millions of transactions
to highlight
potential areas of increased risk,
whether that's due to misstatement, error, or fraud.
But really, while that numerical quantitative analysis is an important aspect of our responsibilities in our day-to-day workload as auditors, the more common form of data that we deal with is the written word, written language. You know, we have descriptions of transactions, account names, fact patterns that are written in memos, various forms of transactional evidence, accounting guidance.
That's just a sample
of what we read and analyze every day.
And we also do a lot of writing to create the documentation that's required to explain and evidence the work that we perform as part of our audits, and the considerations that we make. And of course, all of that writing is, again, subject to multiple rounds of more reading through various levels of review in our audits. So there are major opportunities to accelerate those processes through natural language processing capabilities, both traditional and some of the new capabilities.
For example, within the Omnia platform, we deploy document AI in the form of our Argus module, which can extract information from contracts, from invoices, and from other similar types of unstructured written data en masse, which can really greatly accelerate our testing procedures.
And we also have a capability referred to as Omni Suggestions that analyzes long-form internal control documentation and provides our auditors with a summary of some of the key control characteristics within those control activities.
So really a lot of uses that we have in development and that have been deployed.
Yeah, it's a really interesting point about all the written word. I know I've reviewed a few work papers in my time, and the detail that's in there can be quite a bit.
So Ryan, when you're looking
and working with clients,
are you seeing the same kind of thing?
How do you see them deploying
AI into the finance and reporting field?
Yeah, so maybe just real quickly before we go into financial reporting specifically: as most of us know, there's wide adoption of AI across industries. In tech, recommendation systems; in finance, fraud prevention and detection; in advertising, ad placement and optimization of audiences. So it is being used in a lot of key places.
One of the things that we
tend to see
is that adoption has been generally
high for low risk areas.
And what we mean by that is that the risks of AI producing an erroneous output, or something that an organization wouldn't agree with, would have low impact to the organization. So if something is wrong, it's not a big impact, not millions of dollars at stake.
So that's where we're seeing
AI getting deployed first.
But that's changing quickly.
We're seeing more and more of
AI being deployed into other parts
of the organization.
So with financial reporting, we definitely are seeing limited use cases so far that organizations are deploying, limited for sure, but we do see that changing.
So we've spoken with a number of vendors out there.
We know many vendors that are building
AI systems,
specifically targeted
at the CFO organization,
things like internal control
for financial reporting,
other types of AI looking at transaction
monitoring or anomaly detection
for journal entries,
expenses, accounts payable.
So lots of use cases that we're seeing being deployed, developed, and designed by vendors.
In one case, sorry, in a couple of cases, we're talking about generative AI for financial statements.
We'll talk about this later on, but in that type of use case, we think it's very important to have a chain of review. The way organizations are thinking about it is that there's an ability to take some of the low-level or initial draft versions of write-ups about certain components of the statement, use that as draft one, and then incorporate it into a typical review cycle of the statement.
But yeah, it's definitely evolving. We definitely expect to see much more AI being deployed into finance over the next two to three years. We think that's definitely likely.
Yeah. Thanks, Ryan. And the questions are kind of lighting up in the audience about how risk is managed. We started to put these solutions into some of the critical processes that you mentioned; maybe lower-risk solutions come first, but ultimately the value being generated here will lead organizations to use it in more critical processes. How are companies thinking about mitigating risks? What do they do to try and mitigate the risk of inaccuracy or incompleteness?
Yeah, maybe I could just talk a little bit about the risks first, the types of risk we're seeing. I think it's important to note that there are differences between where the risk lives, right? There are many stakeholders. There could be organizational risks. There could be downstream risks to the customers or clients of that organization. So obviously, big things that we're thinking about include disparate impact for certain protected groups among the population. But ultimately, for the organization, we're going to see categories like reputational risk: even the knowledge of AI being used, simply the public knowledge of it, could impact reputation. And we've obviously seen cases where that is the case, just given the difference of opinion about AI in today's climate. We think that's definitely something that could impact an organization, so that's something we have to look at. Then there's operational and financial risk: if AI is being used in a place where it's making decisions without control or human involvement, you're certainly open to risk there.
And then the big one, which is something that's evolving and which we talked about in the last session: laws and regulation. This is an area where we're seeing at least bipartisan agreement, and very few topics have bipartisan agreement these days. But certainly AI regulation is something that everyone seems to agree on, protecting the downstream users, the people that are going to be affected by the outcomes of the AI. They mentioned it in the last session, but the New York City law is one of the first actual laws in the U.S. that's going to be focusing on AI, and it's actually going live in July. So that's one we're seeing, and various other bills and proposed laws are going through various stages of legislation. But definitely, yeah, the risks are there for sure.
And maybe I can talk about the mitigation, if you like, Will, as well.
Yeah, yeah. Now you've enumerated all the risks we want to be worried about.
Yeah, right.
So listen, Deloitte has a trustworthy AI framework, and there are lots of others; we mentioned a few before. The National Institute of Standards and Technology has a framework and guidelines. We have the White House AI Bill of Rights guidelines. So lots of similar concepts. And we talk about how you mitigate risk.
From my perspective, I think two things are very important. One is having a general mission statement and value statement for AI. I think many organizations just go at AI without understanding what the goal is. Is it trying to scale up? Is it trying to do more of a certain task? Is it trying to improve the outcomes for users? So I think that's something pretty important that should affect the design, development, and use of AI across an organization, and that would definitely affect the ethical uses as well. So that's a type of mindset I think an organization should follow. Two is all the governance frameworks that we see out there.
And I think some of the important things are a policy that covers all the permissible use cases the organization would feel comfortable using AI for, and control standards across development, use, testing, and monitoring. In other industries we see things like validation; we alluded to it previously. Model validation in, for example, the financial industry involves looking at all types of quantitative models and performing validation tasks.
And validation can mean lots of different testing procedures. One of those is assessing the conceptual soundness and logical underpinnings of the model, making sure it has a logical intuition behind it. That becomes difficult with AI; I think we all see the number of features that are included with these models, so coming in and trying to understand the intuition behind the model becomes a little difficult.
So we are seeing more of the focus being spent on performance monitoring. Rather than asking whether we agree with the conceptual soundness of a model, it's more about how it is performing: if we have certain performance metrics that we want to hit, we make sure we track them and keep an eye on them, and if performance falls below a certain indicator or limit, then we have to ask whether it is still working in the way that we intended at inception.
So there are definitely ways to mitigate risks, but there's a lot that we're learning. Even regulators right now are really trying to think about how to instill laws that will mitigate these risks. And ultimately it's a hard task, but at least some of these higher-level frameworks, we think, are step one of getting to that mitigation ability.
Yeah, right.
I like that idea of having
a very intentional framework
for how it's deployed.
A lot of what we hear people concerned about is somehow, you know, the mathematical model getting loose in the world and taking things over. But it does have to be put in that position, right, in order for it to actually have any kind of influence.
So, Brian, I'm going to turn back to you. When you're thinking about generative AI, which is the hot topic, how do you see that being deployed in audit, and what's our view of where that should be put in a position to accelerate, or be used as an enabling tool, within an audit process?
Well, generative AI is certainly exciting; I think everyone can kind of agree on that. What it can ultimately do, for really anyone, but certainly for our business, is turn everyone into a software engineer. It sort of acts as a translator between human and machine, and it does so much more than just generate content. There's a lot of human-machine interaction that it can facilitate, and with the proper tooling it can significantly enhance the productivity of our people. It's sort of like having a thousand very well-rounded interns: all you really need to do is tell them, tell the model, what you need it to do and how it needs to do it, and you get incredibly meaningful output from these generative AI capabilities.
One of the ways this is impacting our business, and the way we're approaching it, is opening up opportunities for enterprise knowledge to become decentralized and democratized, because this capability makes it so much easier to access and to search. It really makes everyone in the business as knowledgeable as the most knowledgeable person in the business, because they have access to all of that.
There's also extracting information from documents, as we mentioned: the Argus module and the document AI capability that we already have today can become even faster and able to handle even more complex information extraction, like the variety of different table formats or infographics that we're starting to see a lot of in ESG reports, for example. So we're able to do a lot more complex things.
And quite frankly,
we're just scratching the surface.
I think industries, society, business at large are really just scratching the surface of what these capabilities can do. But there's already a lot that we've seen it can do, so a lot that we think it can do for our business.
So, Brian, you have a team of
people that are really focused on this
and I know you get a lot of interest
within our organization from individuals
who maybe have a passing interest
or have read the news
or have seen something
and now want to understand more about it.
What advice would you give people trying to upskill themselves when it comes to this particular topic?
Yeah,
the good news
is that a lot of this generative AI
is incredibly user friendly.
My advice would be that people go out and learn early, as much as they can, about prompting, which is this concept of interacting with the model itself, these generative AI models: how a human can provide instructions to these models. As well as prompt engineering, so taking that to the next step, which is how to do that really effectively.
There's a lot of nuance in effective prompting. The business world, the technology world at large, is constantly discovering new nuances of how to communicate with these different models and generative AI capabilities: things like giving the model a persona, explicitly telling it not to lie, giving it the opportunity to critique its own responses and then create another response taking into account those self-criticisms. There's really a lot to learn, but prompt engineering is really a great place to start.
Ryan, what's your view on how people trying to upskill themselves should think about these new technologies?
I actually was in a workshop yesterday exploring use cases, and I was impressed with some of them, like the ability to summarize and extract data; I think every day I'm learning a new use case or something that works well. I also found that some of the models, now trained on almost everything, are able to do things like text to code. We have other models out there specifically trained for text to code, but we're seeing now that some of these advanced large language models are able to just do that, because that kind of sits in the Internet's training data anyway. So some very interesting use cases. I think we all have to be creative, and we all have to be ready to see how that could be introduced into our workflows.
I'm always a little bit skeptical of the risk, and I think it's important to have, I'm always going to say this, some level of review. I like the fact that it can take out that first mile of writing effort, or some sort of design effort, where you can get that draft one and then take it to the place where you want it. I think that saves time, and that's effective. It's not about whether we can use this and then publish it after it gets produced; I think it's more about where it fits in the workflow, how we could effectively bring that in and use it, and not being worried about the fact that it's an uncertain device.
Right?
The output is always
going to be uncertain.
I think back to my point
about whether
you can validate these things
or test them or assess them.
I don't think we really have
an intuition of how
a certain prompt will map to an output.
So I think we always have to be ready for
a poor output
even as these models progress
into the future.
So having that awareness
kind of finding a place for it
to fit in in the work
in your life, in your
your workflow, I think that's the
that's kind of how I I'm thinking
about it as I go forward.
Yeah, thanks. And so, Brian, what are some of your experiences actually deploying these things? What are some of the limitations you've discovered in the process? It's always nice to see the neat parlor trick to start with, but when you start to dig down, what have you discovered, and what lessons have been learned?
Yeah, whether it's with generative AI or traditional AI, and I think this was discussed in the previous day's AI lifecycle sessions, data is really king. We've found that we've tried to do things with data that wasn't invalid, but wasn't the best data, trying to use synthesized data to solve a problem that really requires real data. That's really lesson learned number one throughout this entire process, whether with the traditional capabilities or the new capabilities: without accessible, discoverable, organized data, it becomes really difficult to develop, to test, to pilot, and quite frankly, to convince business stakeholders of the value of AI, because you can't show anyone the real thing. You don't really have a means of determining and demonstrating how effective it really is.
As I mentioned earlier, generative AI capabilities are becoming much more readily available and really user friendly, so that hard development work, the traditional computer science and data science work, is already largely done for an enterprise, and for us as well. You really just need to hook it up to your own data effectively, and there are a variety of different ways to do that. Step one is getting the data and the approvals for the data, securing it, and ensuring that you're not running roughshod over professional or contractual responsibilities with that data, as well as developing the capabilities to connect to it effectively. Getting the data is one aspect you certainly need to cover; it's another aspect entirely to hook up some of these capabilities to that dataset so you can interrogate it effectively.
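One common way to hook a language model up to enterprise data is retrieval: find the most relevant internal documents first, then pass them to the model as context. The transcript doesn't specify an approach, so this is a deliberately minimal keyword-overlap sketch with made-up document names; real deployments typically use vector embeddings, but the shape is the same:

```python
# Minimal retrieval sketch: score internal documents by keyword overlap with
# the question, then hand the best matches to a model as context. The
# documents and names below are hypothetical.

def retrieve(question: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Return the ids of the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scores = {
        doc_id: len(q_words & set(text.lower().split()))
        for doc_id, text in documents.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

docs = {
    "rev_policy": "revenue recognition policy for subscription contracts",
    "lease_memo": "lease accounting memo under the new standard",
    "travel":     "employee travel and expense guidelines",
}
top = retrieve("how do we recognize subscription revenue", docs, k=1)
# top == ["rev_policy"]
```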
The other lesson learned, which I think anybody attending this webcast who has tried to do something like this in an enterprise context can attest to, is that you really have to get everybody on board with what you're trying to do with AI. It takes a corporate village, so to speak. You don't just need data science and computer science and the IT component; you really need to have risk on board, and all your legal functions on board. We talked about the different regulations that could potentially be impacting us. You have to have the quality processes to ensure that what you're building is in fact enhancing the quality of what our practitioners are doing in the actual product that goes out the door, as well as operations, in our case traditional auditors. Everybody really needs to be working together from the beginning, in parallel, not in sequence, because in this day and age, as we're all experiencing this tempest we're a part of, things are moving incredibly quickly. You can't be mired in a long-tail process. All the ducks have to be in a row, working together.
Yeah.
Thanks, Brian.
Ryan, what about lessons
learned that you've observed?
So I think what we've seen is that the infrastructure can get quite complex. I spent a lot of my career in an industry where the models were maybe a step down in complexity in terms of what goes into the model, and in those cases there are some easier ways to implement and deploy them. Here it becomes difficult from the training perspective, the inference perspective, and change management across a pipeline; a lot of people are using many, many applications across the pipeline to get to a final product. It becomes more complex, and it's more costly. I'd say a lot of people I've spoken to are looking to place AI into a certain use case. I think the other way around is a little bit better.
Start with: what are the problems we have? A lot of problems can be addressed by traditional technology. I think there's certainly a place for structured technology, for algorithms that follow rules, where we know the output every time. I think there's a really great place for AI in many use cases, but I do think we have to think about that in how we deploy.
We've had a number of conversations and been part of working sessions where the question is: where do we put either generative AI or traditional AI, if you will? I think that's something that will get folks into trouble, because it's more about where the biggest impact is, where the problems are now, and then what type of solution we can use for that problem. In many cases, algorithmic or rule-based approaches work, and they work really well. In fact, we've seen some organizations take algorithmic decisions, that is, rules, and build AI on top of them, which, if you have a really good AI model, effectively gets you back to the algorithm or the rules you developed originally.
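The rule-based approaches Ryan describes are deterministic checks whose output is known for every input, in contrast to a probabilistic model. A minimal sketch, with entirely hypothetical thresholds:

```python
# Hypothetical rule-based transaction screen: deterministic checks whose
# output is fully predictable, unlike a probabilistic AI model. The
# thresholds are illustrative, not recommendations.

def flag_transaction(amount: float, hour: int, country: str,
                     home_country: str = "US") -> list[str]:
    """Return the list of rules a transaction trips (empty list = clean)."""
    reasons = []
    if amount > 10_000:           # large-value rule
        reasons.append("large_amount")
    if hour < 6:                  # unusual-hours rule (midnight to 6 a.m.)
        reasons.append("odd_hour")
    if country != home_country:   # cross-border rule
        reasons.append("foreign_country")
    return reasons
```

Every flagged transaction comes with the exact rules it tripped, which is the explainability property that makes layering AI on top of such rules somewhat redundant when the rules are already good.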
So yeah, I think we have to really think about what the use case is, especially for use cases where we don't know the output, or rather, where we don't know the ground truth, where we don't understand whether something is potentially fraudulent or not. We've seen a lot of approaches that aim to put a probability on whether a transaction is fraudulent or not, without truly knowing.
At the end of the day, I think a game plan is to look at a metric where you let some of that in, whether it's traffic, clicks, transactions, or sign-ups, say in terms of identity theft. Let them in the door, see where you have tagged correctly with that model, and track that over time, maybe for a small sample, so you can keep an eye on how well the tool is doing at picking up on the positive outcomes you're looking for.
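The "let them in the door and track it" idea amounts to measuring precision on the records the model tagged, once their true outcomes are known. A sketch with illustrative data:

```python
# Sketch of the monitoring idea above: let flagged sign-ups through for a
# small sample, then, once outcomes are known, measure how often the model's
# fraud tags were correct (precision), and track that number period by period.

def tag_precision(records: list[dict]) -> float:
    """records: [{'tagged_fraud': bool, 'actual_fraud': bool}, ...]"""
    tagged = [r for r in records if r["tagged_fraud"]]
    if not tagged:
        return 0.0
    correct = sum(r["actual_fraud"] for r in tagged)
    return correct / len(tagged)

# Illustrative month of let-through traffic:
month = [
    {"tagged_fraud": True,  "actual_fraud": True},
    {"tagged_fraud": True,  "actual_fraud": False},
    {"tagged_fraud": True,  "actual_fraud": True},
    {"tagged_fraud": False, "actual_fraud": False},
]
print(tag_precision(month))  # 2 of the 3 tags were correct
```

Computing this per period gives exactly the "are we getting better or worse" trend line Ryan describes.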
We have a lot of conversations about this, and I think these are some of the things that keep bubbling up. There's a lot of excitement around it right now, myself included. But I'm also a traditionalist. I think there are a lot of great places for traditional tech to come in and be neighbors with AI, but AI may not need to be the center of that show.
Yeah. Ryan, there are a lot of questions in the chat that I'm going to term as being around safety: how do you safely use it? There's even a question around the performance metrics you mentioned and a trustworthy framework. What are some examples of performance metrics, or other evaluative techniques, that you could use to evaluate models?

Yeah.
So it goes back to the mission statement of that model or that system, what it's being trained for. There are a lot of cases where it's a forecasting device, where we're anticipating an outcome to happen, and it's great to go back and back-test it to see if it did. Take fraud and identity theft, for example. People are signing up to a website, and you want to make sure their identity is accurate: what's the chance they're going to commit fraud after they sign up and submit an application on the website?
The question is, if we turn them away immediately based on the output of the AI model, we won't ever really know whether that person would have turned out to be a legitimate user or would have committed fraud. So the point I was trying to make before is to have a period of time where you make an investment in allowing the door to be open. You let everyone in, you use the AI model to tag certain individuals or profiles you think are likely fraud, and you see what happens and track that over time: are we getting better at that, or are we getting worse? There are many, many metrics to look at, but these are some of the ones that try to get at the predictability of the model.
And then there's certainly a relevant one for the New York City bias law, in terms of the automated decisioning tools folks are using for screening resumes, any type of hiring decisions, or even promotion decisions: disparate impact, making sure we're not disproportionately harming certain protected groups. That's certainly a metric organizations are being required to follow in New York, so those metrics are definitely important too.
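The disparate-impact check Ryan mentions is commonly computed as an impact ratio: the selection rate for each group divided by the rate for the most-selected group, with ratios below 0.8 (the classic "four-fifths rule") conventionally treated as a red flag. A sketch with hypothetical numbers:

```python
# Impact-ratio sketch for an automated screening tool: each group's selection
# rate divided by the highest group's rate. Ratios below 0.8 (the
# "four-fifths rule") are a conventional red flag for disparate impact.

def impact_ratios(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative resume-screening counts (hypothetical):
applied  = {"group_a": 200, "group_b": 150}
selected = {"group_a": 100, "group_b": 45}
ratios = impact_ratios(selected, applied)
# group_a rate = 0.50, group_b rate = 0.30, so group_b's ratio is 0.60,
# below the 0.8 threshold.
```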
Yeah, you hit on a keyword there, which is automation. A lot of what I seem to get from people concerned about AI safety is less about really peeling back the AI and more about the automation. You could put an automated rule in place that makes a critical decision in a process, has nothing to do with AI, and is just flat out wrong, and that's equally as unsafe. So I think some of the questions that should be asked around deploying AI models are really around automation. There's a question here about whether there should be a doomsday switch, and I think: only if you put the automation in charge of the doomsday device. There are choices to be made in the use case, and choices to be made in the way systems are deployed, and that really governs the kind of safety mechanisms in place. An AI model that you deployed, Brian, to write an accounting memo is not going to take over the world on its own. But if you did put it in charge of the nuclear fleet, then, jeez, that was a decision you made. So I think those are important considerations when we talk about technology in general.
There is one question in here I'm going to try to answer, and I'll get your thoughts on this too, guys, about jobs. I don't think anyone here is going to predict what's going to happen with employment. I would say that most of us are looking at this and struggling to see even a couple of years out in terms of how this technology impacts society and the workforce. The one thing I always reflect on is that a technology step change rarely results in economic regression, right? It usually results in an economic expansion of some sort. And so while the nature of jobs changes, you probably end up with more jobs than you had before, even though they might be in different categories. So I'd like your final wrap-up opinions on where you think all this is headed, and any kind of final thought. Brian, we'll start with you.
Sure, yeah. We talk a lot about how analogous some of these capabilities are to the advent of computers and calculators when they first entered our industry, and how they never really fully replaced jobs. They really just expanded the role of the human and expanded the capability of the human. That's sort of what we're anticipating; we just can't quite see the full extent of where that change will take place, especially with generative AI. But you can also envision a future, certainly in our profession, where the amount of depth we can provide with our audit opinions becomes deeper. It won't necessarily keep the same static level of reasonable assurance, per se; I could foresee that changing over time, just as it has before. The reasonable assurance of that opinion 15 years ago was probably different from what it is today. We don't know the answer to that, but you could argue that it could be different today based on all the different analytics and data that are available to us now. So I see a continuation of that, where we're going to be able to provide our clients, our market, and the capital markets with incredible depth and incredible quality in our audits that we weren't able to deliver previously with the same amount of workforce. That's where I see it going.
And I guess I would focus my response not on the jobs aspect, on whether there are jobs available or which roles are going to change, but on what skills are needed for a successful role in the future. Even in my own career, the skills I look for when I hire have changed over the past 16 or 17 years. So without a doubt, I think being able to use AI effectively, and having at least some understanding of how it works, knowing the pitfalls, knowing the use cases and the ways it can be effective, becomes really powerful. If you have that ability, I don't think the job market is going to change in great leaps, but I do think every technology changes what is needed out of the workforce, and it's changing daily. So we should just be ready for that, and understand what the tools of the job are today, what the tools are going to be tomorrow, and how we take those bets and try to put ourselves in good positions for the next few years. It seems that game has never changed, right? That's always been the game of the job market. So I definitely think that's something we all have to think about and be ready for.