24 hr AI conference: The AI in Audit Revolution (US)

Deloitte US
18 May 2023 · 35:00

Summary

TL;DR: In this discussion, Will Bible, deputy leader of the Audit, Transformation, Fraud, and Assurance Business, explores the impact of AI on the audit industry since 2015. Joined by Brian Crowley and Ryan, he delves into the transformative power of AI, particularly large language models, in enhancing audit services, standardizing workflows, and managing the exponential growth of data. The panel addresses the challenges of deploying AI, including risk mitigation strategies, the importance of data quality, and the skills a technology-driven audit profession will require.

Takeaways

  • 😀 Digital transformation has been a key focus for the Audit, Transformation, Fraud, and Assurance Business since 2015, aiming to improve quality through standardized workflows and digitization.
  • 🌐 The global deployment of the fully digital platform OMNIA is a significant milestone in the company's digital journey, enabling advanced data utilization and AI integration.
  • 🔍 Data is identified as a crucial enabler for AI, and the standardization and structuring of data have been foundational to the deployment of AI capabilities in auditing.
  • 📈 The rapid growth of data available to auditors presents challenges that AI is positioned to address, particularly in handling large volumes of transactions and identifying outliers.
  • 📝 The script highlights the importance of written language in auditing, with AI capabilities like natural language processing being used to analyze and summarize vast amounts of textual data.
  • 🤖 Generative AI is transforming the audit process by acting as a translator between human and machine, enhancing productivity, and democratizing enterprise knowledge.
  • 🛠️ AI deployment in auditing involves a multidisciplinary approach, requiring collaboration across data science, IT, risk, legal, and quality to ensure effective and ethical use.
  • 🔑 The importance of having a clear mission and value statement for AI is emphasized, to guide its design, development, and use, and to mitigate risks such as reputational damage and regulatory non-compliance.
  • 🚀 The potential of generative AI in auditing is vast but still in its early stages, with opportunities for complex information extraction and enhancing human-machine interaction.
  • 🛑 The need for a robust framework to evaluate AI performance, including metrics for accuracy, impact, and bias, is discussed, as is the importance of continuous monitoring and validation.
  • 🔑 The script suggests that AI will not replace jobs but will change them, requiring a workforce that understands and can effectively utilize AI, emphasizing the need for ongoing upskilling and adaptability.

Q & A

  • What digital transformation journey has Will Bible's organization been on since 2015?

    -Will Bible's organization has been on a digital transformation journey that involves rapidly innovating and deploying digital technologies to address specific pain points in the audit process. The focus has been on standardizing workflows and digitizing with the aim of improving overall quality.

  • What is the name of the globally deployed, fully digital platform mentioned in the script?

    -The globally deployed, fully digital platform mentioned is called OMNIA.

  • What role does data play in the deployment of AI within Will Bible's organization?

    -Data is a key enabler for deploying AI within the organization. The digital platform OMNIA is critical for managing data and supporting AI capabilities.

  • What is Brian Crowley's perspective on the impact of AI on the audit business?

    -Brian Crowley believes that the audit business, like others, is being significantly impacted by AI as clients digitize their operations, leading to an exponential growth in available data. This data growth presents a challenge that cannot be managed by human means alone, necessitating the use of AI.

  • What are some of the AI capabilities that Brian Crowley's group focuses on within audit?

    -Brian Crowley's group focuses on AI capabilities such as anomaly detection using unsupervised learning to identify outliers in transactions, and natural language processing to handle written language in transaction descriptions, account names, and other forms of transactional evidence.

  • How does Ryan view the deployment of AI in the finance and reporting field?

    -Ryan observes that AI is being deployed in low-risk areas initially, but its use is quickly expanding into other parts of the organization. He mentions specific AI applications in financial reporting, such as internal control for financial reporting, transaction monitoring, and anomaly detection.

  • What are some of the risks associated with deploying AI in critical business processes?

    -Some of the risks include reputational risk, operational and financial risks, and regulatory compliance risks. There is also the challenge of disparate impact on protected groups and the evolving nature of AI regulation.

  • What is the trustworthy AI framework that Will Bible refers to?

    -The trustworthy AI framework is a set of guidelines and principles that aim to ensure that AI is developed and used in a way that is ethical, transparent, and minimizes risk. It includes having a clear mission statement, value statement, and governance frameworks for AI deployment.

  • How does Brian Crowley see generative AI impacting the audit process?

    -Brian Crowley sees generative AI as a tool that can significantly enhance productivity by acting as a translator between human and machine, making enterprise knowledge more accessible, and facilitating complex information extraction from documents.

  • What advice does Brian Crowley give for individuals looking to upskill in the area of generative AI?

    -Brian Crowley advises individuals to learn about prompting and prompt engineering, which involves effectively interacting with AI models to provide instructions and achieve desired outcomes.

  • What are some lessons learned from deploying AI models in an enterprise context, according to Brian and Ryan?

    -Some lessons learned include the importance of having high-quality data, the necessity of getting all stakeholders on board, the complexity of infrastructure, and the need for a clear mission statement. Additionally, it's crucial to consider where AI fits within the workflow and to be prepared for the uncertainty of AI outputs.

Outlines

00:00

🌟 Digital Transformation and AI Integration in Audit Services

Will Bible, the deputy leader of Audit, Transformation, Fraud, and Assurance Business, introduces the company's digital transformation journey since 2015, emphasizing the development and deployment of digital technologies to enhance audit quality. The introduction of the OMNIA platform, a fully digital solution, is highlighted as a key development. Brian Crowley and Ryan, leaders in data science and AI assurance services respectively, discuss the use of AI, including large language models, to transform audit services and the challenges of handling exponential data growth. The conversation underscores the importance of data as an enabler for AI and the company's strategic embrace of technological advancements in audit processes.

05:03

📈 Harnessing AI for Auditing: Challenges and Opportunities

Brian Crowley discusses the impact of digitization on the audit business, noting the exponential growth of data and the challenges of managing it through human means alone. He outlines the company's efforts to standardize processes, digitize and structure data, and deploy advanced analytical capabilities, including AI, in response to the data explosion. The conversation delves into the full set of AI capabilities focused within audit, such as anomaly detection using unsupervised learning and natural language processing to accelerate processes involving written language. Ryan shares insights into client deployments of AI in finance and reporting, highlighting the cautious yet evolving approach to integrating AI into critical processes and the importance of risk mitigation strategies.

10:04

🛡️ Navigating the Risks and Regulations of AI Deployment

The discussion shifts to the risks associated with AI deployment, including reputational, operational, financial, and regulatory risks. The panelists emphasize the importance of having a clear mission and value statement for AI deployment and adhering to governance frameworks to mitigate these risks. They also touch on the evolving landscape of AI regulation, with specific mention of the New York City-based law focusing on AI. The conversation highlights the necessity for a proactive approach to understanding and managing the risks inherent in AI solutions, as well as the importance of continuous monitoring and validation to ensure the reliability and ethical use of AI.

15:05

🚀 Generative AI's Potential in Auditing and the Future of Work

Brian explores the concept of generative AI, its potential to transform the audit process, and the importance of effective prompting and prompt engineering in leveraging these capabilities. He likens generative AI to having a team of well-rounded interns that can be directed to perform tasks, generating meaningful output. The conversation also addresses the democratization of enterprise knowledge through AI and the potential for AI to enhance productivity. Ryan adds his perspective on upskilling, emphasizing the need for creativity in finding new use cases for AI and the importance of understanding the limitations and uncertainties of AI outputs in the workplace.

20:06

💡 Lessons Learned in Deploying AI and the Importance of Data

The panelists share their experiences and lessons learned in deploying AI, focusing on the critical role of data quality and accessibility. They discuss the challenges of synthesizing data and the importance of securing and organizing data to effectively develop, test, and pilot AI solutions. The conversation also highlights the need for a collaborative approach involving various stakeholders, including risk, legal, and quality assurance functions, to ensure the successful deployment of AI within an organization.

25:07

🔍 Evaluating AI Deployment: Metrics and Safety Considerations

Ryan discusses the complexity of AI infrastructure and the importance of aligning AI deployment with specific use cases and problems. He emphasizes the value of traditional technology alongside AI and the need for a clear understanding of the problems to be solved. The conversation includes a discussion on performance metrics for evaluating AI models, such as predictive accuracy and disparate impact analysis, as well as the broader considerations around the safety and automation of AI systems.

30:08

🌐 The Future Impact of AI on Jobs and the Workforce

The final segment addresses the broader implications of AI on employment and the workforce. While acknowledging the uncertainty of predicting the future job market, the panelists reflect on the historical impact of technological advancements on economic expansion and job transformation. They suggest that AI is likely to deepen the role of humans in the workplace rather than replace jobs, and emphasize the importance of adapting to new skills and understanding the tools of today and tomorrow.

Keywords

💡Digital Transformation

Digital transformation refers to the integration of digital technology into all areas of a business, fundamentally changing how an organization operates and delivers value to its customers. In the video, the speaker discusses their journey of digital transformation since 2015, emphasizing the rapid innovation and deployment of digital technologies to improve audit processes and standardize workflows.

💡Audit

Auditing is the inspection and verification of an individual or organization's accounts, practices, and statutory compliance. The video's context is within an audit business, where the use of AI is being discussed to enhance the quality and efficiency of audit services, including the deployment of a digital platform called OMNIA.

💡Artificial Intelligence (AI)

Artificial Intelligence, or AI, is the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The video discusses the impact of AI on the audit business, with a focus on how AI is transforming audit services, including the use of large language models and algorithmic assurance services.

💡Data Science

Data science is a field that uses scientific methods, processes, and algorithms to extract knowledge and insights from structured and unstructured data. In the script, Brian Crowley, who leads the data science group, talks about the use of AI in transforming audit services and the importance of data as a key enabler for deploying AI.

💡Natural Language Processing (NLP)

Natural Language Processing is a subfield of AI that focuses on the interaction between computers and humans using natural language. The video mentions NLP in the context of accelerating processes in the audit field, such as extracting information from written documents and generating documentation.
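
As a toy illustration of the extraction task described above, a rule-based sketch can pull a few fields out of unstructured invoice text. The field names, patterns, and sample invoice are illustrative assumptions, not how the Argus module mentioned in the video actually works; production systems use trained models rather than hand-written rules:

```python
import re

# Toy rule-based extraction from unstructured invoice text. The field
# names, patterns, and sample invoice below are illustrative only.
invoice_text = """
Invoice No: INV-2023-0042
Date: 2023-05-18
Vendor: Acme Supplies LLC
Total Due: $12,450.00
"""

patterns = {
    "invoice_no": r"Invoice No:\s*(\S+)",
    "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
    "total": r"Total Due:\s*\$([\d,]+\.\d{2})",
}

# Apply each pattern; None means the field was not found.
extracted = {
    field: (m.group(1) if (m := re.search(pat, invoice_text)) else None)
    for field, pat in patterns.items()
}
print(extracted)
```
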

💡Anomaly Detection

Anomaly detection is the identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data. The script refers to the development of anomaly detection capabilities using unsupervised learning to highlight potential areas of increased risk in the audit process.
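
The idea can be sketched with an off-the-shelf unsupervised model. The simulated transaction amounts and the choice of scikit-learn's IsolationForest are illustrative assumptions, not details from the video:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated journal-entry amounts: mostly routine postings plus a few
# extreme ones. Values are made up for illustration.
routine = rng.normal(loc=500.0, scale=50.0, size=(1000, 1))
unusual = np.array([[25000.0], [-9000.0], [18500.0]])
amounts = np.vstack([routine, unusual])

# Unsupervised: no labels are given. The model learns what "typical"
# looks like and flags the points that are easiest to isolate.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(amounts)  # -1 = flagged outlier, 1 = inlier

flagged = amounts[labels == -1].ravel()
print(sorted(flagged))
```

In practice the flagged transactions are not conclusions; they are the candidates an auditor examines first.
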

💡Risk Management

Risk management is the process of identifying, assessing, and controlling threats to an organization's capital and earnings. The video discusses how companies are thinking about mitigating risks associated with AI, including the deployment of AI in low-risk areas and the evolution of regulations to protect downstream users.

💡Generative AI

Generative AI refers to artificial intelligence systems that are capable of creating new content, such as text, music, or images. The video talks about the deployment of generative AI in the audit process, highlighting its potential to enhance productivity and facilitate human-machine interaction.

💡Prompt Engineering

Prompt engineering is the practice of effectively instructing AI models to generate desired outputs. The script mentions prompt engineering as a key skill for upskilling in the context of generative AI, where understanding how to interact with AI models is crucial.
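
A minimal sketch of what prompt engineering manipulates, namely the instruction text itself. The role wording, constraints, and structure below are illustrative assumptions, and the actual model call is omitted since it differs by provider:

```python
# Sketch of prompt construction. The role text, constraints, and
# structure are illustrative; the model call itself is omitted since
# it differs by provider.
def build_audit_prompt(task: str, document: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: role, task, constraints, then input."""
    lines = [
        "You are an assistant supporting an audit team.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Document:",
        document,
    ]
    return "\n".join(lines)

prompt = build_audit_prompt(
    task="Summarize the key control characteristics in this documentation.",
    document="Management reviews the reconciliation monthly...",
    constraints=[
        "Quote the source text for each claim.",
        "Say 'not stated' rather than guessing.",
    ],
)
print(prompt)
```

The point is that the same model produces very different output depending on how explicitly the task and constraints are stated.
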

💡AI Lifecycle

AI Lifecycle refers to the stages an AI system goes through from its inception to deployment and maintenance. The video discusses the importance of data in the AI lifecycle, emphasizing that without high-quality, accessible, and organized data, it is difficult to develop, test, and pilot AI effectively.

💡Disparate Impact

Disparate impact refers to the unintentional discriminatory effects of a practice or policy on a protected group. The script mentions disparate impact in the context of AI regulation, where laws are being considered to ensure that AI systems do not disproportionately harm certain groups in society.
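
One common quantitative check, drawn from US employment-selection guidance rather than from the video, is the "four-fifths rule": the favorable-outcome rate of each group divided by the highest group's rate, with ratios below 0.8 treated as a warning sign. The group names and counts below are hypothetical:

```python
def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate of each group divided by the highest group's rate.

    `outcomes` maps group name -> (favorable_count, total_count). A ratio
    below 0.8 is the conventional warning threshold (the four-fifths rule).
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening model: group B receives favorable outcomes
# far less often than group A.
ratios = disparate_impact_ratio({"group_a": (80, 100), "group_b": (45, 100)})
print(ratios)
```

A ratio this far below 0.8 would prompt deeper review of the model and its training data, not an automatic conclusion of bias.
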

Highlights

Deputy leader of Audit, Transformation, Fraud, and Assurance Business, Will Bible, discusses the digital transformation journey since 2015.

Introduction of the globally deployed, fully digital platform called OMNIA.

Data is identified as a key enabler for deploying AI within the audit process.

Brian Crowley from the data science group talks about using AI to transform audit services, including work with large language models.

Ryan discusses AI and algorithmic assurance services and client insights.

The impact of AI on the audit business and the exponential growth of data available to auditors.

Standardizing processes and deploying advanced analytics, including AI, in response to data explosion.

Anomaly detection capabilities using unsupervised learning to identify outliers in transactions.

Natural language processing to accelerate processes involving written language in audits.

Use of AI to extract information from unstructured written data like contracts and invoices.

Use of Omni Suggestions to analyze internal control documentation and provide auditors with summaries.

Adoption of AI in low-risk areas and its gradual deployment into more critical organizational processes.

Risk management strategies when deploying AI solutions in critical processes.

The importance of having a trustworthy AI framework and general mission statement for AI deployment.

Generative AI's potential to enhance productivity and democratize enterprise knowledge.

The necessity of upskilling in prompt engineering and understanding generative AI capabilities.

Infrastructure complexity and the need for a collaborative approach when deploying AI.

Evaluating AI models through performance metrics and considering the impact on protected groups.

Reflections on the future impact of AI on jobs, emphasizing the expansion of human capabilities rather than replacement.

Transcripts

play00:14

So my name is Will Bible.

play00:15

I'm the deputy leader of Audit

play00:17

Transformation, Fraud

play00:18

and Assurance Business.

play00:19

And to give you

play00:20

a little bit of background,

play00:21

we have been on a digital transformation

play00:23

journey since around 2015.

play00:26

We went through a process of rapidly

play00:28

innovating and deploying

play00:29

digital technologies to solve

play00:32

specific pain points in the audit process

play00:34

and try to replace manual steps

play00:36

really with a focus

play00:38

on standardizing

play00:39

our workflows and digitizing.

play00:43

All of those things were in the pursuit

play00:45

of improving overall quality.

play00:48

And since that time we've now progressed

play00:50

to where we have

play00:51

a globally deployed

play00:52

fully digital platform,

play00:53

which we call OMNIA.

play00:55

As you heard in the prior

play00:57

discussion, data is a key enabler

play01:00

for deploying A.I.,

play01:01

So that platform,

play01:02

Omnia is a really critical element

play01:04

that might come up again.

play01:05

In today's discussion,

play01:07

I'm joined by Brian Crowley,

play01:09

who leads our data science group

play01:11

that works in the Omnia portfolio,

play01:13

and he's going to be talking about

play01:14

how we are using A.I.

play01:16

to transform our actual audit services,

play01:18

including some of the work

play01:19

that we are doing

play01:20

with large language models

play01:21

and have been doing for several years.

play01:23

And I'm also joined by Ryan,

play01:24

who is leading the development of our

play01:26

AI and algorithmic assurance services.

play01:28

So he's going to share insight

play01:29

about what he sees going on

play01:31

with our clients in particular.

play01:34

So with that

play01:35

as a background, I'm going to ask

play01:36

Brian first.

play01:38

Obviously, we're here on a 24 hour event,

play01:41

so everyone's hearing a lot about

play01:43

AI in the news today.

play01:45

How is this impacting the audit business?

play01:47

How are you thinking about it?

play01:48

Leading a data science group

play01:50

and how are we embracing the technology

play01:52

in this in this profession?

play01:54

Yeah, thanks. Well,

play01:56

our business, like really all others, is

play02:00

being impacted

play02:01

in a variety of different ways,

play02:04

really, as our clients

play02:05

digitize their operations,

play02:07

the data

play02:07

that's available to us

play02:08

as auditors is just

play02:09

growing exponentially.

play02:11

And quite frankly,

play02:11

it can be a challenge

play02:12

to handle

play02:13

all that data through human means alone.

play02:16

You know, Will,

play02:17

you mentioned

play02:17

a lot of what

play02:18

we have done previously in the past

play02:20

related to our transformation journey.

play02:23

Really, that journey is predicated

play02:24

on this fact

play02:25

that this

play02:26

this data explosion was happening.

play02:28

And as a result of this,

play02:30

we've been standardizing

play02:31

our processes,

play02:32

digitizing and structuring our data

play02:35

and developing

play02:35

and deploying advanced

play02:36

analytical capabilities, including A.I.,

play02:39

in response to it.

play02:43

So I know we've we've

play02:45

today the Q&A and everything else

play02:47

is can be dominated

play02:48

by large language models.

play02:50

But what are what's

play02:51

kind of the full set of AI capabilities

play02:53

that we focus on within audit?

play02:55

If we can maybe round out

play02:57

a couple of the things that we work on

play03:00

here?

play03:00

Yeah, I think there's

play03:02

probably a little bit of a misnomer

play03:05

kind of at play with our

play03:07

with our profession,

play03:08

with the term

play03:09

auditing and public accounting,

play03:11

sort of the first sort of thing

play03:12

that comes to people's minds

play03:14

when they think about the audit business

play03:15

is that traditional finance

play03:17

and accounting sort of role

play03:19

that's heavily

play03:20

steeped in quantitative analysis.

play03:23

And we're certainly addressing

play03:24

the numerical aspect of the data

play03:26

that we deal with

play03:27

by developing

play03:28

and piloting

play03:29

anomaly detection capabilities

play03:31

that use unsupervised

play03:33

learning to point out

play03:34

outliers

play03:35

among the millions of transactions

play03:38

to highlight

play03:38

potential areas of increased risk,

play03:42

whether that's due to misstatement,

play03:43

do error or fraud.

play03:45

But really,

play03:46

while that

play03:47

numerical quantitative analysis

play03:49

is an important aspect

play03:50

of our responsibilities

play03:52

in our kind of day to day

play03:53

workload as auditors,

play03:55

really the more common form of data

play03:58

that we deal with is is written word,

play04:00

written language.

play04:02

You know,

play04:02

we have descriptions of transactions,

play04:06

account names, back patterns

play04:09

that are written in memos,

play04:11

various forms of transactional evidence,

play04:14

you know, accounting guidance.

play04:15

That's just a sample

play04:16

of what we read and analyze every day.

play04:20

And we also do a lot of writing

play04:22

to create the documentation

play04:24

that's required to explain and evidence

play04:26

the work that

play04:27

that we perform

play04:28

as a part of our our audits

play04:29

and the considerations that we make.

play04:32

And of course, all of that

play04:33

writing is, again, subject

play04:35

to multiple rounds of more reading

play04:38

through various levels of review in our

play04:40

in our audits.

play04:41

So major opportunities

play04:43

to accelerate those processes

play04:45

through natural language processing

play04:47

capabilities, both traditional and

play04:49

some of the new capabilities.

play04:51

For example,

play04:52

within the Army,

play04:53

a platform we document

play04:55

AI in the form of our Argus module

play04:57

that can extract information

play04:58

from contracts, from invoices and other

play05:02

similar types of unstructured

play05:04

written data in mass,

play05:06

which can really greatly accelerate

play05:08

our testing procedures.

play05:10

And we also have a capability

play05:13

referred to as omni suggestions

play05:15

that analyzes

play05:16

long form internal control documentation

play05:19

and provides our auditors

play05:21

with a summary of some of those key

play05:22

control characteristics.

play05:25

Within those control activities.

play05:27

So really a lot of uses that that we have

play05:30

in development

play05:31

and that have been deployed.

play05:33

Yeah, it's really it's

play05:34

really interesting point about all the

play05:36

the written word.

play05:37

I know

play05:37

I've reviewed

play05:37

a few work papers in my time

play05:40

and the detail that's in there can be

play05:43

quite a bit.

play05:43

So Ryan, when you're looking

play05:45

and working with clients,

play05:46

are you seeing the same kind of thing?

play05:47

How do you see them deploying

play05:49

AI into the finance and reporting Field.

play05:53

Yeah,

play05:54

So I guess maybe just real quickly

play05:55

before we go into

play05:56

financial reporting specifically,

play05:58

I think as most of us know, wide

play06:00

adoption of AI

play06:00

across industries,

play06:01

so tech

play06:02

recommendation systems, finance,

play06:04

fraud prevention and detection,

play06:06

advertising, ad placement,

play06:08

optimization of audiences.

play06:09

So it is being used

play06:11

in a lot of key places.

play06:13

One of the things that we

play06:14

tend to see

play06:15

is that adoption has been generally

play06:18

high for low risk areas.

play06:20

And so what we mean by

play06:21

that is the risks of AI producing

play06:24

an erroneous output

play06:25

or something that that, you know,

play06:27

an organization wouldn't agree with,

play06:28

the risks of that happening

play06:30

would have low impact,

play06:31

so low impact to the organization.

play06:33

So if something is wrong,

play06:34

not a big impact,

play06:35

not not millions of dollars at stake.

play06:37

So that's where we're seeing

play06:39

AI getting deployed first.

play06:41

But that's changing quickly.

play06:42

We're seeing more and more of

play06:44

AI being deployed into other parts

play06:45

of the organization.

play06:46

So with financial reporting,

play06:49

we definitely are seeing limited

play06:51

use cases

play06:52

so far

play06:52

that organizations

play06:53

are deploying limited for sure,

play06:55

but we do see that changing.

play06:57

So we have we've spoken

play06:58

with a number

play06:59

of number of vendors out there.

play07:00

We know many vendors that are building

play07:02

AI systems,

play07:03

specifically targeted

play07:04

at the CFO organization,

play07:06

things like internal control

play07:07

for financial reporting,

play07:10

other types of AI looking at transaction

play07:13

monitoring or anomaly detection

play07:14

for journal entries,

play07:16

expenses, accounts payable.

play07:18

So lots of

play07:19

lots of use cases

play07:20

that we're seeing being deployed,

play07:22

being developed and being designed

play07:24

by vendors.

play07:27

In one case,

play07:28

we're talking

play07:28

with a couple of sorry,

play07:29

in a couple of cases

play07:30

we're talking about

play07:30

generative, generative

play07:32

AI for financial statements.

play07:33

We do we'll talk about this later on.

play07:35

But you know, in that type of a use case,

play07:37

we think it's very important to have a

play07:39

a chain of review.

play07:40

But I think some

play07:41

the way organizations

play07:42

are thinking about it

play07:43

is that there's an ability

play07:44

to take some of the low level

play07:46

or maybe some initial draft

play07:48

versions of write ups

play07:49

about certain

play07:50

components of the of the statement

play07:52

and being able to use that as draft one

play07:54

and then be able to

play07:55

incorporate that

play07:55

into a typical review

play07:57

cycle of a statement.

play07:58

But yeah, definitely,

play08:00

definitely evolving.

play08:00

We definitely expect to see

play08:02

much, much more A.I.

play08:03

being deployed into finance over

play08:06

over the next 2 to 3 years.

play08:08

We think that's definitely

play08:09

a definitely likely.

play08:12

Yeah. Thanks, Ryan.

play08:12

And the questions

play08:15

are kind of lighting up in the audience

play08:16

about how risk is managed.

play08:18

When we started

play08:19

to put these solutions

play08:20

into some of the critical processes

play08:22

that you mentioned, that

play08:23

maybe lower risk

play08:24

solutions first, but ultimately

play08:27

the value being generated

play08:29

here will lead US

play08:30

lead organizations to use it

play08:32

more critical processes.

play08:33

How how are companies

play08:34

thinking about mitigating risks?

play08:36

What do they do

play08:38

to to kind of

play08:39

to try and mitigate

play08:40

the risk of inaccuracy or incomplete?

play08:42

Yeah, maybe I could

play08:43

maybe I could just talk a little bit

play08:44

about the risk first that, you know,

play08:45

the types of risk we're seeing.

play08:46

So I think it's important to note

play08:49

that there are differences between

play08:50

where the risk lives, right?

play08:51

There are there are many stakeholders.

play08:53

There could be organizational risks.

play08:55

There could be downstream

play08:56

risks to the customers

play08:57

or clients of that organization.

play08:59

So things obviously

play09:00

big things

play09:01

that we're thinking about

play09:01

as disparate impact

play09:03

for certain group protected groups

play09:05

among the population,

play09:06

but ultimately for the organization

play09:08

we're going to see in categories

play09:10

like reputational risk, you know,

play09:12

even the concept of the knowledge of

play09:15

AI being used,

play09:16

just simply the public knowledge of it,

play09:18

could that impact reputation.

play09:19

And we've obviously seen cases

play09:21

where that is the case, just given the

play09:23

the difference of opinion of

play09:24

AI currently in today, in today's

play09:27

climate,

play09:28

we think that's definitely something

play09:29

that could impact their organization.

play09:31

That's something we have to look at

play09:33

operational, financial, right.

play09:34

So if if AI's being used

play09:36

in a place where it's making decisions

play09:38

without control or human involvement,

play09:41

certainly open to risk there.

play09:42

And then the big one,

play09:43

which is something that's evolving,

play09:44

we talked about it in the last

play09:46

session, laws and regulation.

play09:47

So definitely something that we're seeing

play09:50

at least bipartisan agreement.

play09:53

And the very few topics

play09:54

are bipartisan agreement these days.

play09:55

But certainly AI regulation

play09:57

is something that

play09:58

that everyone seems to agree on

play10:00

and protecting the downstream

play10:01

users, you know, people of

play10:04

you know,

play10:04

people that are

play10:05

going to be affected

play10:05

by the outcomes of the

play10:07

they mentioned it

play10:08

last in the last session,

play10:09

but the New York City based law,

play10:10

one of the first actual laws in the U.S.

play10:14

that's going to be focusing on AI

play10:16

and actually going live in July.

play10:17

So something that we're seeing,

play10:19

that one we're seeing at various

play10:21

other bills and proposed law

play10:23

is going through

play10:23

various stages of legislation.

play10:24

But definitely,

play10:26

yeah, the risk are there for sure.

play10:28

And maybe I can talk about the mitigation

play10:31

if you like that will as well.

play10:33

Yeah, Yeah.

play10:34

Now that you enumerate all the risks

play10:35

we want to be worried about.

play10:37

Yeah, right.

play10:38

Yeah, right. So listen, Deloitte has a trustworthy AI framework, and there are lots of others. We mentioned a few before: the National Institute of Standards and Technology has a framework and guidelines, and we have the White House AI Bill of Rights guidelines. So lots of similar concepts.

And when we talk about how you mitigate risk, from my perspective two things are very important. One is having a general mission statement and value statement for AI. Many organizations just go at AI without understanding what the goal is. Is it trying to scale up? Is it trying to do more of a certain task? Is it trying to improve outcomes for users? That's pretty important, and it should shape the design, the development and the use of AI across an organization, and it definitely affects the ethical uses as well. So a firm, an organization, should follow that type of mindset.

Two is all the governance frameworks that we see out there. Some of the important pieces are a policy that sets out all the permissible use cases the organization feels comfortable using AI for, and control standards across development, use, testing and monitoring.

In other industries we see things like validation. We alluded previously to validation in, for example, the financial industry, which looks at all types of quantitative models and performs validation tasks. Validation can mean lots of different testing procedures, and one of those is conceptual soundness: examining the logical underpinnings of the model and making sure there is a logical intuition behind it. That becomes difficult with the number of features included in these models; coming in and trying to understand the intuition behind the model becomes a little difficult.

So we are seeing more of the focus being spent on performance monitoring. Rather than asking whether we agree with the conceptual soundness of a model, it's more about how it is performing: if there are certain performance metrics we want to hit, we make sure we track them and keep an eye on them, and if performance falls below a certain indicator or limit, then we have to look at whether the model is still working in the way we intended at inception.
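The monitoring loop described here can be sketched in a few lines. This is a minimal illustration, assuming a single metric tracked per period against an agreed floor; the metric (precision), the periods and the 0.85 floor are illustrative assumptions, not values from the discussion.

```python
# Minimal sketch of the performance-monitoring idea: track one metric per
# period and surface any period where it fell below an agreed floor.
# The metric, periods and threshold below are illustrative assumptions.

def check_performance(history, floor):
    """Return the (period, value) pairs that fell below the agreed floor."""
    return [(period, value) for period, value in history if value < floor]

# Monthly precision of a hypothetical fraud-tagging model.
history = [("2023-01", 0.91), ("2023-02", 0.88), ("2023-03", 0.79)]

for period, value in check_performance(history, floor=0.85):
    print(f"{period}: {value:.2f} is below the floor; review whether the "
          "model still works the way it was intended at inception")
```

A breach of the floor does not decide anything by itself; it is the trigger for the human review the speakers describe.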

So there are definitely ways to mitigate risks, but there's a lot that we're still learning. Even regulators right now are really trying to think about how to write laws that will mitigate these risks. Ultimately it's a hard task, but we think some of these higher-level frameworks are step one of getting to that mitigation ability.

Yeah, right. I like that idea of having a very intentional framework for how it's deployed. A lot of what we hear people concerned about is, somehow, the mathematical model getting loose in the world and taking things over. But it has to be put in that position, right, in order for it to actually have any kind of influence.

So, Brian, I'm going to turn back to you. When you're thinking about generative AI, which is the hot topic, how do you see it being deployed in audit, and what's our view of where it should be put in a position to accelerate, or be used as an enabling tool, within an audit process?

Well, generative AI is certainly exciting; I think everyone can agree on that. What it can ultimately do for really anyone, but certainly for our business, is turn everyone into a software engineer. It sort of acts as a translator between human and machine, and it does so much more than just generate content. There's a lot of human-machine interaction it can facilitate, and with the proper tooling it can significantly enhance the productivity of our people. It's sort of like having a thousand very well-rounded interns: all you really need to do is tell them, tell the model, what you need it to do and how it needs to do it, and you get incredibly meaningful output from these generative AI capabilities.

Some of the ways it is impacting our business, and the way we're approaching it, is by opening up opportunities like enterprise knowledge becoming decentralized and democratized, because this capability makes knowledge so much easier to access and to search. It really makes everyone in the business as knowledgeable as the most knowledgeable person in the business, because they have access to all of that at the edge.

It's also extracting information from documents. We mentioned the Argus module and the document API capability that we already have today; that can become even faster and able to handle even more complex information extraction, like the variety of different table formats or the infographics we're starting to see a lot of in ESG reports, for example. So, doing much more complex things than we're able to do today. And quite frankly, we're just scratching the surface. Industries, society, business at large are really just scratching the surface of what these capabilities can do. But there's already a lot that we've seen it can do, and a lot that we think it can do for our business.

So, Brian, you have a team of people who are really focused on this, and I know you get a lot of interest within our organization from individuals who maybe have a passing interest, or have read the news, or have seen something, and now want to understand more about it. What advice would you give people trying to upskill themselves when it comes to this particular topic?

Yeah, the good news is that a lot of this generative AI is incredibly user friendly. My advice would be that people go out early and learn as much as they can about prompting, which is the concept of interacting with the model itself: how a human can provide instructions to these generative AI models. Then take that to the next step, prompt engineering, which is how to do that really effectively. There's a lot of nuance in effective prompting. The business world and the technology world at large are constantly discovering new nuances of how to communicate with these different models and generative AI capabilities: things like giving the model a persona, explicitly telling it not to lie, or giving it the opportunity to critique its own responses and then create another response that takes those self-criticisms into account. There's really a lot to learn, but prompt engineering is really a great place to start.

Ryan, what's your view on how people trying to upskill themselves should think about these new technologies?

I was actually in a workshop yesterday exploring use cases, and I was impressed with some of them, like the ability to summarize and extract data. I think every day I'm learning a new use case or something that works well. I also found that some of the models are now trained on almost everything. We have other models out there specifically trained for text-to-code, but we're seeing now that some of these advanced large language models can just do that, because that code sits in the Internet training data anyway. So, some very interesting use cases.

I think we all have to be creative, and we all have to be ready to see how this could be introduced into our workflows. I'm always a little bit skeptical because of the risk, and I think it's important, and I'm always going to say this, to have some level of review. I like the fact that it can take out that first mile of writing effort, or some sort of design effort, where you can get that first draft and then take it to the place you want it. That saves time; that's effective. It's not about whether we can use this and then publish whatever it produces. It's more about where it fits in the workflow, how we can effectively bring it in and use it, and not be worried about the fact that it's an uncertain device. Right? The output is always going to be uncertain. Back to my point about whether you can validate these things, or test them, or assess them: I don't think we really have an intuition of how a certain prompt will map to an output. So we always have to be ready for a poor output, even as these models progress into the future. Having that awareness, and finding a place for it to fit in your workflow, is how I'm thinking about it as I go forward.

Yeah, thanks. And so, Brian, what are some of your experiences in actually deploying these things? What are some of the limitations you've discovered in the process? Can you talk a little bit about that? It's always nice to see the neat parlor trick to start with, but when you start to dig down, what have you discovered, and what lessons are to be learned?

Yeah. Whether it's with generative AI or traditional AI, and I think this was discussed in the previous day's AI lifecycle sessions, data really is king. We've found that when we've tried to do things with, I don't want to say invalid, but not the best data, trying to use synthesized data to solve a problem that really requires real data, that's lesson learned number one throughout this entire process, whether with the traditional capabilities or the new capabilities. Without accessible, discoverable, organized data, it becomes really difficult to develop, to test, to pilot and, quite frankly, to convince business stakeholders of the value of AI, because you can't show anyone the real thing. You don't really have a means of determining and demonstrating how effective it really is.

As I mentioned earlier, the generative AI capabilities are becoming much more readily available and really user friendly. So that hard development work, the traditional computer science and data science work, is already sort of done for an enterprise, and for us as well. You really just need to hook it up to your own data effectively, and there are a variety of different ways to do that. Step one is getting the data and the approvals for the data, securing it, and ensuring that you're not running roughshod over professional or contractual responsibilities with that data, as well as developing the capabilities to connect to it effectively. Getting the data is one aspect you certainly need to cover; it's another aspect entirely to hook some of these capabilities up to that dataset to interrogate it effectively.

The other lesson learned, which I think anybody attending this webcast who has tried to do something like this in an enterprise context can attest to, is that you really have to get everybody on board with what you're trying to do with AI. It takes a corporate village, so to speak. You don't just need data science, computer science and the IT component; you really need to have risk on board, you have to have all your legal functions on board (we talked about the different regulations that could potentially be impacting us), and you have to have quality and risk management processes to ensure that what you're building is in fact enhancing the quality of what our practitioners are doing in the actual product that goes out the door, as well as operations, in our case traditional auditors. Everybody really needs to be working together from the beginning, in parallel, not in sequence, because in this day and age, as we're all experiencing this tempest we're a part of, things are moving incredibly quickly. You can't be mired in a long-tail process; all the ducks have to be in a row, working together.

Yeah. Thanks, Brian. Ryan, what about lessons learned that you've observed?

So I think what we've seen is that the infrastructure can get quite complex. I spent a lot of my career in an industry where the models were maybe a step down, a lower amount of complexity in terms of what goes into the model, and in those cases there are some easier ways to implement and deploy them. It becomes difficult from the training perspective, the inference perspective, and change management across a pipeline, and a lot of people are using many, many applications across the pipeline to get to a final product. It becomes more complex, and it's more costly.

So I'd say a lot of the people I've spoken to are looking to place AI into a certain use case. I think the other way around is a little bit better: what are the problems we have? A lot of problems can be addressed by traditional technology. There's certainly a place for structured technology, with algorithms that follow rules, where we know the output every time. There's a really great place for AI in many use cases, but I do think we have to think about that in how we deploy.
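As a small illustration of the rule-based option mentioned here, a deterministic check whose output we know every time might look like the following; the transaction shape and the 10,000 threshold are purely hypothetical examples, not anything from the discussion.

```python
# Tiny illustration of a rule-based check whose output is known every
# time, as opposed to a probabilistic model. The transaction fields and
# the 10,000 threshold are hypothetical examples.

def rule_flag(transaction):
    """Deterministic rule: flag any transaction at or above the limit."""
    return transaction["amount"] >= 10_000

txs = [{"amount": 12_500}, {"amount": 800}]
print([rule_flag(t) for t in txs])  # [True, False]
```

Because the rule is deterministic, it can be tested exhaustively and audited directly, which is the appeal the speaker is pointing at.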

We've had a number of conversations, and we've been part of working sessions, where the question is: where do we put generative AI, or traditional AI, if you will? I think that framing will get folks into trouble, because it's more about where the problems are, what would have the biggest impact, and then what type of solution we can use for that problem. In many cases an algorithmic or rule-based approach works, and works really well. In fact, we've seen some organizations take algorithmic decisions, so rules, and build AI on top of that, which, if you have a really good AI model, effectively gets you back to the algorithm or the rules that you developed originally.

So yeah, I think we have to really think about what the use case is, especially for use cases where we don't know the output, sorry, where we don't know the ground truth, and we don't understand whether something is potentially fraudulent or not. We've seen a lot of approaches that aim to put a probability on whether a transaction is fraudulent or not without truly knowing. At the end of the day, I think a game plan is to look at a metric where you let some of that traffic in, whether it's traffic, clicks, transactions or sign-ups (in terms of identity theft, right), let them in the door, see where you have tagged correctly with that model, and track that over time for maybe a small amount. That way you can keep an eye on how well the tool is doing at picking up on the positives, the positive outcomes that you're looking for.

These are just some of the things we have a lot of conversations about, and I think they keep bubbling up. There's a lot of excitement around this right now, including from me, but I'm also a traditionalist. I think there are a lot of great places for traditional tech to come in and be neighbors with AI, though AI may not need to be the center of focus of the show.

Yeah. Ryan, there are a lot of questions in the chat around what I'm going to term safety: how do you safely use it? And there's even a question around the performance metrics that you mentioned and the trustworthy framework. What are some examples of performance metrics, or other evaluative techniques, that you could use to evaluate models?

Yeah.

So it goes back to the mission statement of that model or that system, and what it's being trained for. There are a lot of cases where it's a forecasting device, right, where we're anticipating an outcome, and it's great to go back, back-test it, and see if the outcome happened. Take fraud and identity theft, for example. People are signing up on a website, and you want to make sure their identity is accurate: what's the chance that they're going to commit fraud after they sign up and get an application through on the website? The question is, if we turn them away immediately based on the output of the AI model, we won't ever really know whether that person was going to turn out to be a legitimate user or was going to commit fraud. So the point I was trying to make before is to have a period of time where you make an investment in allowing the door to be open, right? You let everyone in, you still use the AI model to tag certain individuals or certain profiles that you think are fraud-likely, and then you see what happens and track it over time: are we getting better at this, or are we getting worse? There are many, many metrics to look at, but these are some of the ones that try to get at the predictability of the model.
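The "let them in the door" tracking just described can be sketched as a simple precision check on the admitted cohort. This is a minimal sketch under assumed data: the field names and the sample cohort are hypothetical.

```python
# Sketch of tracking how well fraud tags performed on a cohort that was
# let in the door. Field names and the sample cohort are hypothetical.

def flag_precision(cohort):
    """Share of flagged sign-ups that actually turned out to be fraud."""
    flagged = [user for user in cohort if user["flagged"]]
    if not flagged:
        return None  # nothing was flagged this period
    return sum(user["committed_fraud"] for user in flagged) / len(flagged)

# One period's admitted cohort: everyone was let in, outcomes observed later.
january = [
    {"flagged": True,  "committed_fraud": True},
    {"flagged": True,  "committed_fraud": False},
    {"flagged": False, "committed_fraud": False},
    {"flagged": True,  "committed_fraud": True},
]
print(f"flag precision: {flag_precision(january):.2f}")  # 2 of 3 flags correct
```

Computing this per period, as the speaker suggests, turns "are we getting better or worse?" into a trend you can actually plot and review.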

And then there are certainly metrics relevant for the New York City bias law, in terms of the automated decisioning tools that folks are using for screening resumes, any type of hiring decision, or even promotion decisions: disparate impact, making sure that we're not disproportionately harming certain protected groups. That's certainly a metric that organizations are being required to follow in New York, so those metrics are definitely important too.
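A disparate-impact check of the kind mentioned here is often expressed as an impact ratio: each group's selection rate relative to the most-selected group. The sketch below assumes binary selection outcomes, and the 0.8 cutoff is the common "four-fifths" rule of thumb, used as an illustrative assumption rather than a statement of the New York law's requirements.

```python
# Illustrative disparate-impact check: compare each group's selection
# rate to the highest group's rate. The 0.8 cutoff is the common
# "four-fifths" rule of thumb, an assumption here, not the law's text.

def selection_rate(outcomes):
    """Fraction of candidates selected (1) out of all candidates."""
    return sum(outcomes) / len(outcomes)

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

rates = {"group_a": selection_rate([1, 1, 0, 1]),   # 0.75
         "group_b": selection_rate([1, 0, 0, 0])}   # 0.25
for group, ratio in impact_ratios(rates).items():
    if ratio < 0.8:
        print(f"{group}: impact ratio {ratio:.2f} may indicate disparate impact")
```

A low ratio is a flag for investigation, not a verdict; the group labels and outcomes here are invented for illustration.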

Yeah, you hit on a keyword there, which is automation. A lot of what I seem to get from people concerned about AI safety is less about really peeling back the AI and more about the automation. You could put an automated rule in place that makes a critical decision in a process, that has nothing to do with AI but is just flat out wrong, and that's equally as unsafe. So I think some of the questions that should be asked around deploying models are really around automation. There's a question here about whether there should be a doomsday switch, and I think the answer is: only if you put the automation in charge of the doomsday device. There are choices to be made in the use case, and choices to be made in the way systems are deployed, and that really governs the kind of safety mechanisms in place. An AI model that you deployed, Brian, to write an accounting memo is not going to take over the world on its own. But if you did put it in charge of the nuclear fleet, then, jeez, that was a decision you made. So I think those are important considerations when we talk about technology in general.

There is one question in here I'm going to try to answer, and I'll get your thoughts on this too, guys, about jobs. I don't think anyone here is going to predict what's going to happen with employment. I would say most of us are looking at this and struggling to see even a couple of years out in terms of how this technology impacts society and the workforce. The one thing I always reflect on is that a technology step change rarely results in economic regression, right? It usually results in an economic expansion of some sort. And so, while the nature of jobs changes, you end up with probably more jobs than you had before, even though they might be in different categories. So I'd like your final wrap-up opinions on where you think all this is headed, and any kind of final thought. We'll start with you, Brian.

Sure. Yeah.

play31:46

I mean,

play31:46

I think it's just,

play31:47

You know, we talk a lot about how analogous some of these capabilities are to the advent of computers and calculators, when they first entered our industry, and how they never really fully replaced jobs. They really just expanded the role of the human and expanded the capability of the human. And that's sort of what we're anticipating. It's just that we can't quite see the full extent of where that change is taking place, especially with generative AI.

But you can also envision a future, certainly in our profession, where the amount of depth that we can provide with our audit opinions becomes deeper. It doesn't necessarily keep the same static level of reasonable assurance, per se; I could foresee that changing over time, just as it has before. The reasonable assurance of that opinion 15 years ago was probably different from what it is today. We don't know the answer to that, but you could argue that it could be different today based on all the different analytics and data that are available to us now.

So I see a continuation of that, where we're going to be able to provide our clients, our market, and the capital markets with incredible depth and incredible quality in our audits that we weren't able to deliver previously with the same amount of workforce. That's where I see it going.

And I guess, at least in my response, I would focus not on the jobs aspect, whether there are jobs available or which roles are going to change, but on what skills are needed for a successful role in the future. Even in my own career, the skills that I look for when I hire have changed over the past 16 or 17 years. So without a doubt, being able to use this technology effectively, having an understanding of how it works, knowing the pitfalls, knowing the use cases and the ways it can be effective: I think that becomes really powerful.

If you have that ability, I don't think the job market is going to change in great leaps. But I do think every technology changes what is needed out of the workforce, and that's changing daily. So we should just be ready for that: understand what the tools of the job are today, what the tools are going to be tomorrow, and how we take those bets and try to put ourselves in good positions for the next few years. It seems that that game has never changed, right? That's always been the game of the job market. So I definitely think that's something we all have to think about and be ready for.
