RCM Revenue Cycle Management Medical AI LLM Case Studies

42robotsAI
27 Aug 2024 · 18:15

Summary

TL;DR: David Hood, CEO of 42 Robots AI, discusses the transformative impact of AI, particularly in revenue cycle management for healthcare. He outlines the benefits of implementing AI, warns of common pitfalls, and emphasizes the importance of understanding AI capabilities and avoiding a one-size-fits-all approach. Hood shares insights on leveraging large language models for data processing efficiency and stresses the need for a strategic AI implementation roadmap, including assembling the right team with a deep understanding of AI integration.

Takeaways

  • 😀 David Hood, CEO of 42 Robots AI, discusses the benefits of implementing AI in revenue cycle management for healthcare organizations.
  • 🔍 The video script emphasizes the importance of understanding the capabilities of Large Language Models (LLMs) and their potential in healthcare, especially in data processing and analysis.
  • 🚀 Hood highlights a positive feedback loop created by AI implementation, where initial AI use leads to increased understanding and value, helping organizations outcompete their rivals.
  • ⏳ He mentions that modern AI tools, particularly LLMs, have significantly advanced, with GPT-4 being a turning point in 2023, and that multimodal capabilities are even more recent.
  • 💡 The script points out that while LLMs are broadly applicable, they should be used as tools called by traditional software, not as the center of the solution, to avoid common pitfalls (see the sketch after this list).
  • 🛑 Hood warns against relying solely on solutions like Microsoft Co-Pilot, as they may not leverage the full potential of LLMs and could leave significant value on the table.
  • 🔧 The transcript details case studies where AI has been used to automate complex and unstructured data processing in healthcare, significantly reducing time and costs.
  • 📉 The video stresses that the goal of AI implementation is not 100% automation but rather to improve efficiency and productivity, with an 80-90% automation rate being a realistic and valuable target.
  • 👥 It is important to assemble a team with a deep understanding of AI, including how it works, how to leverage it effectively, and how to integrate it into the organization's systems.
  • 🛠 The transcript differentiates between AI engineers who understand the practical application of LLMs and machine learning engineers who focus on the technical aspects of AI models.
  • 🔄 Hood suggests starting with small, achievable AI projects to gain traction and then iterating and building on those successes to develop a comprehensive AI implementation roadmap.
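
To make the "LLMs as tools called by traditional software" takeaway concrete, here is a minimal sketch in the spirit of the video. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for whatever provider SDK a team actually uses, and the field names are invented for the example.

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a provider API call (the video names no provider)."""
    raise NotImplementedError("wire this to your provider's SDK")


def process_claim_document(raw_text: str) -> dict:
    """Classic code owns the workflow; the LLM is just one narrow tool it calls."""
    if not raw_text.strip():
        return {"status": "rejected", "reason": "empty document"}

    # The LLM is asked to do one bounded job: extract a few fields as JSON.
    prompt = (
        "Extract patient_name, date_of_service, and payer from the text below. "
        "Respond with JSON only.\n\n" + raw_text
    )
    llm_output = call_llm(prompt)

    # Deterministic code, not the model, decides what happens next.
    try:
        fields = json.loads(llm_output)
    except json.JSONDecodeError:
        return {"status": "needs_manual_review", "reason": "unparseable LLM output"}

    missing = [k for k in ("patient_name", "date_of_service", "payer") if not fields.get(k)]
    if missing:
        return {"status": "needs_manual_review", "reason": f"missing fields: {missing}"}
    return {"status": "automated", "fields": fields}
```

The point is the inversion of control: plain software routes, validates, and persists, and the model is invoked like any other library call rather than sitting at the center of the design.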

Q & A

  • What is the main focus of the video by David Hood?

    -The video focuses on AI usage in revenue cycle management within the healthcare and medical space, highlighting the benefits and potential pitfalls of implementing AI.

  • Who is David Hood and what is his role in the video?

    -David Hood is the CEO of 42 Robots AI, and he discusses the company's expertise in helping organizations implement AI, particularly in revenue cycle management.

  • What does David Hood suggest is the starting point for implementing AI in a healthcare organization?

    -David Hood suggests understanding the capabilities of large language models (LLMs) and their benefits in revenue cycle management as a starting point for AI implementation.

  • What are the potential benefits of using AI in revenue cycle management according to the video?

    -The video suggests that AI can significantly improve efficiency, speed, and productivity in processing and analyzing large amounts of data in healthcare organizations.

  • What does David Hood warn against in terms of AI implementation?

    -He warns against relying solely on solutions like Microsoft Co-Pilot, which he says only utilize a fraction of the capabilities of large language models and may not fully leverage AI's potential.

  • What is the importance of not seeking 100% automation with AI, as mentioned in the video?

    -The video emphasizes that pursuing 100% automation is often unrealistic and inefficient; achieving 80-90% automation is usually more valuable and manageable (see the routing sketch after this Q&A section).

  • What is the significance of the 'positive feedback loop' mentioned by David Hood?

    -The positive feedback loop refers to the cycle of improvement that occurs as an organization begins to use AI, gains understanding, and builds momentum, ultimately outperforming competitors.

  • What are the key skills required for an effective AI implementation team according to the video?

    -The video outlines three key skills: understanding how AI works, knowing how to leverage AI effectively, and having the ability to integrate AI efficiently into the organization's systems.

  • Why is having a Chief AI Officer important for an organization implementing AI, as suggested in the video?

    -A Chief AI Officer is crucial for understanding the organization's systems, ensuring AI is integrated effectively, and for guiding the AI implementation strategy to avoid common pitfalls.

  • What is the role of an AI engineer in the context of the video's discussion on AI implementation?

    -An AI engineer is responsible for calling the LLM APIs, solving real-world problems, and ensuring that AI solutions are not LLM-centric but rather use LLMs as tools within a broader software framework.

  • What advice does David Hood give for building an AI implementation roadmap for a revenue cycle management company?

    -He suggests starting with assembling a team with the right skills, understanding the basics of AI, leveraging AI effectively, and integrating AI into the business systems, while also considering hiring fractional Chief AI officer services for guidance.
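
A hedged sketch of what the "80-90%, not 100%" target looks like in practice: high-confidence outputs flow through automatically, and the long tail stays with the existing manual process. The threshold, class names, and queue labels are illustrative assumptions, not details from the video.

```python
from dataclasses import dataclass


@dataclass
class ExtractionResult:
    fields: dict
    confidence: float  # 0.0-1.0, however the pipeline chooses to estimate it


# Illustrative cut-off; in reality it would be tuned against the cost of errors
# versus the cost of human review.
REVIEW_THRESHOLD = 0.85


def route(result: ExtractionResult) -> str:
    """Automate the easy majority, keep skilled staff on the hard long tail."""
    if result.confidence >= REVIEW_THRESHOLD:
        return "auto_post"           # posted to the billing system unattended
    return "manual_review_queue"     # handled exactly as it is today


# Example: most documents clear the bar; the hard 10-20% do not.
print(route(ExtractionResult({"payer": "Acme Health"}, confidence=0.93)))  # auto_post
print(route(ExtractionResult({"payer": "??"}, confidence=0.41)))           # manual_review_queue
```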

Outlines

00:00

🤖 Introduction to AI in Revenue Cycle Management

David Hood, CEO of 42 Robots AI, introduces the video's focus on AI usage in revenue cycle management within the healthcare and medical sectors. He emphasizes the significant benefits of implementing AI, particularly in data processing and analysis, which can greatly enhance efficiency and productivity. David also mentions the existence of several supplementary videos on topics like AI solution architecture and new capabilities of large language models (LLMs). He cautions about the potential pitfalls in AI implementation and suggests that there's a positive feedback loop in adopting AI, where initial successes lead to further integration and competitive advantage. The video aims to provide viewers with a basic understanding of AI's potential and a starting point for implementation.

05:00

📈 Leveraging AI for Data Processing and Analysis

The second paragraph delves into the transformative impact of AI, specifically large language models, on data processing and analysis within the medical and healthcare sectors. It highlights the underleveraged data and the costly manual labor often required for data processing. AI, and particularly modern LLMs, can drastically improve these processes by increasing speed, efficiency, and productivity. The speaker shares an example of a client case study where AI was used to process complex and unstructured data, reducing the time from months to a week for initial drafts. The video stresses that the solution should not be LLM-centric but rather use LLMs as tools within a broader software architecture. It also touches on the importance of having AI engineers who understand how to build and leverage LLMs effectively.
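
As a rough illustration of the "first draft, mostly automated" pattern described above, the sketch below pairs one narrow LLM call with deterministic checks in ordinary code. The provider (OpenAI Python SDK), model name, and metric field names are assumptions made for the example; the video does not specify any of them.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REQUIRED = ("measure_name", "numerator", "denominator", "reporting_year")


def extract_metrics(document_text: str) -> str:
    """One bounded LLM call: turn messy report text into a JSON draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; "any current frontier model" is the idea
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "Extract outcome metrics from the document. "
                           "Reply with JSON only, using keys: " + ", ".join(REQUIRED),
            },
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content


def draft_record(document_text: str) -> dict:
    """Return a draft plus flags so a human reviews the hard 10-20%, not 100%."""
    try:
        fields = json.loads(extract_metrics(document_text))
    except json.JSONDecodeError:
        return {"draft": {}, "flags": ["output was not valid JSON"]}

    flags = [name for name in REQUIRED if not fields.get(name)]
    # Cheap deterministic sanity checks catch what the model gets wrong.
    try:
        if int(fields.get("numerator", 0)) > int(fields.get("denominator", 0)):
            flags.append("numerator exceeds denominator")
    except (TypeError, ValueError):
        flags.append("counts are not integers")
    return {"draft": fields, "flags": flags}
```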

10:01

📑 Case Studies in AI Implementation for Revenue Cycle Management

Paragraph three presents further case studies illustrating the application of AI in revenue cycle management. It discusses the challenges of processing handwritten, unstructured, and variant healthcare faxes and how AI can be used to automate a significant portion of this manual work. The speaker clarifies that the goal is not 100% automation but rather achieving a substantial improvement in efficiency. They explain that attempting to automate everything can lead to increased complexity and diminishing returns, suggesting that focusing on 80-90% automation is more practical and valuable. The paragraph also emphasizes that the solution should be built around software that uses LLMs as tools, rather than the other way around, to avoid common pitfalls in AI implementation.
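
The date-format example from the transcript is a small illustration of this: deterministic code handles the common variants cheaply, and only the leftovers would ever need an LLM pass or a human. A hedged sketch, where the format list and fallback behavior are assumptions rather than the client's actual rules:

```python
from datetime import date, datetime

# Common layouts seen on faxes; extend as new variants show up.
KNOWN_FORMATS = ("%m/%d/%Y", "%m-%d-%Y", "%Y-%m-%d", "%b %d, %Y", "%B %d %Y")


def normalize_date(raw: str) -> date | None:
    """Cheap deterministic pass first; return None for the long tail."""
    text = raw.strip().rstrip(".")
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    return None  # hand off to an LLM pass or a human reviewer


samples = ["03/07/2024", "Mar 7, 2024", "2024-03-07", "seventh of March '24"]
for s in samples:
    print(s, "->", normalize_date(s))  # the last one stays unresolved
```

A diagnosis field is the same idea with far more variance, which is exactly where chasing the final few percentage points of automation becomes exponentially harder.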

15:01

🛠 Building an AI Implementation Roadmap for Revenue Cycle Management

The final paragraph outlines the steps for building an AI implementation roadmap within a revenue cycle management company. It identifies the need for a team with three key skill sets: understanding how AI works, knowing how to leverage AI effectively, and having a systems understanding of how AI integrates into business processes. The speaker distinguishes between AI engineers who build AI tools and those who know how to use them effectively. They suggest starting with a fractional Chief AI officer to guide the hiring process and ensure the right AI talent is brought on board. The paragraph concludes by encouraging viewers to contact 42 Robots AI for assistance in developing a tailored AI implementation strategy, emphasizing the importance of acting quickly to gain a competitive edge.
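
One of the "how LLMs work" key ideas named in the transcript is that models turn the semantic meaning of words into math (vectors). A toy illustration with made-up three-dimensional vectors; real ones come from an embedding model and have hundreds or thousands of dimensions:

```python
import math

# Made-up toy vectors; a real embedding model would produce these.
VECTORS = {
    "invoice": (0.9, 0.1, 0.3),
    "claim":   (0.8, 0.2, 0.4),
    "giraffe": (0.1, 0.9, 0.7),
}


def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Semantically close words end up geometrically close.
print(cosine_similarity(VECTORS["invoice"], VECTORS["claim"]))    # high
print(cosine_similarity(VECTORS["invoice"], VECTORS["giraffe"]))  # lower
```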

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the context of the video, AI is central to improving revenue cycle management in healthcare organizations. The script discusses how AI, especially through large language models (LLMs), can be leveraged to process vast amounts of data more efficiently and cost-effectively than traditional methods.

💡Revenue Cycle Management

Revenue Cycle Management (RCM) is the process of managing the administrative and clinical functions of a healthcare practice. In the video, RCM is highlighted as an area where AI can significantly improve efficiency and reduce costs. The script mentions how AI can be implemented in RCM to automate and streamline processes, such as processing complex and unstructured data in the medical field.

💡Large Language Models (LLMs)

Large Language Models, or LLMs, are AI systems that can understand and generate human-like text based on the input they receive. The video emphasizes the broad applicability of modern LLMs in business processes, particularly in healthcare, where they can be used to analyze and process large volumes of data, leading to improved decision-making and operational efficiency.
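
Two other "key ideas" the transcript calls out are that LLMs do next-token prediction and that there is randomness in the output. A toy sketch of that sampling step, with an invented probability table standing in for a real model's vocabulary-wide distribution:

```python
import random

# Invented probabilities for the next token after "The claim was" --
# a real model computes a distribution like this over its whole vocabulary.
NEXT_TOKEN_PROBS = {"denied": 0.45, "approved": 0.35, "resubmitted": 0.15, "purple": 0.05}


def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Temperature below 1 sharpens the distribution; higher values add more randomness."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]


random.seed(0)
print([sample_next_token(NEXT_TOKEN_PROBS, temperature=0.7) for _ in range(5)])
```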

💡AI Implementation Roadmap

An AI Implementation Roadmap is a strategic plan that outlines how an organization will adopt and integrate AI technologies into its operations. The script suggests that having a clear roadmap is crucial for organizations looking to implement AI in their revenue cycle management. It mentions the importance of understanding what's possible with AI and having a basic starting point for implementation.

💡Chief AI Officer

A Chief AI Officer is a high-level executive responsible for overseeing an organization's AI strategy and initiatives. The video script highlights the importance of this role in understanding and integrating AI into business systems effectively. It suggests that a Chief AI Officer should have a deep understanding of AI, experience in leveraging AI technologies, and a broad systems thinking approach to AI integration.

💡Automation

Automation refers to the use of technology to perform tasks without human intervention. In the video, automation is discussed as a key benefit of implementing AI in healthcare revenue cycle management. The script provides examples of how AI can automate the processing of complex data, reducing the need for manual labor and increasing efficiency.

💡Data Processing

Data Processing is the manipulation of data by a computer to produce a desired output. The video emphasizes the importance of data processing in the context of AI, particularly in healthcare, where large amounts of unstructured data need to be processed. The script mentions how AI can significantly speed up and improve the productivity of data processing tasks.

💡AI Engineering

AI Engineering involves the design, development, and maintenance of AI systems. The video script discusses the distinction between AI engineers who understand how to build AI models and those who know how to effectively leverage AI technologies. It stresses the importance of having AI engineers who can build solutions that are not solely model-centric but use AI as a tool within broader software systems.

💡Microsoft Co-pilot

Microsoft Co-pilot is an AI-powered assistant designed to enhance productivity by integrating with various Microsoft applications. The video script warns against relying solely on solutions like Microsoft Co-pilot for AI implementation, as it suggests that doing so may not fully leverage the capabilities of large language models and could leave significant AI potential untapped.

💡Custom AI Solutions

Custom AI Solutions refer to AI systems that are specifically tailored to meet the unique needs and requirements of an organization. The video script discusses the importance of building custom AI solutions rather than using generic AI architectures. It emphasizes the need for AI solutions that are not model-centric but are built with a deep understanding of the organization's systems and processes.

💡Fractional Chief AI Officer

A Fractional Chief AI Officer is a part-time executive who provides strategic guidance on AI initiatives without being a full-time employee. The video script suggests that hiring a fractional Chief AI Officer can be a cost-effective way for organizations to quickly gain the expertise needed to develop and implement an AI strategy, especially when looking to rapidly adopt AI technologies.

Highlights

David Hood, CEO of 42 Robots AI, discusses AI implementation in revenue cycle management for healthcare organizations.

Emphasizes the significant benefits of AI in medical and healthcare organizations, particularly in revenue cycle management.

The importance of understanding the capabilities of Large Language Models (LLMs) and their potential in business.

Warns of potential pitfalls and 'landmines' in AI implementation and how to avoid them while still capturing the most value.

Describes a positive feedback loop created by AI, where initial use builds organizational 'muscle' and understanding.

Advises on the competitive advantage gained by early and aggressive AI implementation in business processes.

The transformative impact of modern AI capabilities, especially those emerging within the last year.

Caution against relying solely on Microsoft Co-Pilot or similar tools, as they may not leverage the full potential of LLMs.

Details the enhanced data processing capabilities of AI, particularly in unstructured or variant data scenarios.

Case study: Accelerating healthcare outcome and process metrics processing from months to weeks using AI.

The inefficiency of manual data processing and how AI can offer a 100x to 10,000x improvement in productivity.

Differentiating between machine learning engineers who understand how LLMs are built and AI engineers who effectively leverage them in practical applications.

The misconception of aiming for 100% automation with LLMs and the practicality of targeting 80-90% automation instead.

Highlighting the importance of building an AI implementation roadmap for a revenue cycle management company.

The necessity of assembling a team with the right skills to understand, leverage, and integrate AI within an organization.

The role of a Chief AI Officer in understanding the organization's systems and the impact of AI integration.

Encourages contacting 42 Robots AI for assistance in building an AI implementation roadmap and gaining a competitive edge.

Transcripts

00:00

Hi, this video is about AI usage case studies with revenue cycle management companies in the healthcare and medical space. My name is David Hood, I'm CEO of 42 Robots AI, and we help organizations implement AI, especially in revenue cycle management, which is where we have a lot of experience in the medical space. There are links to several videos below that are all relevant to and supplement this content: the Chief AI Officer video, bad AI solution architecture, custom AI solutions, and LLM new capabilities. I recommend you check those out before or after you watch this video.

First of all, there are real, significant benefits today to implementing AI at any medical or healthcare organization, especially in revenue cycle management, and I'll explain that in a second based on the capabilities of LLMs, some of which don't really get talked about that much. But there are definitely some landmines in navigating this that you want to keep in mind, and I'll sprinkle those throughout; that should help you avoid some of the biggest pitfalls and move forward with the things that add the most value. I'd say there's a positive feedback loop here, because you can't expect that three months from now you'll be using AI everywhere and it will be perfect at everything, and I don't think anybody reasonably expects that. But as you start to use AI as an organization, you build some of that muscle, you gain understanding, you get some political buy-in, and it starts to create a positive feedback loop that adds more and more value to the organization, which helps you beat out your competitors. A lot of the latest capabilities, some of which weren't even available until within the last year, are not being used by most companies. At the same time, there are plenty of companies that understand this needs to be implemented, and if your competitors do some of the things we're going to talk about in this video before you do, they will leave you in the dust. The opposite is also true: if you get to this sooner and implement these AI solutions aggressively, you'll start to get ahead of them, and it will be hard for them to even see what you're doing. So part of the objective of this video is to help you understand what's possible, give you at least a basic starting point for implementing AI at your revenue cycle management company, put you in a better position to do that, and give you the basic seed for building a roadmap.

02:39

So let's talk about the new capabilities that have come out recently because of modern large language models. First of all, I'd say the clock really doesn't start for the modern tools until GPT-4 came out, which I believe was April of 2023. Some people might say, "I've been an AI engineer for 10 years." That doesn't mean a whole lot, because prior to GPT-4, or GPT-3 if you really want, the tools were just different. It's like saying, "I've driven race cars, therefore I'm an expert at flying planes." It's about that much of a difference; there's certainly some overlap in skills, but it's very different. Additionally, multimodal capability is even more recent, meaning the large language model can actually look at images and understand them, which is useful in some use cases. The capabilities of large language models are super broad. Modern LLMs are so broadly applicable that just about every business, and just about every business process, can gain value from them. A caution, though: if Microsoft Copilot or ChatGPT Enterprise or whatever is your solution, and I've heard a lot of people say, "Our solution is we got Microsoft Copilot, we're doing AI," no. You are leaving 99-plus percent of the capabilities on the table if you're doing that. It is not a solution for actually leveraging LLMs. It is a solution, and it gets some of the value, but it doesn't get the majority of the value. One of the key things that comes out of all this is that data processing has been super leveled up. Good AI engineers who have experience processing large amounts of unstructured and/or widely variant data can do a whole lot to speed things up, make them less expensive, and make people more productive wherever there's lots of data that needs to be processed, which is huge in revenue cycle management and medical in general.

04:51

So let's go a little bit deeper into that. Really, you have two major areas that are two sides of the same coin: there are tons of data that are either underleveraged, meaning you have valuable data you could do things with but aren't, or very expensively leveraged, which often means very expensive manual labor processing the data. In both of these cases, AI, specifically large language models, the modern frontier models, can dramatically improve these processes. In some cases, especially when it comes to data processing and data analysis (and data processing means a bunch of different things), you can get 100x to 10,000x the value, productivity, efficiency, and speed that you could get from a human, even a very experienced data person. This is not a knock on humans; it's just something large language models are really, really good at, especially when you have an AI engineer who knows how to build these correctly. Again, if you're using something like Microsoft Copilot, there's a little bit of value you can get from this, but you're missing most of it. Which brings me to real AI engineering. I think there's a lot of hand-waving going on with regard to AI engineering: a lot of people say they're AI engineers when they have no real-world or practical experience, or they think they're doing AI engineering but they're building with a bad AI architecture (again, check out that video), or they've simply slapped "AI engineering" on their resume. We can help filter through that nonsense, and I'll talk about that in a minute.

06:31

So let's get into some of the case studies and a little bit of detail about what we were doing. For one client we did healthcare outcome and process metrics. This is a revenue cycle management SaaS company. The data was very large, complex, and unstructured, and even hard for humans: some of the data they gave us that was supposed to be good, we discovered the humans had actually done some of it wrong too, so it was not easy to process. The data would come out on a yearly basis, and their customers needed them to process it and load it into their SaaS, which usually took months. Sometimes there were a few key people who, if they were out, could add an additional few months. These are very skilled, highly paid, full-time salaried people with lots of experience in this specific work, so they're very expensive. With our solution we were able to make this months faster: instead of taking two, three, or four months, it can take about a week for them to get a first-version draft, to where it's basically about 80% automated, maybe as much as 90%, but 80% is the minimum bar. This dramatically improves things for them because they get the drafts, and it's highly accurate for most of the types of inputs they have. There are a few of the more complex, longer-tail inputs that it doesn't handle well, but that's fine; you're not really looking to fully automate, and I'll explain in a second, with the second case study, why this is actually good. Wanting to fully automate, to 100% automate things with LLMs, is not really a reasonable goal in most scenarios.

08:07

And I want to point out here, this is super critical: the solution is not LLM-centric. That might sound weird: "But you're using LLMs, they're key to the solution." Yes, but the solution itself does not put the LLM at the center. The model is not the solution; the solution is not the model. Again, if you watch the bad architecture video or the one on how to build custom AI solutions, you'll understand this on a deeper level. Long story short, the LLMs need to be used as tools by classic code, basically software, versus the other way around, where most people are building things LLM-centric: they have a model and the model basically does everything, it picks things, it uses tools, and everything tries to get shoved into that model. That is going to give you a lot of trouble, and you will really struggle to solve more complex problems, especially on a very customized basis and when trying to automate to a very high degree.

If you're liking this video, please give it a like, and if you have any questions, thoughts, or your own experiences, please feel free to leave a comment below as well.

The next case study is one where the inputs are handwritten, unstructured, and variant healthcare faxes, again for another revenue cycle management SaaS company. Sometimes the writing was unclear, or even if it wasn't handwritten, the typing might be askew, or the structure of the document was really different: sometimes a field goes over here, sometimes over there, and there was not a whole lot of consistency, because these come from a wide variety of sources. And not only are the formatting and structure inconsistent, but the way the input is written can be different. A simple example would be the date: there are a bunch of different ways to write a date, and if you're trying to write a script for that, there are a bunch of different variants to handle. A more complex one might be a diagnosis: there are many different ways to write a diagnosis, with a lot more variance in how that data is labeled and in what the actual field, category, or content of that field looks like.

10:25

I want to point out again that the objective here is not 100% automation. This is work that is currently getting 100% manually processed, so imagine putting it through a system where only 50% of the documents are automated: that's still a huge improvement, because it means that for 50% of the documents you don't need manual input. A bigger point here, and I think I covered this in good detail in the custom solutions video, is that you shouldn't be looking for 100% automation solutions in most scenarios. The reason, going back to LLMs being so broadly applicable, is that once you get to around 80-90%, every additional percentage point becomes exponentially harder. Going from 90% to 95% is probably harder than going from 0 to 50%, or 0 to 80%, and the same again going from 95% to 97%, and so on. Part of this has to do with LLM-centric builds, where this is almost impossible in a lot of scenarios, but even with a really well-structured solution, as you try to build in more and more of the edge cases and challenging inputs, it can actually degrade some of the more basic, fundamental things the system does. So you have to build many more exceptions, the complexity of the solution grows, and as the complexity grows the whole thing becomes exponentially harder, as with software in general. This isn't true for every case, but in general it holds. Another way to put this, from an efficiency standpoint: in the same amount of time, you could probably build ten 80-90% solutions rather than one 99% solution. Which has more value? Almost always the first. Now, there might be scenarios where a 99%-plus solution does add more value, like a key process where automating it is just huge for the company; maybe that's a case where it's worth going for that really high bar, but I think that's the exception and not the rule, at least with the clients we've talked to.

12:37

I'll also briefly give you a general idea of how to build your AI implementation roadmap as a revenue cycle management company. If you really want us to help you with this, because there's obviously more nuance than I can give in this quick video, please contact us: give us a call or click the link below and we'll help you build a much better AI implementation roadmap. First, you need to assemble the team, and there are three main skills you need: how AI works, how AI can be leveraged, and how to integrate AI efficiently into your organization. Let's cover each of these individually.

How AI works: most people think this means a machine learning engineer, and that is true, but you don't have to be a machine learning engineer to get the fundamentals of how AI works down; a good AI engineer has this. It isn't necessarily understanding all the ins and outs of how a large language model works at a fundamental computer science level; it's more about understanding the basics and a handful of the key ideas. For example, understanding how large language models turn the semantic meaning of words into math is a key idea, so somebody with some math background, like linear algebra and linear programming, can be effective at understanding this. At our company, some of our AI engineers have formal machine learning experience and some don't, and some of our best ones don't, but they understand that it's next-token prediction, that there's randomness involved, and the other key ideas.

A lot of people conflate skills one and two; they think, "How the LLM works is the same thing, so a machine learning engineer understands how to leverage LLMs effectively." Nope. Let's actually call these how LLMs work and how LLMs can be effectively leveraged. This is a rough metaphor, but it works generally: there are two different skill sets between the person who drives the car and the person who builds the car. You can maybe have somebody who understands both really well, but they are two different skills. Think of the LLM as a tool, and compare the person who knows how to use these tools with the person who knows how to build them: two different skill sets. Unfortunately these get conflated a lot, and pretty much all the resources and money in the AI industry are going toward item one, under the assumption that it's all you really need, and I'm telling you it's not. Item two could actually end up being more important, especially if LLMs become commoditized, and it's also where there's a big underserved population. This is where an AI engineer really should shine versus a machine learning engineer; they are two different skill sets. An AI engineer needs experience actually calling the LLM APIs and solving real-world problems, probably with Python (there are other languages, but usually it's Python), in a way that is not LLM-centric.

16:01

And then the third item is a little harder and more challenging, and some people might have part of it. This is more of a systems understanding: broadly, how are these LLMs going to affect your business systems as a whole, but also an understanding of your specific business systems. This third item is where a Chief AI Officer comes into play. A Chief AI Officer really should have skills one and two down pretty well; maybe they're a former AI engineer who has done AI engineering work, and some amount of systems thinking is really helpful. Or, if they don't understand your organization when they come in, part of their job is to really understand it: if I'm going to be doing something over here, how does it affect this other part of the business, and vice versa. So this is a really important position; if you don't believe me, watch the Chief AI Officer video, and I'll put a link to it below. It is super important for all sorts of reasons. If you need to get this up and running really quickly, consider hiring our fractional Chief AI Officer services. This will really help you get AI engineers hired, because you don't want to just go out and start hiring AI engineers: if you don't have a Chief AI Officer, if you have nobody who actually has AI engineering experience, how will you know how to tell the difference between the posers and the real deal? You won't. And these people are very expensive, because there aren't very many AI engineers, and there aren't very many people who have all three of these skills; at our organization we generally do, in a variety of capacities. Once you have a Chief AI Officer, or a fractional one, you hire an AI engineer, or you outsource, which we can also provide, and then you try and try and try, because the first few times maybe it won't work, or you go for something with super low-hanging fruit just to get a quick win and some buy-in, and then you iterate from there and start building the organizational muscle. Okay, if you're still with me, give us a call or click the link below, and we can definitely help you navigate through this and get it going as soon as possible so you can start getting value from LLMs at your organization before your competitors do. Thank you very much, have a great day. Bye.

Related Tags

AI Implementation, Healthcare Tech, Revenue Cycle, Data Processing, Automation, AI Engineering, LLMs, Medical Industry, Tech Innovation, Efficiency