Improving Patient Support with Conversational AI
Summary
TL;DR: In this episode of CareTalk, Israel Krush, co-founder of Hyro, discusses the potential of conversational AI in healthcare. He addresses the challenges of deploying AI systems due to high error costs and regulatory hurdles. Krush emphasizes the importance of responsible AI, focusing on explainability, control, and compliance. The conversation explores AI's role in personalizing care, improving patient engagement, and the ethical considerations of AI-generated 'hallucinations' in healthcare.
Takeaways
- 🧩 Israel Krush, co-founder of Hyro, discusses the potential of conversational AI in revolutionizing human-computer interaction and its challenges in the healthcare sector.
- 🗣️ The adoption of chatbots and voice AI systems in healthcare is still limited due to the difficulty of deployment and maintenance, despite the growing interest in these technologies.
- 🔮 Large language models, like those behind ChatGPT, are powerful tools that can generate human-like responses but are still limited by their lack of deep reasoning and understanding.
- 🤖 The concept of 'responsible AI' in healthcare involves ensuring the AI is explainable, controllable, and compliant, particularly important given the high stakes and regulatory environment.
- 🚫 Hyro aims to simplify the deployment of AI in healthcare by using knowledge graphs to provide a controlled and safe data source for AI to draw upon, reducing the risk of errors.
- 🛑 The cost of mistakes in healthcare is significantly higher than in other industries, so AI must be deployed with caution to avoid legal and practical issues.
- 📚 Knowledge graphs provide a structured and vetted data source that can work in tandem with AI to ensure accurate and safe responses to patient inquiries.
- 💡 AI has the potential to democratize access to healthcare expertise by enabling remote consultation and leveraging large databases for coherent answers.
- 🛠️ While AI can create efficiencies and improve access to care, there are concerns about job displacement as technology advances, highlighting the need for adaptation and training.
- 🔑 The pace of AI development is rapid, and there is a question of whether humans can adapt quickly enough to the changes it brings to various sectors, including healthcare.
- 🌐 The future of AI in healthcare is promising but uncertain, with the potential for breakthroughs in areas not yet fully explored or understood.
Q & A
What is the primary mission of Hyro, the company co-founded by Israel Krush?
-Hyro's mission is to revolutionize conversational AI, particularly in the healthcare sector, aiming to solve patient frustrations in interacting with the healthcare system.
What are the two hypotheses that Hyro was founded upon according to Israel Krush?
-The two hypotheses are that natural language interfaces, such as chatbots and voice AI systems, will become the dominant interfaces for human-computer interaction, and that large enterprises, especially in the healthcare ecosystem, will find it very hard to adopt, deploy, and maintain these systems.
What is the significance of the term 'responsible AI' in the context of healthcare and chatbots?
-Responsible AI, particularly in healthcare, refers to the practice of ensuring that AI systems are explainable, controllable, and compliant with regulations, which is crucial given the high cost of mistakes in this industry.
How does Israel Krush define 'hallucinations' in the context of AI and large language models?
-In the context of AI, 'hallucinations' refer to the instances where AI systems provide information or responses that are inaccurate or false, such as scheduling an appointment with a non-existent physician, without realizing the inaccuracy of the information.
What is the role of a knowledge graph in ensuring responsible AI in healthcare according to the transcript?
-A knowledge graph serves as a vetted and managed data structure that works in tandem with large language models to provide accurate and reliable information. It helps to prevent AI from generating false or misleading information by limiting its data search to the curated knowledge graph.
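As a rough illustration of this idea (not Hyro's actual implementation — the data, field names, and `find_physicians` helper are all hypothetical), a lookup constrained to a curated physician graph might be sketched in Python like this:

```python
# Minimal sketch of a vetted physician knowledge graph (hypothetical data).
# The assistant answers only from these curated facts, never from open-ended
# model generation, so it cannot invent a physician that doesn't exist.

PHYSICIANS = {
    "dr_rivera": {
        "name": "Dr. Rivera",
        "specialty": "cardiology",
        "languages": {"English", "Spanish"},
        "insurance": {"Aetna", "Cigna"},
        "location": "Upper East Side",
    },
    "dr_chen": {
        "name": "Dr. Chen",
        "specialty": "pediatrics",
        "languages": {"English"},
        "insurance": {"Aetna"},
        "location": "Midtown",
    },
}

def find_physicians(specialty=None, language=None, insurance=None, location=None):
    """Return only physicians whose vetted attributes match every constraint."""
    results = []
    for record in PHYSICIANS.values():
        if specialty and record["specialty"] != specialty:
            continue
        if language and language not in record["languages"]:
            continue
        if insurance and insurance not in record["insurance"]:
            continue
        if location and record["location"] != location:
            continue
        results.append(record["name"])
    return results
```

A query like `find_physicians(specialty="cardiology", language="Spanish", insurance="Aetna", location="Upper East Side")` can only ever return entries that exist in the vetted structure — the "guardrail" Krush describes.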
What are the challenges of deploying chatbots and voice AI systems in healthcare as discussed in the transcript?
-The challenges include the high cost of mistakes due to the potential for serious consequences, the difficulty of maintaining accuracy and reliability in a highly regulated industry, and the need for responsible AI practices such as explainability, control, and compliance.
How does the transcript suggest that AI can help in personalizing care and improving patient engagement?
-The transcript suggests that AI, through chatbots and voice assistants, can provide 24/7 availability, consistent tone, and even empathy, which can lead to better patient engagement. It can also help in navigating complex healthcare systems and provide personalized responses.
What is the potential of large language models in healthcare beyond administrative tasks as discussed in the transcript?
-Large language models have the potential to assist in creative thinking, brainstorming, and generating novel solutions to complex problems in healthcare, possibly even contributing to breakthroughs in treatment and care.
What is the concern raised by some nurses and other healthcare workers regarding the introduction of AI in their field?
-The concern is that the introduction of AI might eliminate jobs and disrupt the workforce. There is also a question about whether humans can adapt quickly enough to the rapid pace of AI advancements in the healthcare sector.
How does the transcript address the issue of AI potentially replacing human jobs in healthcare?
-The transcript acknowledges the concern but also points out that while AI might eliminate some jobs, it also has the potential to create new ones. The key challenge is whether the workforce can adapt to these changes at the pace AI is advancing.
What is the potential impact of AI on the future of human-computer interaction as discussed in the transcript?
-The transcript suggests that AI, particularly through natural language interfaces like chatbots and voice AI systems, will become the dominant mode of human-computer interaction, making technology more accessible and understandable in human language.
Outlines
🤖 Introduction to Hyro's Conversational AI in Healthcare
The video script introduces the guest, Israel Krush, co-founder of Hyro, a company aiming to revolutionize conversational AI. The hosts, David Williams and John Driscoll, set the stage for a discussion on the potential of AI chatbots in healthcare, specifically addressing patient frustrations within the system. The conversation highlights the rapid evolution of natural language interfaces and the challenges large enterprises face in adopting and maintaining these systems. Israel shares his insights on the current state of chatbots and voice AI systems, emphasizing the difficulty of deployment and maintenance despite significant media attention and public interest.
🔍 The Role of AI in Enhancing Patient Engagement
This paragraph delves into the capabilities and limitations of AI chatbots and voice assistants in healthcare. The hosts and guest discuss the potential for these technologies to provide empathetic, 24/7 support without fatigue. The conversation contrasts the cost of mistakes in healthcare with other industries, underscoring the high stakes involved in medical interactions. Israel addresses the challenges of engaging with patients on clinical issues, emphasizing the importance of language and knowledge in healthcare applications of AI.
🧠 Understanding Large Language Models and Responsible AI
The discussion shifts to the technical aspects of large language models (LLMs) and their role in conversational AI. Israel explains that these models are statistical tools that predict the next most probable word, which can lead to 'hallucinations' or factual inaccuracies. The concept of 'responsible AI' is introduced, focusing on three pillars: explainability, control, and compliance. The conversation explores the importance of these elements in regulated industries like healthcare, where mistakes can have severe consequences.
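The "next most probable word" behavior described above can be illustrated with a toy bigram model — a deliberately oversimplified stand-in for a real LLM; the corpus and the `next_word` helper are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram model: a drastically simplified stand-in for an LLM.
# It picks the statistically most frequent next word, with no notion
# of truth -- which is why fluent output can still be factually wrong.

corpus = (
    "the patient called the clinic "
    "the patient scheduled an appointment "
    "the clinic confirmed the appointment"
).split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Return the most probable next word seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None
```

Chaining `next_word` calls produces fluent-sounding sequences purely from frequency statistics — the same mechanism, at vastly larger scale, behind both coherent LLM answers and confident hallucinations.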
🛡️ Implementing Guardrails in AI for Healthcare
The paragraph discusses the implementation of safeguards in AI systems to prevent errors and ensure compliance with healthcare regulations. Israel describes the use of a knowledge graph to provide accurate and vetted information to AI systems, preventing them from generating incorrect data. The conversation highlights the importance of controlling AI responses in sensitive situations, such as emergencies, to ensure patient safety and accuracy.
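The emergency safeguard described above can be sketched as simple routing logic; the trigger terms, the canned message, and the `generate_reply` placeholder are hypothetical, not Hyro's actual rules:

```python
# Sketch of a "control" guardrail: sensitive intents bypass generation
# entirely and always get the same vetted response (illustrative rules only).

EMERGENCY_TERMS = {"chest pain", "can't breathe", "overdose", "suicidal"}

EMERGENCY_RESPONSE = (
    "It looks like this may be an emergency. "
    "Please call 911 or go to the nearest ER."
)

def generate_reply(message):
    # Placeholder for a real LLM call; out of scope for this sketch.
    return f"[generated answer to: {message}]"

def respond(message):
    """Route to a fixed, vetted response when an emergency is detected."""
    text = message.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return EMERGENCY_RESPONSE  # never let the model improvise here
    return generate_reply(message)
```

The point of the design is that for sensitive intents the model is never asked to generate at all — the system returns the same exact vetted response every time, as Krush describes.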
🌐 The Future of AI in Healthcare and Knowledge Dissemination
The final paragraph of the script contemplates the broader implications of AI in healthcare, particularly in knowledge dissemination and access to care. The discussion suggests that AI has the potential to make expert knowledge more widely available, improving healthcare outcomes for patients globally. However, there is also a note of caution about the rapid pace of AI development and whether the human workforce can adapt to these changes without significant job displacement.
👋 Closing Thoughts on AI's Impact and Adaptation
In the closing segment, the hosts and guest reflect on the potential positive and negative impacts of AI. While there is optimism about AI's role in enhancing healthcare and other sectors, there is also concern about the speed of AI's development and its potential to disrupt jobs. The conversation concludes with a question about whether AI can replace podcasts, leaving the audience with a note of intrigue about the future of AI in various forms of media and communication.
Keywords
💡Conversational AI
💡Hyro
💡Healthcare System
💡Chatbots
💡Natural Language Interfaces
💡Large Language Models
💡Mistake Cost
💡Empathy
💡Knowledge Graph
💡Responsible AI
💡Regulated Industry
💡Genetic Algorithms
💡Job Displacement
Highlights
Israel Krush, co-founder of Hyro, discusses the potential of AI chatbots to solve patient frustrations in the healthcare system.
The evolution of chatbots and voice assistants as the future of human-computer interaction since 2018.
Hyro's mission to revolutionize conversational AI in the healthcare ecosystem.
The challenges large enterprises face in adopting and maintaining chatbots and voice AI systems.
The importance of responsible AI, especially in regulated industries like healthcare.
Three main pillars of responsible conversational AI: explainability, control, and compliance.
The concept of 'hallucinations' in AI and the risks of incorrect information being provided to patients.
Strategies to control AI-generated content to prevent misinformation, such as using a knowledge graph.
The potential of AI to improve patient engagement and personalize care.
Large language models as statistical tools lacking deep reasoning but capable of generating coherent responses.
The role of AI in administrative tasks within healthcare and the cost of mistakes.
AI's ability to provide empathy and 24/7 availability in contrast to human limitations.
The comparison between AI's 'hallucinations' and human behavior, particularly in children.
The potential for AI to enable access to specialized medical knowledge and expertise globally.
The debate on AI's impact on jobs, particularly in nursing, and the need for human adaptation to technological shifts.
Israel Krush's optimistic view on AI creating new job opportunities despite eliminating some existing ones.
The rapid pace of AI development and the challenge for humanity to adapt to these changes.
Transcripts
Today's guest is a lifelong lover and solver of puzzles and problems.
He loves the complexities of data and real world impact, but can
the AI chat bots from his company, Hyro, solve patient frustrations in
interacting with the healthcare system?
That's a tall order.
Welcome to CareTalk, America's home for incisive debate about
healthcare business and policy.
I'm David Williams, president of Health Business Group.
And I'm John Driscoll, senior advisor at Walgreens.
Well today's guest is Israel Krush, co founder of Hyro, whose mission is
to revolutionize conversational AI.
Meanwhile, join the fast growing CareTalk community on LinkedIn, where you can
dig deep into healthcare business and policy topics, access CareTalk
content, and interact with the hosts and our guests or their chatbots.
And please be sure to leave us a rating on Apple or Spotify while you're at it.
So, David, why, why are we talking about chatbots and how
does it relate to healthcare?
You know, I was intrigued when I was reading about Israel's company.
We know he's a scholar because he founded a company with hypotheses,
uh, which is something we would love to do.
And it was really about chatbots and voice assistants being the future
of human-computer interaction, and the traditional ways of doing it
were no good.
That was back in 2018.
We're in 2024 now.
So I want to know, like, how's it panned out?
Yeah, absolutely.
Uh, so you're right.
I think that when we started the company, we had two hypotheses.
One is that, uh, natural language interfaces, so chatbots and
voice AI systems, will be the dominant interfaces when it comes
to human-computer interaction.
I think that today, you know, with the buzz around ChatGPT and
large language models, it's a matter of time. Of course we're
going to talk with technology.
Of course we're going to type into technology, and the technology,
computers, uh, phones, will be able to understand us in our own human language.
So this hypothesis today, I think, is less of a hypothesis.
This is a reality.
With regards to large enterprises, um, in the healthcare ecosystem
but generally speaking, adopting chatbots and voice AI systems and
being able to deploy and maintain them,
that's the second hypothesis.
We believed it's going to be very hard for them to do so.
And I think you see that even post the buzz. We're, like, what,
18 months now after, uh, call it the media attention that came
with the release of ChatGPT,
and you still see very few production-ready, patient-facing or
end-user-facing chatbots and voice assistants that are available.
So the hurdles to actually deploy and maintain those are still very high.
And that's exactly what we try to simplify and help with in this transition.
So Israel, if you think about the consumer approach, whether it's
claims or answering basic questions, chatbots are not only increasingly
more pervasive; in some cases they're more sympathetic than the actual
people on the phones.
And yet there have been real hurdles and roadblocks,
and sort of, uh, a high level of concern about engaging directly with
patients around clinical issues.
What's the big difference?
Language is language.
Knowledge is knowledge.
What's different?
So, yeah, absolutely.
To your first comment, John, you know, chatbots or voice AI
assistants, they never get tired.
They never get frustrated.
They can speak with you 24 7, whenever you call them, and they'll have the
same tone, and they can be empathetic these days also, with some new tech.
Now, to your second point, John, the question, even before getting into clinical use cases,
let's just talk about administrative use cases within healthcare.
I think one of the main things to consider here is the cost of mistake.
So, um, let's think about the chatbot that helps you buy, I
don't know, clothes, uh, t shirts.
If you requested the blue...
David buys a lot of t-shirts.
Okay, you see, so here's like a user for you.
So if you request a blue t-shirt and you get the purple one, you might be upset,
but it's not the end of the world.
The cost of mistake is, is not that big now.
Oh, so Israel, by the way, John is colorblind, so he really
doesn't care what shirt shows up.
I'd be fine.
I am as well, by the way. One out of seven males, uh, is colorblind, so it
tends to be associated with higher intelligence, David. Sorry.
Exactly.
One out of 250 women, by the way.
So anyway, yeah.
So
I'm gonna be upset with my purple shirt, but I'm not gonna be upset if they take
out like my kidney instead of my liver.
Am I correct?
No, absolutely correct.
Maybe this is a, you know, a very extreme example.
But even scheduling an appointment in a time slot that doesn't exist
and you're getting into the clinic to find out that it's closed and the
physician that you wanted isn't there.
That's much more frustrating and opens up a lot of lawsuits
against the healthcare facility.
So the cost of mistake is probably one big factor
in why you don't see them a lot in production these days.
So, I mean, I get, you know, these voice bots, they don't get tired,
but also what happens is they sometimes try to be empathetic, and they'll
say, you know, "say this or whatever," and I'll say it,
and they'll say, "could you say it again?" or, you know, "please say this,"
and, "I'm sorry, I can't understand."
It's like, shut up.
And at the same time, I do find that, you know, ChatGPT in particular
can be very empathetic.
And in fact, the bar relative to, like, a healthcare administrator
or a doctor isn't that high.
So it can easily be more empathetic than them.
Does it, does it end up playing a role?
Can it play a role in personalizing care and actually improving engagement?
These possibilities?
Well,
David, David, maybe the way to think about it is, Israel, to frame it
a little bit more broadly: the large language models, which kind of drive
ChatGPT, maybe you could explain what they are and how they feed ChatGPT, to
sort of contextualize David's question.
Yeah, absolutely.
So large language models, uh, at least today, um, are
very much statistical tools.
So, um, they don't have deep reasoning as we as humans have.
So when you ask me a question, I actually think about what I want to answer
to you versus like a large language model will think about what's the
next word with the highest probability.
And all of a sudden, you know, because it was trained on so much data, we got
to this magical point in which the sentences make a lot of sense.
And then, to your point, we can align them to be sympathetic and we can
align them to not break certain rules.
Um, and you think, while reading these answers or hearing
these answers, that it all makes sense.
You know, there's a big hallucination issue with them today.
We talked about cost of mistake.
So the AI will sound smart and sympathetic and will schedule an
appointment for you, but with a physician that doesn't even exist.
And that's a problem.
So it doesn't know.
What's
the difference between David's hallucination and
ChatGPT's hallucination?
Like, how do you define it?
It's a good question.
I think that, um, I don't know, like, regarding David's hallucinations, but to
give you a good analogy with human beings: it's like a six-year-old. When you
ask a six-year-old to do something,
um, some of it would be based on ground truth.
And sometimes the six-year-old would be too shy to say, you know what?
I just don't know.
So they'll make up something.
And that's how you need to think about it with large language models.
They don't know that they lie.
Actually, if you continue the conversation and say, you know what?
This physician doesn't actually exist, or this appointment
is not in their schedule,
they will apologize and say, you know what?
You're actually correct.
So
Oh, that's very different than David.
David never apologized.
So how do you deal with it?
I mean, so these hallucinations are a problem, and you talk about
Hyro controlling hallucinations.
Do you, like, just put the thing in a headlock, or what does it mean
to control the hallucinations?
Yeah, absolutely.
So in healthcare, we talked about the cost of mistake.
And I think another thing that we didn't talk about is the fact that like, this
is a regulated industry, a very highly regulated industry, which is part
of why the cost of mistake is so big.
So now, uh, I'd say the new buzzword, especially in regulated industries
that want to deploy AI, is responsible AI.
Or, in our area, responsible conversational AI.
And I think that, you know, I am allergic to buzzwords.
So I like to ask, what does it mean for me?
And when I think about responsible conversational AI, I think about
three main pillars, which are
explainability, control, and compliance.
Um, so explainability: why did the AI reply with the answer that it replied?
And, um, again, without getting too geeky, you know, large language
models are large machine learning models, which are large black boxes.
Inputs, outputs, you don't really understand what's happening inside.
So how can we make it more explainable? Why did I recommend this physician?
Why did I give you, uh, this information about your headache,
and so on and so forth?
And that's partially to deal with the hallucination issues.
It's not like you can eliminate them entirely,
but you can definitely offer citations and paths to how the AI deduced the answer.
Control, to your question, is how do I balance the generation,
you know, gen AI, generative AI.
So how do I balance between the AI generating an answer and, you know
what, this is a sensitive subject,
I don't want you, dear AI, to generate an answer,
I want you to give me the same exact response each and every time.
Uh, the simplest example here is
when you are in an emergency.
So when we identify that you need, like, to get to the ER or call 9 1 1, we
don't want the AI to offer any type of treatment or diagnosis besides telling
you, it looks like it's an emergency situation, call 9 1 1 or get to the ER.
So Israel, is the right way to think about this that OpenAI, the
organization that's behind ChatGPT,
this is the way David and I are navigating the world, has been somewhat
allergic to actually limiting what's effectively a model,
a model that is learning even on the questions that you're asking, um,
while Anthropic, a different competitor, has worked to sort of organize and try to
make more explainable the way, the logic, the way the models are driving answers.
It's just two different approaches to a similar problem.
ChatGPT is more open-ended and has obviously got more users.
I think what I hear you saying is, in healthcare, rather than let the models
try to answer every question in every place, you're actually putting stops,
uh, controls, and effectively, um, guardrails: no, you can't answer
certain of these questions, because the range of answers is
too risky to risk the wrong answer.
Is that the right way to think about it?
Absolutely.
We actually talk about guardrails or safeguards in terms of, um, you know,
today there are a lot of AI companies.
Some of them are basically wrappers, let's call it, around OpenAI or Anthropic
or any other large language model. In healthcare,
you just cannot do that.
So the question is, what are your guardrails?
For example, we use a knowledge graph, again without getting too geeky.
What we do is we tap into the physician directory and scrape all
of the physicians' information and restructure it in a knowledge graph form.
And then when I have questions about a physician, I know that the large
language model isn't going to go to the World Wide Web and search for
the data; it's going to search through the knowledge graph.
So the knowledge graph will work in
tandem with the large language model, and that's how you create the guardrails.
Just to be clear, a knowledge graph would be a form of a database,
only a little bit more elaborate, with a few more pathways.
But it's a data structure that you vetted
and manage, so that therefore it's safer.
Is that the right way to think about it?
Absolutely.
And we reorganize it in a way that, um, you can think of visually.
So let's take a find-a-physician use case.
Um, so the main entity in this knowledge graph would be the physician, and some
of the attributes of this physician would be their specialty, the insurance
plans that they accept, the locations they accept patients at, and you can actually
visualize all of this information.
So when I say I'm looking for a cardiologist who speaks Spanish and accepts
Aetna on the Upper East Side, I'll see John as a physician that
is a cardiologist that accepts Aetna,
um, and is on the Upper East Side, and whatever else I said.
But that's how you guard the data,
so you won't make up physicians just to satisfy an answer.
There is a John Driscoll who I think is a pediatrician on the Upper East
Side, but I never could have gotten into medical school, unlike David.
Yeah, I'm sure I could have gotten in.
I don't know what I would have done once I got inside there though.
Luckily I have a brother who does that.
Now, speaking of John and not John Driscoll, there was another John that
you spoke with recently, John Brownstein.
And I saw you had a webinar on responsible AI, and John has
actually been a guest on our show.
And I don't know whether he's responsible or not, but he is a creative thinker.
What, um,
He's responsible.
We love John.
And please don't forget to re-listen to that podcast.
That was a good one, John.
He always smiles through whatever we say to him.
Kind of like you, Israel, in a sense, whatever we throw at him.
What was that webinar like?
Any takeaways, uh, from that on the responsible,
uh, AI side, from John or others?
Yeah, I think that John and Boston Children's are very
thoughtful and very advanced with everything that they do with, uh,
large language models, with the collaboration with OpenAI.
And I definitely don't think that you should take an example from John, because
they're very advanced in terms of, like, the resources that they can put in.
Um, and I think that the nice thing about the conversations with him, and obviously
that's not our first conversation, is, um, the depth in which they actually
started experimenting with large language models for a variety of things.
And I said don't take an example from him because, um,
it's really where we started.
It still is a hassle to deploy and maintain good enough chatbots
and voice assistants for various needs, especially patient-facing ones.
So, um, unless you have the time and the resources, both from, like,
a capital perspective and from a technical perspective, uh, to
actually get very much into the weeds,
you probably want some sort of a partner, uh, to help you navigate that.
And, uh, yeah, I think that Boston Children's is in a very unique
place, uh, to be able to both find the partners, but also do a lot
of experimentation by themselves.
So we're fortunate enough to actually live down the street from Boston
Children's and my kids have gone there.
And we've done some work there.
And one of the things that struck me is that what AI, but also other sorts of
technology, should be able to do is enable anybody, wherever they are, even if
they're not just down the street, to be able to tap into that kind of expertise
and to be able to project it further. You see people there from all over the world.
There's only a few that can come.
It's very expensive.
You have to wait, et cetera.
But how can we tap into all of that knowledge?
Not just what's randomly generated, but actually what they've generated,
and bring it forth.
Hasn't happened so much to date.
Does AI, that wasn't necessarily the conversation with John,
but does AI enable that?
Yeah.
To a greater extent, or are we, or is that a different direction?
Yeah, I think that, uh, as AI matures, it's going to be, um, you know, the
most professional, skilled knowledge worker that we've had.
Um, that means that it can query various large databases and return
coherent answers and really create efficiencies, um, and more access to care,
access to knowledge, generally speaking.
So in healthcare, which is a very complicated
area for a lot of us patients, you know, to grasp, both from, you know, the clinical
side, but also in terms of, like, how it works, you know, the payers versus the
providers versus the pharma companies,
it is, um, still very problematic to navigate.
But
Israel, I do think that David's asking a slightly different question,
and I've actually seen some evidence, David, that there is something there.
I mean, there's a, uh, an MD who's a chief data officer at a hospital who's
got a very rare form of cancer, and he's actually using the hallucinations,
those sort of probabilistic jumps in the model,
to help test whether new forms of, and new combinations of, cancer treatments
might help accelerate his healing.
And because he's a PhD in data science as well as an MD, he
understands that he can, to Israel's point, control it.
But I do think that in addition to learning faster,
which I think was your question, we also may be able to turn some of
these, what now is a hallucination,
and potentially control them, as Israel is suggesting, into insights.
And maybe we'll try to get that doctor on our, on our podcast.
But I, I think it's a really, um, it's a subtle question.
And I think Israel, I mean, you've got to speak to this, but I think
we're still figuring out the models, but there there's a lot of runway.
No, absolutely.
I think that, um, if we're looking at this from this perspective, you know,
um, when people thought about what are the first use cases that AI is going to
solve, I can tell you that the last use case that people thought
AI was going to solve is creativity.
And all of a sudden, most of us, in, you know, our personal use, use
ChatGPT for creative thinking, right?
Like, it helps us brainstorm.
It helps us come up with questions.
It helps us come up with workshops that we want to have with our leadership
teams, and so on and so forth.
So to your point, I definitely think that it's not necessarily, like, hallucinations.
It's like mutation in genetic algorithms, um, that's part of
how we as humans evolved.
And it looks like there is a subfield of algorithms, called genetic algorithms,
that uses this mechanism to actually come up with novel solutions.
So where can it go?
I don't know yet, and I don't want to guess because it seems like our
guesses in the past were very wrong, but it can definitely help us achieve.
I'd say like breakthroughs that we weren't even thinking about.
So last question.
We've been fairly positive here about AI as a good force.
And on the other hand, we've seen that some nurses in particular have
been raising the alarm bell over the use of AI and introduction of AI.
We actually did a popular show on it.
I think it's one of our highest rated episodes about that.
What is your take Israel on why at least some nurses may be kind of raising
the alarm and even taking labor action when they see AI being introduced?
Yeah, so it's not only nurses.
You heard about the nurses, but, uh, so, we sell voice AI assistants, right?
What do you think the call center manager thinks about
that, or the call center agents?
And I think that, uh, generally speaking, every time that, um, a new
technology wave, a big technology wave, like a platform shift, is happening,
there's a question of: is it going to eliminate, uh, a lot of jobs, and what
is it going to do to these populations?
And I think that what we've discovered in the past is that while it's eliminating
some jobs, it creates a lot of new ones.
So, um, I could end it in an optimistic manner, but because you requested a
pessimistic one, I will also share that, um, the pace, the change of pace, um,
that we're experiencing with AI is unlike anything that we've seen in the past.
Um, so the concerning factor is, um, are we as humans going to be fast enough
to adapt to the change that AI brings?
Given, like, the technological breakthroughs that are happening really every week, um, and
in the past we had enough time, like with the, again, internet, cloud, mobile as
platform shifts, we were able to adapt because it took, like, several decades.
Now it's going to take
less than half a decade to see a lot of the implications of AI.
So that's the question.
Like, will the human race be fast enough to adapt to this new pace of, uh,
technological shift in the era of AI?
Excellent question to end on.
And I was going to ask another one about whether podcasts can be replaced with AI,
but we'll save that for another episode.
For podcasters.
Exactly.
We can't, we can't do that.
Well, that's it for yet another episode of CareTalk.
We've been talking today with Israel Krush.
He's co-founder of Hyro, revolutionizing conversational AI.
I'm David Williams, president of Health Business Group.
And I'm John Driscoll, senior advisor at Walgreens.
If you like what you heard or you didn't, we'd love you to
subscribe on your favorite service.
And thank you Israel for joining us.