AWS re:Invent 2023 - Principal Financial enhances CX using call analytics and generative AI (AIM223)
Summary
TLDR This session, hosted by Aartika Sardana Chandras of Amazon Bedrock, discusses how to elevate customer experience in the new era of generative AI. Chris Lott of Amazon Transcribe and Miguel Sanchez of Principal Financial Group join to explain how AI and generative AI can be applied in contact centers, the benefits existing customers have seen, and the latest generative AI innovations. They show how AI-powered chatbots, real-time analytics tools, and conversation analytics can improve customer service.
Takeaways
- 🤖 Discusses how to improve customer experience in the new era of AI and generative AI.
- 📊 Use data such as customer conversations to derive actionable insights and boost the business.
- 🗣️ Customers want shorter wait times and self-service solutions.
- 💼 Agents struggle to stay focused on phone support, and managers cannot analyze every call.
- 🤖 Conversational IVRs and chatbots let customers find answers whenever they need them.
- 📞 Real-time call analytics and agent-assist solutions help agents respond quickly while a call is still in progress.
- 🔍 Call analytics reveals overall call sentiment and agent performance.
- 🏦 WaFdBank used AWS's conversational AI platform to dramatically cut the time customers spend on simple calls.
- 📈 Magellan Health reduced agent training time with real-time call analytics and agent assist.
- 📊 State Auto Insurance cut operational costs significantly with call analytics, and TSB Bank scaled call analysis to 100%.
- 🌐 AWS contact center solutions are horizontal and apply to any industry.
Q & A
What role does Aartika Sardana Chandras hold?
-Aartika Sardana Chandras is a senior product marketing manager for Amazon Bedrock.
What is the purpose of this session?
-To discuss how, in the new era of generative AI, customer experience can be impacted and elevated in the contact center space.
How is data from customer conversations useful?
-Data from customer conversations is used to derive actionable insights, improve performance, and boost the business.
How did WaFdBank improve its customer experience?
-Using the self-service conversational AI platform provided by AWS, WaFdBank reduced the time customers spent on simple calls (such as balance inquiries) by 90%.
How did Magellan Health use real-time call analytics and agent assist?
-Magellan Health cut agent training time by three to five days and saved 9 to 15 seconds per call.
How much did State Auto Insurance save with post-call analytics?
-State Auto Insurance saved about $800,000 in operational expenses using post-call analytics.
How does TSB Bank use post-call analytics?
-TSB Bank analyzed 5 million calls in a year, scaling from analyzing about 10-12% of calls to 100%, which helped it identify why customers were calling and improve the customer experience.
What new Amazon Transcribe features were announced?
-A new multi-billion-parameter speech foundation model supporting over 100 locales, plus call summarization offered as part of the Transcribe Call Analytics API.
How does Principal Financial Group use post-call analytics?
-Principal Financial Group has processed more than 1 million calls over more than a year, across multiple use cases, and continues to customize and improve the PCA framework with topic hierarchy definitions, customer intent identification, and reporting enhancements.
What is on Principal Financial Group's roadmap?
-The roadmap includes expanding post-call analytics, adding email interactions, connecting channels through a graph database (Amazon Neptune), refining the topic hierarchy definition, and deploying virtual assistants with AWS Lex.
What resources are offered at the end of the session?
-How to start a discovery workshop or proof of concept, information on Amazon Connect solutions, and how to contact the AWS ProServe team, CCI partners, consulting partners, and ISVs, along with other AIML sessions, workshops, chalk talks, and breakout sessions.
Outlines
📣 Opening and purpose of the event
Aartika Sardana Chandras, product marketing manager for Amazon Bedrock, opens the event and explains its purpose and agenda: how to improve customer experience in the new era of generative AI, with a particular focus on applying AIML and generative AI in the contact center space.
🤖 Improving customer experience with chatbots
Aartika explains the importance of the self-service solutions customers want, such as chatbots and conversational assistants, how AIML addresses contact center challenges, and the benefits it brings. She also notes how Principal Financial Group built a post-call analytics solution on AWS services to derive insights from customer conversations.
📊 Customer experience improvements and results
Chris Lott presents examples of companies such as WaFdBank, Magellan Health, State Auto Insurance, and TSB Bank that used AWS services to improve customer experience and cut costs, notably WaFdBank's large reduction in time spent on simple calls and Magellan Health's shorter agent training time.
🛠️ Post-call analytics architecture and usage
Chris explains the post-call analytics architecture and how to use it: uploaded audio files trigger an AWS Lambda function, which starts a Step Functions workflow that generates insights. He also covers using Amazon Bedrock to identify topics and action items, and how a data lake of the results is built.
🌟 New AWS features and using PCA
Chris announces new Amazon Transcribe capabilities and how they fit into PCA. A new multi-billion-parameter speech foundation model improves transcription accuracy and strengthens support for different accents and acoustic environments. He also describes the new call summarization feature offered as part of the Transcribe Call Analytics API.
Principal Financial Group's experience with PCA
Miguel Sanchez describes how Principal Financial Group partnered with AWS to build on the PCA framework and improve customer experience. Since adopting PCA, more than 1 million calls have been processed in just over a year, with the goal of providing multi-channel customer engagement. He also presents plans to use PCA data to deploy new chatbots and refine the topic hierarchy definition.
🗓️ The 2024 roadmap and the future of PCA
Miguel outlines Principal Financial Group's 2024 roadmap and the future of PCA. Building on PCA's success, plans include integrating email interactions, connecting channels through a graph database (Amazon Neptune), refining the topic hierarchy definition, covering new business domains, and deploying virtual assistants. He also describes a strategy to improve customer service using AWS Bedrock and Kendra on top of PCA data.
📝 Next steps and resources
Aartika closes the event by pointing attendees to discovery workshops, how to start a proof of concept, engagement with AWS experts and partners, and details on Amazon Connect solutions, and invites them to other sessions and workshops. She also shares a feedback survey and opens the floor for questions.
Keywords
💡Generative AI
💡Contact Center
💡Customer Experience
💡AIML
💡Real-time Call Analytics
💡Post-call Analytics
💡Amazon Transcribe
💡Amazon Comprehend
💡Amazon Bedrock
💡Amazon Connect
💡AWS CCI Solutions
Highlights
Aartika Sardana Chandras introduces the session on Generative AI's impact on customer experience in contact centers.
The focus is on using customer conversation data to derive actionable insights for business improvement.
Chris Lott and Miguel Sanchez join to discuss solutions for contact center challenges.
Customers prefer self-service solutions like chatbots, with over 80% seeking such options.
Contact center agents are often overburdened, with 30% of their time spent on administrative tasks.
Managers struggle to analyze all call data, with many unable to do so effectively.
Three solutions are presented: self-service virtual agents, real-time call analytics, and conversation analytics.
WaFdBank saw a 90% reduction in time for simple calls using AWS conversational AI platform.
Magellan Health reduced agent training time and saved significant hours per year with real-time call analytics.
State Auto Insurance saved $800,000 in operational expenses by analyzing all calls with post-call analytics.
TSB bank analyzed 5 million calls to identify call intents, improving customer experience by routing calls to the right agents.
AWS contact center solutions are industry-agnostic and can be applied across various sectors.
Amazon Connect and AWS CCI solutions are introduced as flexible options for call analytics and Generative AI.
Amazon Transcribe and Amazon Bedrock are key components in the AWS language AI services.
Post-call analytics uses a data lake approach to store and access insights from customer interactions.
Live Call Analytics, a sister solution to post-call analytics, provides real-time analysis during calls.
Amazon Transcribe launches a new multi-billion parameter speech model supporting over 100 locales.
Miguel Sanchez shares Principal Financial Group's journey with AWS Post Call Analytics and Generative AI.
Principal Financial Group aims to migrate all applications and data points to the cloud by 2026, with AWS as a strategic partner.
PCA has been successfully deployed at Principal Financial Group, processing over 1 million calls.
Principal Financial Group is working on integrating email interactions into the PCA framework for a holistic customer engagement view.
The company is also planning to deploy AWS Lex for virtual assistance, using PCA data to create conversational intents.
Transcripts
- Good afternoon everyone.
Thank you so much for joining us today.
My name is Aartika Sardana Chandras
and I'm a senior product marketing manager
for Amazon Bedrock.
So why are we here today?
In this new era of Generative AI,
we thought we should discuss
how we can impact customer experience,
elevate customer experience use
in the contact center space,
using AIML as well as Generative AI.
We are gonna focus on how you can use all the data
that you've collected through your customer conversations
or other customer contacts,
and use it to derive actionable insights
to improve performance and boost your business.
Joining me today are my colleague Chris Lott
and our customer, Miguel Sanchez.
- Hey everybody, my name is Chris,
I'm a senior solutions architect for Amazon Transcribe.
- Good afternoon, I'm Miguel Sanchez.
I am an analytics director
and regional Chief data officer
at Principal Financial Group.
- Awesome.
So we do have a fully packed agenda
for the session today.
We are gonna start with the key contact center challenges,
the personas and what their day-to-day life looks like,
how you can use AIML
to form certain contact center solutions
to alleviate those challenges,
the benefits seen by some of our customers,
and then move on to some
of the latest innovations of Generative AI
and how Principal Financial Group
has used these solutions using AWS services
to form their post call analytic solutions
to derive insights from all their customer conversations.
So first of all, I wanna, you know,
start with three different personas
like I mentioned.
The first and the most important persona is our customers.
With a show of hands, how many of you
like to spend 10 minutes on a call?
Press one for this, press two for this,
press three for this.
No one, right?
Our customers don't like it either.
More than 80% of customers today
want self-service solutions.
They want a chatbot or a conversational assistant
to be able to solve their challenges.
Second, our agents.
They're the face of the company's
customer service department.
In the last couple of years, you all will agree
that the contact centers have been overburdened with calls.
With days like today, like a Cyber Monday
where people are sitting home and ordering things,
they want to call those contact centers
and solve all their problems.
And 30% of the times,
instead of being able to focus on calls,
these agents are spending time in admin jobs.
That's what we've heard from our customers,
so we need to solve that problem.
And lastly, managers and supervisors.
You are collecting data day in and day out,
but are you able to analyze those calls?
Our customers tell us, not all of them.
So we want to empower customers
to be able to analyze 100% of their calls,
which is why we have the three solutions
mapped to those three challenges.
The first one, as I was saying,
the self-service virtual agents, your conversational IVRs,
your chat bots, they're boosted using Generative AI
and attached to the same knowledge bases
that are used by company agents
so that customers can find answers when they want
at a time that's convenient to them.
The second solution for the challenges related to agents
happens while a conversation is still going on.
The real time call analytics and agent assist solution.
So this is the holy grail of understanding, you know,
what the customer wants,
when they are, you know, really troubled.
So you are able to pick up insights
like an ongoing call sentiment.
The agents are empowered to find answers faster
because they have prompts coming to them,
giving them responses off that ongoing conversation.
So they are more focused, the answers are given faster,
the call resolution time goes down
and of course the customers are happy.
Last but not the least,
and something we'll focus a lot on in today's presentation
is conversation analytics.
Like I was saying, you have millions of calls in a year,
but are you able to analyze them?
We have a post call analytics solution
that lets you do just that
so that you can derive insights like overall call sentiment,
how are your agents performing?
What are the upcoming business trends?
What are the top things your customers are complaining about
or maybe what are the top things they're happy about?
So that you can double down on those things
and boost your business performance.
So let's look at what some of our customers
have already seen in terms of the benefits.
Starting with WaFdBank,
which used the self-service conversational AI platform
provided by AWS.
They saw a 90% reduction in time
that customers spent on a simple call,
like a balance inquiry.
It went down from four and a half minutes to 28 seconds.
That's like huge.
And 30% of their calls
are now contained using these self-service solutions.
Moving on to Magellan Health,
which is using the real time call analytics
and agent assist solution.
They brought down the agent training time
by three to five days
and though it, you know, sounds like a small number,
they started saving 9 to 15 seconds per call.
But over 2.2 million calls per year,
they saved about 4,400 hours.
You can do the math by multiplying the agent salary
and other operational costs.
Then for our post-call analytics solution,
we have two customers, State Auto Insurance
that saved about $800,000 in operational expenses
because of being able to analyze all of their calls.
And TSB bank, they were able to analyze
5 million calls in a year.
They were analyzing about only 10 to 12%
and they moved to 100% call analysis
which helped them identify over 800 call intents,
the reason why their customers were calling,
which helped them improve customer experience
because they were able to transfer their calls
to the right agent that mapped to the intent of the call.
Now this is my favorite slide
because it shows the sweat and blood
that the team has put in
in making our customers happy
and trust us with contact center solutions.
AWS contact center solutions are horizontal.
No matter what industry you belong to,
I'm sure we can help you solve your challenges
by introducing AI and Generative AI
into your contact centers.
And with that, I'll hand over to my colleague Chris.
- Thanks Aartika.
So to get started quickly with call analytics
and Generative AI and AWS,
we have two flexible options.
The first one is Amazon Connect.
It's our contact center solution
that allows customers of any size
to get started with a contact center
and provide superior customer experience.
For those that are unable to move to Amazon Connect,
for example, if you have a custom solution
that you've already built
or you're locked into a contact center vendor,
we have what's called the AWS CCI solutions.
These are example APIs and code
that allow you to get started on AWS
no matter the contact center platform.
Now, regardless of which way you go,
they're both powered by AWS language AI services
such as Amazon Transcribe
to go from speech to text,
and we use Generative AI such as Amazon Bedrock.
Now the AWS CCI solutions cover the three use cases
that Aartika mentioned earlier, self-service chatbots,
real-time agent assist and conversational analytics.
The CCI solutions support many different contact centers
such as 8x8, Cisco, Avaya and others.
And they do this by using industry standard file formats
and protocols such as WAV files, MP3s and (inaudible).
Today we're gonna be focusing on a solution
which is called post-call analytics,
and Miguel's gonna dive into the details
of how Principal Financial Group
is using post-call analytics.
But all these solutions are open source,
which means that you have access to all the code
and can get started quickly in building your solutions.
At the end of the session we're gonna show all the resources
that are available to you,
but one that's pretty easy to remember
that I put up there is amazon.com/post-call-analytics
When building post-call analytics,
we worked backwards from customer challenges
that we heard from customers with contact centers.
For example, lack of insights into why customers are calling
or challenges in being able to evaluate
how their agents are performing.
So we used AWS language AI services
such as Amazon Transcribe to generate call summaries.
We use Amazon Comprehend to do call analytics
and generate conversational insights.
And we do call summarization
and other Generative AI tasks with Amazon Bedrock.
And the result is being able to discover key business trends
and insights into identifying root causes
while your customers are calling
and improving agent productivity.
So now let's dive into the architecture
and see how post-call analytics works.
So first, like I mentioned,
it starts with standard audio formats
that are uploaded to AWS in an S3 bucket.
From here a lambda function is triggered
that starts a step function workflow
that will merge all those language AI services together
and use them to generate those insights.
Additionally, we use Amazon Bedrock
to go deeper into doing things like identifying topics
and identifying action items
that your agents have to perform at the end of the call.
We take all of this data,
the transcription, the insights,
and we store them in DynamoDB and Amazon S3.
Now it's important to note
that what we're doing here is building a data lake
of all of those insights.
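The trigger Chris describes (audio lands in S3, a Lambda function starts the Step Functions workflow) can be sketched roughly as below. This is a minimal illustration, not the solution's actual code: the environment variable name and the execution-input fields are assumptions, and the boto3 call is shown commented out so the sketch stands alone.

```python
# json and os are only needed by the commented-out boto3 call below.
import json
import os

# boto3 is available in the Lambda runtime; commented out here so the
# sketch runs without AWS credentials.
# import boto3
# sfn = boto3.client("stepfunctions")

def build_execution_input(event):
    """Pull the uploaded audio object out of a standard S3 event notification."""
    record = event["Records"][0]
    return {
        "bucket": record["s3"]["bucket"]["name"],
        "key": record["s3"]["object"]["key"],
    }

def handler(event, context):
    payload = build_execution_input(event)
    # In a deployed pipeline this would kick off the analytics workflow:
    # sfn.start_execution(
    #     stateMachineArn=os.environ["PCA_STATE_MACHINE_ARN"],  # hypothetical name
    #     input=json.dumps(payload),
    # )
    return payload
```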
Now we provide two different ways to access those insights.
The first one is post-call analytics contains
a react based user interface that's hosted in S3
and CloudFront that allow you to get started building
your own user interfaces on top of PCA.
And again, all of that source code is available
for you open source on GitHub.
Additionally, like I mentioned,
because we have this data lake in S3,
we can write SQL queries using Amazon Athena
and build aggregated insights.
And with that we can also build dashboards
with Amazon QuickSight.
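As a sketch of the kind of aggregated insight this Athena layer enables, the snippet below submits a per-agent sentiment rollup over the data lake. The table and column names (`pca_results`, `agent`, `sentiment_score`) are hypothetical, not the solution's actual schema; the `start_query_execution` shape follows the Athena API.

```python
# Hypothetical schema: a table "pca_results" with one row per analyzed call.
AGENT_SENTIMENT_SQL = """
SELECT agent,
       AVG(sentiment_score) AS avg_sentiment,
       COUNT(*)             AS call_count
FROM pca_results
GROUP BY agent
ORDER BY avg_sentiment ASC
"""

def start_agent_sentiment_query(athena_client, database, output_s3):
    """Submit the aggregation to Athena; results land in the given S3 prefix."""
    return athena_client.start_query_execution(
        QueryString=AGENT_SENTIMENT_SQL,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
```

A dashboard tool such as QuickSight can then visualize the query results from that S3 location.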
Now finally I should call out
that post-call analytics actually has a sister solution
called Live Call Analytics
and that's powered by Amazon Chime SDK Voice connector.
With this, we can analyze the calls
in real time as they're happening
and the benefit of that of course,
like Aartika mentioned is we can provide agent assist
so for example, suggested responses
or completing tasks in real time.
So now I'm proud to announce a few new features
of Amazon Transcribe.
We've been seeing that
in order to effectively leverage Generative AI,
customers are increasingly looking to increase accuracy
and language support.
So today I'm excited to announce a launch
of a new multi-billion parameter speech foundation model
that powers Amazon Transcribe
that supports over 100 locales.
This multi-billion parameter model is trained
using a best-in-class self-supervision approach
and it learns the inherent patterns of universal speech
and accents across millions of hours
of unlabeled audio data.
The speech foundation model provides a 30% relative accuracy
improvement across all the locales,
and it also enhances the readability
with more accurate punctuation and capitalization.
The model provides expanded support for different accents,
noisy environments and other acoustic conditions,
and it supports the many features
that we love about Amazon Transcribe,
for example, automatic language identification
and speaker diarization.
The second announcement today, which is available as preview
is call summarization as part
of the Transcribe Call Analytics API.
So now with one single API call,
you can transcribe the call,
generate insights such as issues, action items
and outcomes and sentiment,
and get a call summary again all with one single API call.
It optionally allows for redaction,
not of just the transcript but also the summary.
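That "one single API call" can be sketched with the Transcribe Call Analytics API roughly as follows. Treat the parameter shapes as a best-effort sketch against the boto3 API at launch time (the `Summarization` setting was announced in preview here), and the job name, URIs, and role ARN as placeholders.

```python
def start_summarized_tca_job(transcribe_client, job_name, media_uri, role_arn):
    """One call: transcript + analytics insights + abstractive summary + redaction."""
    return transcribe_client.start_call_analytics_job(
        CallAnalyticsJobName=job_name,
        Media={"MediaFileUri": media_uri},
        DataAccessRoleArn=role_arn,
        # Stereo recording: one channel per participant.
        ChannelDefinitions=[
            {"ChannelId": 0, "ParticipantRole": "AGENT"},
            {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
        ],
        Settings={
            # Preview feature announced in the talk: the job also returns
            # a generative call summary.
            "Summarization": {"GenerateAbstractiveSummary": True},
            # Redact PII; per the talk, the summary is redacted as well.
            "ContentRedaction": {
                "RedactionType": "PII",
                "RedactionOutput": "redacted",
            },
        },
    )
```

In a real deployment `transcribe_client` would be `boto3.client("transcribe")`; a stub works for inspecting the request.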
And with that I want to turn it over to Miguel Sanchez,
chief regional data officer of Principal financial Group
and he's gonna talk about how Principal
takes advantage of post-call analytics
and Generative AI on AWS.
- Thank you, Chris.
I'm so happy and glad to be here sharing our journey.
I'd like to take the first 30 seconds to honor someone.
My cousin, Daniel Orozco Sanchez, who used to work for AWS
and who passed away about a year ago.
We both had a dream to be presenting here at re:Invent.
So here we are, this is for Daniel.
(audience applauds)
I'm gonna be walking you through providing some context
on who we are, why we are working with AWS,
and specifically dealing
and working with the AWS Post Call Analytics framework.
Also, I'll be sharing the approach
that we still are having for deployment purposes,
sharing the roadmap that we are gonna be facing for 2024,
and also I'm gonna be sharing
some demos on the PCA console
with the latest features that Chris
was referring to: summarization.
And also I'm gonna be sharing another
really important topic for us.
It is the topic hierarchy definition
that we assembled together with our business stakeholders.
So let's go to who we are.
So basically Principal Financial Group.
It's an established financial services firm
with more than 140 years in the market.
We are a global investment management leader
and serve more than 62 million customers around the world.
Right now we are accountable for managing
around $635 billion in assets under management
and related with engagement centers,
it's worth to mention that we are
processing around 30,000 customer calls on a daily basis,
supported on more than 1,500 engagement centers.
The average call time, it's eight minutes.
The average speed to answer is 51 seconds
with callers waiting less than a minute
to talk to an available agent.
We are facing a real challenge
and we had to look for alternatives to improve
not only our engagement center operation,
but also to improve the customer experience.
Why AWS and PCA for Principal?
There is a strategic definition behind the scenes.
We set an aggressive goal
to migrate all the applications
and data points to the cloud by the end of 2026.
AWS is our strategic partner for that journey.
But in addition to that,
I am proudly leading a language AI team.
We had the chance to benchmark each one of the components
that are embedded within the PCA framework.
So we ran a benchmark, comparing
with other solutions offered in the industry.
We also validated that AWS is following
the enterprise architecture definitions.
And we found very high accuracy on one component
that Chris was referring to previously.
TCA, Transcribe Call Analytics.
To be honest with you, this is a unique component
that we didn't find in any other offering.
Basically it's the combination of transcription
with data mining,
and now it's getting infused by Gen AI.
This is a unique component that was, you know,
part of the rationale that we used to decide
to be working with PCA.
And the last, but not the least one,
it's the access to subject matter experts
and product owners.
We are so pleased to have the support
from people like Chris,
and Aartika to be working with us,
even debugging code and deploying the platform.
We have created a great partnership with AWS
for this specific journey.
Where we are at right now.
We started with PCA about a year and a half ago.
Once we selected the platform,
we established a nice partnership with AWS
supported on two specific programs.
The first one our architect resident program
and the second one, something called a data lab.
So basically my language AI team
partnered with AWS supported on these programs
and we were able to refine
and personalize the PCA framework.
We were able to deploy it,
and then I'm very proud to say
that today we have been able to process
more than 1 million calls.
PCA has proven to be successful
and we are using it in multiple use cases
while actively improving scaling
and evaluating with product managers,
customer experience consultants and servicing leaders.
There is another important topic
that I would like to refer to.
It's an open source framework.
We've got a lot of flexibility
to incorporate additional components
and additional channels.
Right now we are bringing the customer email interaction
as a part of the PCA framework.
And of course with all these announcements now,
we are relying on Bedrock for multiple purposes.
I'm gonna be providing more details about it.
Now I'm gonna be pointing to the requirement
that we receive.
Basically this is the business requirement,
this is the challenge.
I already mentioned that we are dealing with a lot
of customer voice interactions
and we were looking to enable conversational analytics,
but this is related with the voice of customer program.
Basically we were told you need to find something out there,
an IT platform that it's gonna be basically dealing with
unstructured and unsolicited data
following some specific business rules.
Those rules are: Listen,
basically to provide the ability to capture data
from multiple data sources.
Interpret: synthesize data for actionable insights.
Act: implement enhancements to improve outcomes.
Monitor: quantify the performance
of customer experience efforts.
And last, Govern: align, commit and prioritize.
So those are the principles that were defined
by our business stakeholders.
The voice of customer program.
With that definition and that business requirement,
basically we define the approach.
How we are gonna be moving with PCA.
Once we selected the platform
and created this partnership with AWS,
we define three main phases.
The first phase was for technical deployment
and in there basically we were dealing with specific MVPs
and activities related with transcription.
I'm gonna be providing more details,
but we are dealing with Genesys Cloud
as our engagement center platform.
So we are pulling data from our Genesys Cloud platform
and we are going through PCA.
So the first step is to go through AWS Transcribe
and get transcripts, high quality transcripts.
We were able to provide sentiment analysis,
topic and intent identification,
PII redaction and obfuscation.
This is where the beauty of TCA is playing
a key role in here.
We are a highly regulated industry
and we cannot expose our data to everyone.
So PII, it's a big, big deal for us.
So PII was also considered
for phase one and reporting.
It is also worth to mention
that we are relying on AWS QuickSight
and one particular feature called Q,
which is the NLP feature.
For phase two, we define the topic hierarchy definition.
This is (inaudible)
this is something that we created internally,
this is something that we refine
with our business stakeholders.
Basically we are relying on PCA data points
and supported by Bedrock,
we were able to create our own taxonomy,
topic taxonomy definition.
So this is something that we released
and this is a video that I'm gonna be sharing with you.
The second one was customer intent.
The customer intent is gonna be playing a key role
not only for voice of customer
but for the customer experience.
Because with the customer intent,
we are gonna be able to determine the why.
Why is the customer calling us?
And that why basically it's gonna be a foundational piece
for another initiative that was triggered by PCA,
which is virtual assistance, Lex,
we are gonna be deploying AWS Lex.
And the last two basically it's related
with reporting enhancements and additions
because we realized that considering
that we've got the voice interactions,
now let's bring additional channels into the equation.
The last one is related with virtual assistance.
I was already mentioning about this.
We are looking to deploy AWS Lex,
and the PCA data is being used to create
multiple conversational purposes.
There is another functionality that we've got in there
and that's relying on the topic hierarchy definition.
Basically it's the emerging theme detection.
With that feature, we are able to detect
if there is something getting important
or perhaps something that it is creating friction
within the customer experience.
And there is another component here I would like to point
because it is related with Gen AI, it is model retraining.
I'm gonna be more specific on model retraining
because it is not related with Bedrock.
This is related with another feature
provided by AWS called SageMaker Jumpstart.
So for some specific and internal definitions,
we are working with the small pre-trained models
and that's where we are pointing to model retraining.
So that's basically the approach
that we are following for the PCA deployment.
I already mentioned that once we discovered
the power that we had with PCA,
our analyst and leaders were pointing to create
a holistic view on customer interactions.
So the first definition was
let's bring customer email interaction
and we are gonna be bringing more and more channels.
Right now we are working trying to glue the email
and voice interaction together,
and we are relying on a graph database approach,
working and dealing with another AWS component
called Neptune.
So we are gluing all those interactions
and now looking to bring customer surveys,
social media interaction and digital interaction.
Our goal as a company basically
is provide a comprehensive perspective
on multi-channel customer engagement.
How the PCA framework was implemented.
I already mentioned that we are integrated
with Genesys Cloud.
We are ingesting data on a daily basis
for some specific use.
It is also worth to mention
that we were not following a big bang approach.
Initially, we were dealing
with some specific business domains.
This is also important.
This is not an IT only related initiative.
This is a business initiative.
So the first business domain that we were working
and dealing with was money out,
and then we were moving on to money in.
So we were bringing data
and we're still bringing data on a daily basis
from Genesys cloud.
The data, the raw data is being saved
on an Amazon S3 bucket and, relying on Amazon Transcribe,
basically we are creating metadata
and some basic KPIs related with the calls.
All that information is being exposed in a JSON format
and it is being consumed using QuickSight.
From there, the workflow will be pointing to use Comprehend
and Comprehend basically it's gonna be related with TCA.
With Comprehend and Transcribe,
we are able to identify topics, intents,
issues, takeaways, sentiment analysis.
And now we've got Bedrock.
So with Bedrock, which is the next step within our workflow,
we are able to get call summaries
and also we can have Gen AI queries,
which is another cool feature
that you will be seeing on the video.
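The Bedrock step Miguel describes (call summaries and Gen AI queries over a transcript) might look roughly like this with Claude Instant, the model family mentioned later in the talk. The prompt wording, token limit, and helper names are illustrative assumptions; the request body follows the Bedrock text-completions format for that 2023-era model ID.

```python
import json

def build_summary_prompt(transcript):
    """Claude-style prompt asking for a short call summary plus topics."""
    return (
        "\n\nHuman: Here is a contact center call transcript:\n"
        f"{transcript}\n"
        "Summarize the call in two sentences and list the main topics."
        "\n\nAssistant:"
    )

def summarize_call(bedrock_runtime, transcript):
    """Invoke Claude Instant on Bedrock (model ID as of the 2023 launch)."""
    body = json.dumps({
        "prompt": build_summary_prompt(transcript),
        "max_tokens_to_sample": 300,
    })
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-instant-v1",
        body=body,
    )
    return json.loads(response["body"].read())["completion"]
```

In production `bedrock_runtime` would be `boto3.client("bedrock-runtime")`; the same pattern supports ad hoc Gen AI queries by swapping the question in the prompt.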
There is another component in here, Translate.
PCA was deployed for our US market,
but it was also deployed in Mexico.
So at some point we're foreseeing the need
to be sharing some of that information that we were getting.
Those topics, that topic hierarchy definition
that we created here perhaps can be extrapolated
for another member company.
The last component, Kendra,
and I would like to highlight Kendra
because Kendra is playing a key role on another initiative
that we are considering within our roadmap.
Kendra is the elastic search component
that will allow us to be looking for some specific keyword
that was mentioned within that customer interaction.
And we can go and search for that specific keyword,
and Kendra will be offering a rank
of all the multiple options
where that specific keyword was found.
We can go ahead, click on it,
and we can even listen to the conversation.
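The keyword lookup Miguel describes can be sketched with the Kendra Query API as below. This is a minimal sketch: the index ID is a placeholder, and only the title and excerpt of each ranked result are pulled out; the real console would link back to the call and its audio.

```python
def search_calls(kendra_client, index_id, keyword):
    """Return (title, excerpt) pairs ranked by Kendra for a keyword."""
    response = kendra_client.query(IndexId=index_id, QueryText=keyword)
    return [
        (item["DocumentTitle"]["Text"], item["DocumentExcerpt"]["Text"])
        for item in response["ResultItems"]
    ]
```

In production `kendra_client` would be `boto3.client("kendra")` pointed at the index built over the PCA transcripts.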
It is not only listening
because PII is there, it's part of TCA,
so the conversation is redacted.
So if within the conversation
a social security number is mentioned,
you will see, you will hear,
"My social security number is beep, beep, beep, beep."
Because it's redacted and obfuscated.
The information is being consumed,
I already mentioned about QuickSight
and also through the PCA console
that it is gonna be part of the demo.
So this is the new PCA console.
We created this video relying on real data
so you will be seeing real data.
Of course it's been redacted for this presentation.
Gonna make sure that it is running.
Okay, it's running.
So the PCA console provides details, call details.
I dunno if it's running or not.
Yeah, it's running.
Including call metadata, queue name,
agent name, call duration, agent and sentiment trends.
It also provides transcribed details
and a speaker time for all the stakeholders
involving the voice interaction.
There is a new functionality that goes through the call
to check tone, loudness and sentiment,
which is very useful to determine
how effective the interaction was.
And now in there you will see that new cool functionality.
PCA now hosts a live Gen AI query on the call details page,
enabling users to ask questions in real time,
such as how could the agent have done better?
Did the agents show empathy?
It also provides visual summarization and task identification.
This is something that we released
no more than couple months ago,
and it's creating a lot of impact
and good, good feedback from our business stakeholders.
The next video, it's gonna be related
with the topic hierarchy definition.
These functionalities aim to help our business stakeholders
to detect trend topics, drill down,
and get detailed on specifics, allowing proactive actions
to improve the customer experience.
This is extremely important.
As I mentioned before,
this is something that we created in house.
We are relying on PCA data points
and also on Bedrock,
specifically on Claude Instant.
Yeah, it's running.
So the report is built on Amazon QuickSight,
providing NLP functionality supported by QuickSight Q.
The report allows filtering by specific date ranges
and engagement center queues,
providing specific KPIs like number of calls,
average talk time, and call duration.
We have defined three levels within the hierarchy
and provide a summary for each one of them.
This has been a very detailed and refined initiative,
partnering with our business stakeholders
to include business relevant topics,
clustering the outcomes provided by Bedrock.
Finally, we created a timeline analysis
considering the number of calls per day
pointing to a specific topic.
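The timeline view described here, calls per day pointing to a topic, can be sketched with a simple aggregation. The field names are illustrative, not the actual PCA output schema:

```python
from collections import Counter

def calls_per_day_for_topic(calls, topic):
    """Count calls per day whose detected topic matches `topic`."""
    counts = Counter(
        call["date"] for call in calls if call["topic"] == topic
    )
    # Sort by date so the timeline reads left to right.
    return dict(sorted(counts.items()))

calls = [
    {"date": "2023-11-01", "topic": "balance inquiry"},
    {"date": "2023-11-01", "topic": "balance inquiry"},
    {"date": "2023-11-02", "topic": "address change"},
    {"date": "2023-11-02", "topic": "balance inquiry"},
]
timeline = calls_per_day_for_topic(calls, "balance inquiry")
# → {"2023-11-01": 2, "2023-11-02": 1}
```

In practice this aggregation would feed a QuickSight visual rather than run in a script.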
Okay, now I'm gonna move on to one of my favorite slides.
At Principal, as we think about understanding
the customer experience,
our goal is simple.
Ultimately, we want to deliver a simplified, personalized,
and anticipatory customer experience
that builds a feeling of security
when customers interact with us
using their preferred channel,
in this case, either email or voice.
This is extremely important, because the cornerstone
for our customer experience now
is PCA, the voice interaction,
and the richness that we get from all that information,
and we are now combining that with email interactions.
For us it's extremely important to be dealing with the what:
what are the customers talking about?
Those are the topics.
But it's also extremely important
to be relying on the why.
Why are the customers calling?
Why are the customers emailing us?
Right?
Topics and intents are the way
that we are using to connect different channels.
So, I already explained that we are relying on Neptune
for this specific purpose,
and we are able to find hidden relationships
in the customer experience when customers use
or interact through multiple channels.
So the omnichannel experience
is extremely important for us,
and that's exactly where we are moving.
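As an illustration of the kind of cross-channel linkage a graph store like Neptune enables, here is a tiny in-memory sketch. In practice this would be a Gremlin or SPARQL query against Neptune, and the field names here are assumptions:

```python
def customers_on_multiple_channels(interactions):
    """Find customer IDs seen on more than one channel (voice, email, ...)."""
    channels_by_customer = {}
    for event in interactions:
        channels_by_customer.setdefault(event["customer_id"], set()).add(
            event["channel"]
        )
    # Keep only customers whose interactions span multiple channels.
    return {
        cid for cid, channels in channels_by_customer.items()
        if len(channels) > 1
    }

interactions = [
    {"customer_id": "c1", "channel": "voice"},
    {"customer_id": "c1", "channel": "email"},
    {"customer_id": "c2", "channel": "voice"},
]
omnichannel = customers_on_multiple_channels(interactions)
# → {"c1"}
```

A graph database does the same join implicitly by traversing customer-to-interaction edges, which scales better as more channels and relationship types are added.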
I'm gonna be detailing some of those activities
within the roadmap that we are gonna be facing next year.
We are looking to continue our partnership with AWS
executing on a very exciting roadmap.
So for phase one, which is already in production,
we're using post call analytics
enabling PII redaction,
topic hierarchy definition and summarization.
To date, we've processed over 1 million calls
from multiple contact center queues,
which has provided insight
into the content of customer calls.
With enhanced AI capabilities,
we are relying on large language models
to gather additional customer insights.
For phase two,
I already mentioned about email interaction.
We are already processing email interactions,
and we are looking to get additional integrations
with Google Analytics,
because that's the platform we are using
for our tagging strategy,
bringing that digital interaction into the equation.
And of course we are looking to improve
our topic hierarchy definition
considering new business domains.
Phase three is, I would say,
extremely strategic for us right now,
because considering the substantial progress
that we have had with PCA
and all of the different data points
that we are able to process,
we said we need to enable an intelligent agent,
basically relying on PCA data
but eventually complemented
by additional knowledge bases.
So now we are working with AWS
to deploy intelligent agents supported on the AWS QnABot solution,
infused by Amazon Bedrock,
and of course using Kendra as a pivotal platform.
Why am I pointing to Kendra?
Because we are facing a RAG approach,
that is, retrieval-augmented generation,
pointing to the PCA data,
complemented by additional knowledge bases.
QnABot provides the functionality
to create our own knowledge bases
or to point to preexisting ones.
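A minimal sketch of the RAG flow described here: retrieve passages from Kendra, then ground the generation prompt on them. The Kendra call is left as a comment so the sketch runs offline; the passages, prompt wording, and index ID are placeholders:

```python
def build_rag_prompt(question, passages):
    """Ground a generation prompt on retrieved passages (RAG)."""
    # Number the passages so the model can cite them.
    context = "\n\n".join(
        f"[{i + 1}] {p}" for i, p in enumerate(passages)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# In a real deployment, passages would come from Amazon Kendra, e.g.:
# kendra = boto3.client("kendra")
# resp = kendra.retrieve(IndexId="<index-id>", QueryText=question)
# passages = [r["Content"] for r in resp["ResultItems"]]
passages = [
    "Call c1: customer asked about a balance inquiry.",
    "Call c2: customer requested an address change.",
]
prompt = build_rag_prompt("Why are customers calling?", passages)
```

The grounded prompt would then be sent to a Bedrock model, with QnABot orchestrating the retrieval and the chat interface.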
The user interface is gonna be a chatbot-like interface,
but it is gonna be circumscribed
to customer omnichannel interaction.
So that's the challenge that we're facing,
and I would say it is a challenge,
but it is gonna be a really,
really exciting roadmap for next year.
So Aartika.
- Awesome.
- Thank you so much.
- Thanks, Miguel.
(audience applauds)
All right.
Next steps.
If you wanna get in touch with us,
you can ask us for a discovery workshop
or to start a proof of concept.
You can work with the different contact center platform
providers that Chris showed,
with Contact Center Intelligence Solutions
or you can reach out to us
to know more about the Amazon Connect solution.
You can work with our AWS experts, the ProServe team,
our long list of CCI partners, consulting partners,
and ISVs.
Before we let you go, we do wanna leave you
with some resources which will help you understand
more about all the solutions that we just spoke about.
And if you wanna know more about AI/ML in contact centers,
we do have an interesting list of sessions lined up
for the rest of the week that you can attend,
workshops, chalk talks, and other breakout sessions.
This was all of us.
Do remember to fill in the survey, give us your feedback,
that's always very helpful
and we'll open it up for questions.
I'm happy to come to you or you can stand up
and shout at the top of your voice
to ask all your questions.