AI-Driven Supply Chains: 3 Cases | MIT SCALE Webinar | English
Summary
TLDR: In this video script, experts from the MIT Global SCALE Network discuss applications of artificial intelligence (AI) in supply chain management. Maria Jesus Saenz, director of the MIT Digital Supply Chain Transformation Lab, introduces the role of AI in supply chain transformation, stressing that AI is not just a technology upgrade but a comprehensive approach involving leadership, strategy, and performance expectations. Through a Dell case study, she shows how AI enables end-to-end supply chain optimization, improving the accuracy and efficiency of order fulfillment. Dr. Yasel Costa explains how inspiration drawn from nature, in the form of bio-inspired algorithms, can optimize transportation routes, illustrated by a delivery-route optimization project at Samsonite. Cagil Kocyigit discusses the balance between efficiency and interpretability in AI-driven resource allocation decisions, showing through a real project on allocating housing resources to people experiencing homelessness in Los Angeles that both can be achieved together. The discussion highlights the practicality and strategic importance of AI in supply chain management, and how AI can improve the quality, efficiency, and transparency of decisions.
Takeaways
- 🌟 Applications of artificial intelligence (AI) in supply chain management are real and diverse, spanning areas from supply chain network design to green vehicle routing.
- 📈 AI-driven supply chain transformation is not just an upgrade of technology or algorithms, but a data-enabled transition toward a value-driven, end-to-end supply chain.
- 🤝 The MIT Global SCALE Network is a worldwide network of education and research centers focused on applied research in logistics and supply chains, working with more than 150 corporate partners.
- 📊 Dell uses AI to optimize its supply chain, improving performance through predictive capabilities, real-time execution, and root cause analysis, with the "Perfect Order Index" as its measuring KPI.
- 🚀 Dr. Yasel Costa discusses bio-inspired AI for optimizing delivery routes, especially how AI algorithms improve efficiency in highly dynamic and stochastic settings.
- 💡 Cagil Kocyigit explores the balance between efficiency and interpretability in data-driven decision making, emphasizing that achieving both in practice is possible.
- 🔍 AI can improve end-to-end visibility, going beyond internal ERP data to include external signals such as suppliers' ESG scores and other real-time information.
- 📉 In a case in Santiago de Chile, an ant colony optimization (ACO) algorithm reduced the fleet size by 50% and significantly lowered transportation costs.
- 🤔 Applying AI in supply chains requires considering data maturity and quality, and tailoring solutions to a company's specific needs and context.
- 📚 Even projects not directly related to supply chains, such as housing allocation for people experiencing homelessness in Los Angeles, offer data-driven solution approaches that can inspire new thinking on resource allocation in supply chains and logistics.
- ⚖️ When using AI for decision making, potential ethical issues such as discrimination must be considered to ensure the fairness and transparency of AI systems.
Q & A
How should we understand the MIT Global SCALE Network's vision for applying AI in supply chain management?
-The vision of the MIT Global SCALE Network is to shape the future of supply chains through applied research, working with more than 150 corporate partners, educating more than 200 students each year, and maintaining a global alumni network. They emphasize the practical application of AI in today's supply chains and operations, showing that AI is part of reality.
How can the 'Frankenstein effect' be avoided when applying AI in supply chains?
-The 'Frankenstein effect' refers to AI components, such as last-mile delivery and forecasting, that do not talk to each other: isolated pieces that must be continually polished to form a cohesive view of AI. Overcoming it is a long journey that requires the capability to scale AI applications from prototypes to broader regions, processes, and products.
How does Dell use AI to optimize its end-to-end supply chain?
-Dell integrates AI into its supply chain by focusing on its vision, strategy, and performance expectations. It developed five experiences, with particular attention to 'making the right commitment.' Dell uses AI for prediction, for executing prescriptive actions in real time, and for root cause analysis after the order, to improve the efficiency and responsiveness of its supply chain.
How does Dr. Yasel Costa apply nature-inspired algorithms to logistics optimization?
-Dr. Yasel Costa uses nature-inspired algorithms, such as evolutionary algorithms and ant colony optimization, to solve delivery-route optimization problems in logistics. These algorithms mimic behaviors found in nature, such as species evolution and ants finding the shortest path to food, to improve the efficiency of logistics network design.
How does Dr. Cagil Kocyigit balance efficiency and interpretability in AI decisions?
-Through her project, Dr. Cagil Kocyigit shows that achieving both efficiency and interpretability of AI decisions in practice is possible. She emphasizes the importance of humans understanding the decision process in order to trust and implement decisions made by models, especially in resource allocation problems.
How can AI improve end-to-end supply chain visibility?
-Beyond internal systems such as ERP, AI can integrate external signals, such as real-time supplier data and market changes, to enrich supply chain visibility. AI can learn from both structured and unstructured data, providing more comprehensive decision support.
How can value creation be quantified in an AI-driven supply chain transformation?
-Value creation can be quantified by developing key performance indicators (KPIs), such as Dell's Perfect Order Index (POI), which measures the accuracy of every stage of an order from preparation to delivery. In addition, key learning indicators (KLIs) monitor AI learning progress and translate it into economic benefits.
How should insufficient data maturity be handled when implementing new AI-driven replenishment software?
-Even with insufficient data maturity, AI models can be adopted that generate information from limited data, or simulated data and fuzzy inference systems can be used to handle data scarcity. The key is to find an AI approach suited to the current data conditions.
What is the typical percentage improvement from using AI for demand forecasting?
-There is no fixed percentage range, because the improvement AI brings to demand forecasting depends on many factors, including data quality, model choice, and specific business needs. AI models must be tailored and optimized for each case to achieve the best forecasting results.
How can potential discrimination or ethical issues be avoided in AI decisions?
-Simply removing certain information (such as race) from the data is not enough to prevent AI from using it in decisions, because other highly correlated data can act as a proxy for that information. A more comprehensive strategy is needed, including fair algorithms, transparency, and continuous monitoring, to ensure the fairness of AI decisions.
What is the difference between AI and data science in supply chain management?
-AI and data science are overlapping fields in supply chain management. Data science focuses more on discovering knowledge from data, while AI emphasizes machine learning and cognitive functions. In practice, the two are often combined to improve supply chain efficiency and effectiveness.
Outlines
😀 Opening Remarks and Forum Introduction
At the start of the video, host Maria Jesus Saenz welcomes the audience to a discussion on AI-driven supply chains and introduces representatives from several centers of the MIT Global SCALE Network. She announces that the session will showcase real applications of AI in supply chain management, emphasizing that these technologies are already a reality. Maria then introduces the two panelists, Yasel Costa and Cagil Kocyigit, who work in industrial engineering and logistics optimization research. Finally, she briefly describes the MIT Global SCALE Network's global partnerships and its education and research programs.
😀 Forum Agenda and Case Studies
Maria explains the flow of the webinar: short introductions followed by three case studies. First, she will discuss how Dell uses AI for end-to-end planning. The second case study, presented by Yasel Costa, covers the use of bio-inspired AI to optimize delivery routes at Samsonite. The third case, presented by Cagil Kocyigit, discusses the trade-off between efficiency and interpretability in data-driven decision making. Finally, she notes that participants can ask questions via the chat during the Q&A session.
😀 Defining AI and Its Applications
In this segment, Maria stresses the importance of defining AI for the webinar and explains the shared understanding of AI applications among the three speakers. She proposes defining AI as machines, algorithms, or technologies performing human cognitive functions such as perceiving, learning, and problem solving. Maria then uses the Dell example to show AI's role in digital supply chain transformation and discusses how AI helps companies discover value from data and advance toward end-to-end supply chains.
😀 Panel Discussion: Yasel's AI Applications
Yasel Costa details the bio-inspired AI algorithms he uses in logistics and supply chain optimization. He discusses the importance of learning from nature and applying it to algorithms, especially for knowledge discovery and optimization problems. Yasel notes that systems like ChatGPT are built on neural networks trained on massive amounts of data. He also mentions other nature-inspired algorithms, such as those based on the behavior of real ants, which are effective tools for solving transportation and assignment problems.
😀 Cagil's Analysis: Efficiency and Interpretability of AI
Cagil Kocyigit discusses the relationship between efficiency and interpretability in AI decision making, illustrated by a project on allocating housing resources to people experiencing homelessness in LA County. She emphasizes that the interpretability of a solution is critical for gaining trust and enabling implementation, and proposes optimizing resource allocation while maintaining fairness through a simple queuing policy with opportunity-cost adjustments. She also mentions how her team validated the effectiveness of this policy on real data.
😀 Closing and Interactive Q&A
In the final part of the forum, Maria thanks all the participants.
Mindmap
Keywords
💡Artificial Intelligence (AI)
💡Supply Chain Management
💡MIT Global SCALE Network
💡Bio-inspired AI
💡Data-Driven Decision Making
💡End-to-End Planning
💡Interpretability
💡Resource Allocation
💡Machine Learning (ML)
💡Predictive Capabilities
Highlights
Discussed applications of AI in supply chain management, emphasizing AI's practical use in today's supply chains and operations.
Introduced experts from several centers of the MIT Global SCALE Network, who shared their latest research on AI in supply chains.
Dr. Yasel Costa explored the use of bio-inspired AI to optimize delivery routes at Samsonite.
Dr. Cagil Kocyigit discussed AI applications for fairness and efficiency in resource allocation, particularly in her research at the University of Luxembourg.
Maria Jesus Saenz introduced the MIT Digital Supply Chain Transformation Lab and her leadership role in the Supply Chain Management master's program.
Emphasized the MIT Global SCALE Network's global collaborations and education programs, including partnerships with more than 150 companies.
Discussed the diversity of AI definitions and the importance of defining AI for each specific application.
How Dell uses AI for end-to-end planning and how it connects AI with leadership, vision, and strategy.
Proposed the Perfect Order Index as a KPI for measuring the value of commitments in the supply chain.
Discussed challenges in applying AI in supply chains, including the Frankenstein effect and technocentrism.
Emphasized the importance of scalability when implementing AI solutions, moving from pilot prototypes to broader processes.
Described how AI can improve end-to-end supply chain visibility, covering both internal ERP systems and external signals.
Discussed AI's potential in demand forecasting and how tailored AI/ML models can improve forecast accuracy.
Highlighted the trade-off between efficiency and interpretability in AI decisions, illustrated by the LA homeless housing allocation project.
Proposed a data-driven resource allocation solution that is both efficient and interpretable, and discussed its potential applications in supply chains.
Discussed ethical issues of AI in supply chain management, particularly the challenge that removing race information from data does not prevent discrimination.
Emphasized that AI in supply chains is a reality, demonstrated through the Dell case study showing how AI helps achieve business goals.
Discussed future directions for AI in supply chains, including engagement with the global alumni network and ongoing education programs.
Transcripts
- Hello everybody.
Good morning, good evening, wherever you are.
Thank you very much for being with us today.
It's a pleasure.
We are really excited to share the input from all,
I mean many centers from the MIT Global SCALE Network.
We are going to talk today about AI driven supply chains.
So let me share this screen
so then we can elaborate a little bit more.
I hope that then you can see my slides.
Can you?
Great.
So thank you.
So today again, we have a great set
of panelists here that we are going
to share our latest research
in the area of applications of AI
in supply chain management.
Actually, we wanted
to emphasize that they are all applications.
We wanted to show you that again,
AI is a reality in today's landscape of supply chain
and operations.
Let me introduce the panelists
that we are going to have today.
Let's start first with Yasel Costa.
Yasel is an industrial engineer from University Marta Abreu,
and then he obtained his doctoral degree
from the prestigious German institution Otto von Guericke,
sorry Yasel, I mean, I can't pronounce it well.
And then his research interest covers a variety
of diverse topics.
Supply chain network design, sustainable operations,
green vehicle routing problems.
Also he is director
of the PhD program of Zaragoza Logistics Center.
Zaragoza Logistics Center is our first center
that created the core of the MIT Global SCALE Network.
Welcome, Yasel.
So the next panelist is.
Let me see if I am okay pronouncing it, Cagil Kocyigit.
So we are glad to have you here.
Cagil is great.
So she is assistant professor of the Luxembourg Center
for Logistics and Supply Chain Management
at University of Luxembourg.
Her research focuses on optimization
under uncertainty and its applications,
and on policy design and learning optimization,
especially for resource allocation, fairness, and equity.
Very exciting topics.
She is a PhD from the Ecole Polytechnique Federale
de Lausanne.
So yeah, this is a great panel here.
I will introduce myself as well.
My name is Maria Jesus Saenz.
I am the director
of the MIT Digital Supply Chain Transformation Laboratory
and also the executive director
of the MIT Supply Chain Management Master's Program.
I have been working for Global SCALE Network since 2003.
Actually I was at Zaragoza Logistics Center,
so I know very well the Global SCALE Network
and I'm very proud
of what we are doing there just to shape the future
of supply chains.
Okay, so before starting,
let me share what the MIT Global SCALE Network is.
We are a set of centers all over the world actually.
Then we at MIT are here
and also Zaragoza Logistics Center, Luxembourg.
But also we have the Ningbo supply chain center in China.
Also we have CLI in Colombia,
but it's a network of universities
and institutions all over Latin America.
In total, these are our figures.
We have more than ten educational programs,
master's degrees, executive education certificates,
more than 80 researchers and faculty from all
over the world with a variety of topics.
All of us working in logistics and supply chain.
Our main, main, let's say feature is that all of us,
we are doing applied research.
We want to shape the future of supply chain.
So this is why we work with more
than 150 corporate partnerships.
And every year we are educating more than 200 students.
And then we have a rich network of alumni
from all over the world that are super committed
and they are coming
to MIT every single year here in January.
So then with that also before starting,
I wanted to emphasize that we have a lot
of different events just in a couple of hours.
11:00 AM today, we have Dr. Christopher Mejia talking
about social driven supply chain network design.
So how AI can help
to bring nutrition to underserved communities.
But please go to CTL event website.
We have there, for example,
the POMS conference in Latin America.
We have our annual events, the MIT CTL
annual event, Crossroads.
Go to CTL events and then please register.
We'd love to share all our insights with all of you
and discuss your challenges and opportunities with us.
Okay, so then we have one hour.
So then we need to go with the clock very carefully.
This is our panel dynamics.
We have these short introductions.
Then we are going to have three case studies.
I told you we want to make it very practice oriented,
very actionable.
Then I'm going to start talking about how Dell is leading.
Right now we are working with Dell closely.
How it is leading the supply chain using AI in different topics,
especially end to end planning.
Then second case study will be with Dr. Yasel Costa
from Zaragoza Logistics Center, as I mentioned.
He will talk about bio-inspired AI
in the optimization of delivery routes at Samsonite.
So again, we are bringing this
to your companies just to illustrate that this is a reality.
And the third case study is by Cagil Kocyigit
about data driven decisions with AI.
We will talk about especially efficiency
and interpretability.
What are the trade-offs between two key words for AI,
efficiency and interpretability?
I love it.
And then we will have a panel discussion with you.
So the dynamics
is that you introduce your questions
in the Q&A chat, and then we will moderate.
We will read all of them in order to bring the questions in,
I would say, the last 25 minutes.
We want to have time for having discussion with you.
So this is why we are going to try to be short
in our presentations.
So then let's start.
Let's start.
Some weeks ago here at MIT CTL,
all the researchers, around 60 researchers,
sat together for almost two hours just
to discuss what artificial intelligence is,
what we understand by artificial intelligence.
And yeah, the beauty of that is that we couldn't agree,
we couldn't reach a consensus on one single definition of AI.
This makes sense.
Why?
Because then AI could be interpreted as an aspiration
about what could be.
So it's very important
for whatever kind of application of AI that we are doing,
that we define in advance what do we interpret by AI.
And this is why here with Yasel and Cagil
that we decided to agree what kind of definition of AI,
or what kind of focal point we are putting in AI
for the three applications that we are going
to share with you.
And this is what we think that could be a good understanding
of AI for the purpose of today's webinar.
I am sure that you have other definitions
in your application, and it's totally okay.
Please don't interpret this as the definition.
We don't want to bring here the definition,
because now the application is
so broad that, Zoom is helping me with that.
This is great.
Then again, the AI is so broad that then it is difficult
to have one single definition.
So this is what we understand
for the purpose of this webinar:
AI can be defined as the ability of a machine, an algorithm,
a technology to perform cognitive functions associated
with human minds, such as perceiving and learning.
We are emphasizing learning because the three of us,
we are going to emphasize how AI is helping us to learn.
Helping us means the organizations that are using,
applying AI, interacting with the environment,
problem solving, and interpreting, among others.
So I will start with how Dell is interpreting AI.
And I want to be quite quick,
because then we want to be agile with our webinar.
Then let's start with what we understood
about AI driven digital supply chain transformation.
And then we consider that what Dell did here is much more complex
than just renewing technology or renewing algorithms,
or translating processes into algorithms.
Much more than that.
And then we will see the Dell case.
So, let me start.
This is the definition that here
in the Digital Supply Chain Transformation Lab at MIT,
what we understand about AI driven supply chain, then,
especially transformation:
it is the application of AI as a technology.
And then it could be algorithms,
could be cobots or robots that are driven by AI algorithms,
where we use data to transition towards a value driven,
end to end supply chain.
If I had to highlight two key words here, they are value
and end to end.
Value is something that you expect,
and sometimes AI helps us to discover it.
End to end is an aim, a goal,
and then only a few companies are really doing end to end.
But let's see how Dell is doing end to end.
What we have observed
in the companies is that there are different challenges
and difficulties for applying AI, especially end to end.
And then it's much more complex.
And then first we observe the challenge
of the Frankenstein effect.
Then, okay, you are having different components of AI.
One AI is in the last mile delivery.
Typically one AI is in forecasting
and they are not talking to each other.
So then they are like isolated pieces that need
to be polished and polished and polished
in order to have a more cohesive view of AI.
This is a journey.
It's not something that happens even in months.
It will require years.
I will share what Dell is doing.
Dell has been working with this kind of approach for, I would say,
five years right now.
They continue working with this vision.
Then not only Frankenstein effect,
but also there are other issues or challenges.
Technocentrism is when a company then focuses too much
on technology, then everything is focused on technology.
Let me translate.
I mean, optimizing my cost
in last mile delivery according
to how I am running my last mile delivery right now is wrong,
because the idea should be
to envision how you want to do the last mile delivery
and then develop the algorithm for this future vision,
instead of just translating what you
are doing right now.
So technology can help you in order to be more efficient.
But then the focal point is not on technology.
What technology can do for me,
the focal point is how do I envision,
that's my delivery process,
and then how technology, AI can help me.
It's a completely different vision.
And then you will achieve more with focusing on that vision.
And scalability.
So companies, sometimes they make pilot prototypes
of AI application.
This is great.
But typically this is
based on very highly motivated people with very clean
and available and granular data.
This is the perfect data set.
When you go to reality,
all these components are not so easy to get.
So this is why it's important
that the companies have the capability of scaling up,
of being able to do prototyping,
and then moving the prototyping to scale up to more regions,
to more processes, to more SKUs, et cetera.
So the lack of scalability capability is a problem.
And we have observed that most successful companies,
especially with AI, they are comfortable exploring,
experimenting and scaling this up.
And then this is very important.
Then, let's talk about Dell.
They started their digital supply chain
transformation journey in 2017.
Then they were asking what technology can do for me.
And they discovered that then they should focus
on their vision,
focus on their strategy and their performance expectation.
And actually they developed these five experiences.
Let's focus on particular this one.
Make the right commitment.
Let me explain how they deployed AI,
and especially how they connect with leadership
and vision and strategy,
with the performance as an anchor point to make AI scale up.
This is the idea.
Make the right commitment.
Make the right commitment
for them was to put a commitment
in the north star of their vision, commitment beforehand.
So when they commit with a customer,
let's say 100 laptops for a retailer, yeah,
we can deliver in, let's say four days.
So this is a commitment that they establish in advance.
Then they can monitor the commitment, the order, end to end,
and then also after the fact, they can go,
and then they can monitor what could be going on.
This forward looking approach, this future approach,
analyzing with AI, root cause analysis, for example.
So then this approach of before the order, during the order,
after the order, is very powerful,
especially knowing how
to uncover what the expected performance is
that is committed with an order, end to end.
This is very powerful.
So then AI, sorry, Dell made this kind of loop with AI.
This is really interesting
in terms of how they measure performance,
both of the business, how AI is impacting the business,
and second, how they are scaling this up,
how they are expanding the reach and the effect of AI.
First, they started with value identification.
So then in this case it was commitment.
So this is a north star.
When we define AI driven supply chain transformation,
the value expectation is in commitment.
They wanted to measure, to quantify commitment.
Also they wanted to quantify this commitment with a KPI.
And they developed a KPI that is aimed to be end to end.
This KPI is the perfect order index.
It's the percentage of time that every element
of the order meets its commitment, end to end.
Let's say, for example, logistics service provider,
that is preparing an order.
What is the percentage
of the time that they are meeting the expected commitment,
the committed commitment, let's say.
So the perfect order index is very end to end,
because then you deploy the different components
of the order while preparing,
but also in advance and then forward looking.
So then this is the way they quantified the value
in terms of a KPI.
That is perfect order index.
It's not only a simple on time in full,
because what they are doing is just splitting
into different components from the stakeholders
that are participating.
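As a rough illustration of the perfect order index idea described above, here is one possible way such a KPI could be computed. The stage names, the hour units, and the "every stage meets its commitment" rule are assumptions made for this sketch, not Dell's actual metric definition:

```python
from dataclasses import dataclass

@dataclass
class OrderStage:
    """One element of an order's journey, e.g. an LSP preparing the order."""
    name: str
    committed_hours: float   # commitment established in advance
    actual_hours: float      # observed during/after execution

def stage_ok(stage: OrderStage) -> bool:
    """A stage is 'perfect' if it met its pre-established commitment."""
    return stage.actual_hours <= stage.committed_hours

def perfect_order_index(orders: list) -> float:
    """Share of orders in which *every* stage met its commitment."""
    perfect = sum(all(stage_ok(s) for s in order) for order in orders)
    return perfect / len(orders)

orders = [
    [OrderStage("picking", 24, 20), OrderStage("transport", 96, 90)],  # perfect
    [OrderStage("picking", 24, 30), OrderStage("transport", 96, 80)],  # late pick
]
print(perfect_order_index(orders))  # 0.5
```

Note how splitting the index per stage makes it more end to end than a single on-time-in-full flag: the late picking stage in the second order identifies which stakeholder's contribution to monitor, which is what a drop in POI would trigger root cause analysis on.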
Value creation is about the key learning indicators:
what they are learning,
how they are activating these groups,
how they are scaling this up,
how they are progressing with artificial intelligence.
You remember, AI learns or is expected to learn.
So key learning indicator is important.
It's for example the delta,
how we increase perfect order index,
or maybe how we decrease
or we've had a problem with this logistics service provider,
they made a delay, so they are decreasing.
Why this happened?
So this decrease
in POI should trigger some root cause analysis.
Why this happened?
Just to avoid that this will happen in the future.
And net promoter score is another typical KLI.
Then they need to transform all this in money.
Of course, money is important.
And then they map the AI to money, end to end.
What are the different impacts of the perfect order index?
If we have a change in the commitment,
how does this translate into money, or the opposite:
if we are improving, how much we are saving.
And value appropriation is very important.
We are talking about supply chain.
So then how we incentivize our stakeholders,
for example suppliers
or this logistics service provider that is always on time
as expected, because we monitor his contribution
to perfect order index, to POI,
and how we incentivize this attachment to commitment.
And then the loop starts again.
AI is present in several facets here.
So then AI is present before making the commitment,
because we predict the capabilities; during the commitment,
because we are executing
in real time what kind of prescriptive actions we can take;
And also after the order,
because we can say what are the future scenarios
with root cause analysis.
So all these predictive capabilities, for example,
with forecasting demand, of course,
but forecasting lead time, root cause, et cetera.
And also, for example with resilience,
monitoring the risks behind, thinking ahead.
With that, I am finishing.
Yeah, I told you we wanted to be quick, dynamic.
We wanted to make sure that then you are engaged.
Let's go to the second case.
Dr. Yasel Costa.
Doctor, ready?
- Yes.
Can you see my screen?
- Yeah, perfect.
- Excellent.
So thank you so much, Maria.
I'm so glad to be here joining you guys.
I've been learning a lot from your presentation.
I do have another definition of AI.
It's certainly not science fiction, right?
And I do like that word about learning.
We consider that AI
is constantly learning from different kinds of sources,
some kind of creative learning, right?
And this is exactly the point of my presentation today,
but in a very applied context.
So when you double check sometimes the different AI based
algorithmic proposals,
they are already linked with these two fields,
according to my understanding:
mostly related to knowledge discovery,
but very few applications in the context of optimization,
such as traditional problem solving, for instance.
This learning that I mentioned has
to do basically with natural inspiration.
When we learn from nature, we can abstract
the most creative knowledge.
And for that, we have been using it repeatedly since,
I don't know, ancient times,
and in different industries, of course:
we talk about the manufacturing sector,
biological sector, pharmaceutical sector, right?
And from that learning, of course, we have some, many,
I would say, application contexts.
In the field of knowledge discovery, for instance,
one of the most famous ones has
to do with neural networks, right?
That natural inspiration is related to the biological activity
or the bioelectricity that flows through our brain.
And of course, we just want
to understand what's the best output,
considering multiple inputs.
So when we compare that
with traditional regression analysis,
AI based algorithms are simply superior, right?
And I will say that the most effective AI applications,
they all have natural inspiration
or a bio-inspired source, right?
So in many cases,
when we hear and get excited about ChatGPT,
what do we have right behind the algorithm of ChatGPT?
It's clear.
So there are multiple
kinds of neural networks trained with billions
and billions of cases within the knowledge base.
So that's where
you constantly see multiple applications
of artificial intelligence,
and particularly bio-inspired algorithmic proposals, right?
But this is not all.
There are many other application contexts
where we could see different sources of natural inspiration.
And, well, this is very well known,
like the evolutionary algorithms, in particular
the first one proposed, the genetic algorithm,
all inspired by the evolution
of species, where the most adapted individuals
prevail, right?
So in our case, there's no individual anymore.
When I try to put this into the context of logistics,
it can be seen as a distribution problem where,
I don't know, we take two different solutions
and then we cross these solutions
in order to get a better adapted solution.
In our context,
that would mean less total travel distance, right?
And for instance, for this particular one,
we had parents where one has four vehicles
in the fleet, and this one has three.
And then we get a better adapted solution
with a better total travel time.
And in that case, we have a fleet size equal to three.
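The crossover idea just described, where two parent delivery plans are combined and the better-adapted offspring survive, can be sketched as a toy genetic algorithm on a single delivery tour. This is a generic illustration, not the algorithm used in the case study: the customer coordinates, population size, and mutation rate are all invented for the example.

```python
import random

# Toy customer coordinates; index 0 is the depot (invented for illustration).
POINTS = [(0, 0), (2, 3), (5, 4), (1, 7), (6, 1), (4, 6)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(perm):
    """Fitness: depot -> customers in `perm` order -> depot."""
    path = [0] + list(perm) + [0]
    return sum(dist(POINTS[path[i]], POINTS[path[i + 1]])
               for i in range(len(path) - 1))

def crossover(p1, p2):
    """Order crossover: keep a slice of parent 1, fill the rest in parent 2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = p1[i:j]
    child += [c for c in p2 if c not in child]
    return child

def evolve(generations=200, pop_size=30, seed=7):
    random.seed(seed)
    customers = list(range(1, len(POINTS)))
    pop = [random.sample(customers, len(customers)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)            # the most adapted individuals prevail
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c = crossover(a, b)
            if random.random() < 0.2:        # occasional swap mutation
                x, y = random.sample(range(len(c)), 2)
                c[x], c[y] = c[y], c[x]
            children.append(c)
        pop = parents + children
    return min(pop, key=tour_length)

best = evolve()
print(best, tour_length(best))
```

In a real vehicle routing setting the chromosome would also encode the split of customers across vehicles, which is how crossover can produce offspring needing three trucks instead of four.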
This is what we were trying to do,
but with a different source of inspiration.
And this algorithm is also well known,
and it's truly inspired by the behavior of real ants,
which constantly find the shortest path between the nest
and the source of food.
And of course, artificial intelligence
in this context also revealed a nice feature,
which is swarm intelligence.
So a single ant, and it basically doesn't matter if it is a real
or an artificial ant, makes a random selection
of the path here.
So we clearly see that the shortest path is this one.
So once there is an ant that realizes,
or randomly selects, this shorter path,
then it leaves a trail of pheromone, and of course,
the next ant will take the trail
where the pheromone smell is stronger.
So that kind of collective,
or what is called technically swarm intelligence,
helps us a lot to solve transportation problems.
Like, for instance, one
that could be described according to these metrics here.
And of course, if we have multiple ants departing
from different cells here,
and then following all the subsequent stages,
then we explore a greater area within the solution space,
the solution space that traditionally
describes transportation problems,
resource assignment problems,
even the ones that Maria mentioned,
the forecasting problems
in the field of knowledge discovery.
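The pheromone mechanism described above can be sketched in a few lines. This is a generic, textbook-style ant colony optimization over an invented five-stop distance matrix, not the algorithm used in the Santiago de Chile case; the distances and the parameter values (alpha, beta, evaporation rate rho) are assumptions for illustration only.

```python
import random

# Symmetric travel costs between 5 stops (0 = depot/nest); toy numbers.
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(D)

def ant_tour(pher, alpha=1.0, beta=2.0):
    """One artificial ant builds a tour, biased by pheromone and distance."""
    tour, unvisited = [0], set(range(1, N))
    while unvisited:
        i = tour[-1]
        weights = [(j, (pher[i][j] ** alpha) * ((1.0 / D[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        for j, w in weights:           # roulette-wheel selection of the next stop
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour + [0]                  # return to the nest

def tour_cost(tour):
    return sum(D[a][b] for a, b in zip(tour, tour[1:]))

def aco(iterations=100, ants=10, rho=0.5, seed=1):
    random.seed(seed)
    pher = [[1.0] * N for _ in range(N)]
    best = None
    for _ in range(iterations):
        tours = [ant_tour(pher) for _ in range(ants)]
        for i in range(N):             # pheromone evaporation
            for j in range(N):
                pher[i][j] *= (1 - rho)
        for t in tours:                # deposit: shorter tours leave stronger trails
            for a, b in zip(t, t[1:]):
                pher[a][b] += 1.0 / tour_cost(t)
                pher[b][a] += 1.0 / tour_cost(t)
        cand = min(tours, key=tour_cost)
        if best is None or tour_cost(cand) < tour_cost(best):
            best = cand
    return best

best = aco()
print(best, tour_cost(best))
```

Releasing many ants per iteration is the swarm-intelligence part: each one explores a different region of the solution space, and the shared pheromone matrix gradually concentrates the search on short paths.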
So we use that inspiration to solve a realistic problem.
In this case, the problem was set up in Chile,
and Santiago de Chile particularly.
And this problem was about a daily delivery process.
We had 350 customers geographically spread across that city
I mentioned, and there was a 3PL that basically was hired
for doing those product deliveries
in the last mile context.
And in that regard,
they charged money based on the fleet size,
and they had a homogeneous fleet of vehicles, of course,
not what is traditionally called
vehicles with differing capacity.
And it was very challenging.
Why?
Because when they made a contract with the customer,
the customer clearly emphasized
one of the most difficult constraints
in this problem setting.
And it's about the time windows, right?
So we have very tight time windows
that impose a very strong constraint
on the optimization process.
And sometimes, when you have demand peaks during the day,
with customers that you wouldn't anticipate,
then the problem is no longer only stochastic.
It's also a problem with a dynamic structure,
like the so-called dynamic vehicle routing problem,
where the customers appear and disappear,
and therefore the structure
of the problem changes over the planning horizon, right?
And of course, at the moment this was examined,
there was manual scheduling of the process,
which definitely takes a lot of time,
not even close to a reasonable time
for making a good operational decision, right?
So that's pretty much the idea with the problem.
And well, there were certainly penalties
when they were late, there were rebates, and of course,
most of the time that implied delays.
And we were talking about a problem
that is actually not considered a small scale problem
in the field of VRPs.
In the field of VRPs, more than 50 nodes,
or more than 50 customers
to which we should deliver something,
is considered a problem with substantial complexity.
So this is the way it looks, one-day delivery.
So as I mentioned,
the 3PL charged based on the fleet size
and many other things, but particularly the fleet size.
So this was for the business, how they made the decision.
And it took eight trucks
to complete that workload they had at the moment.
But when we were using our ACO optimization, AI inspired,
then we reduced the fleet size by 50%.
Not to mention that there was also a substantial reduction
in terms of the total cost, transportation cost, right?
About 38%.
So, before my time runs out:
this is a summary over more days of route planning,
and in total, in just ten days,
we could actually save like 24%.
Some cost metric,
I don't have time to mention what it was about.
And substantial reduction also in terms of the fleet size.
And one of the most important reductions was
in computational time: compared
with exact methods, which mostly find the optimal solution,
we reduced it substantially, of course.
And compared even
to the time that they were traditionally,
or frequently, using
to schedule the vehicles,
it was also a substantial reduction.
So this is one example
of how bio-inspired methods can be applied
to a very frequent problem in the logistics context,
which is, in this case, transportation.
So I hope you like it,
and I'll hand it over to my dear colleague.
And thank you very much, Maria.
- Thank you.
Let me share my screen.
- Thank you, Yasel.
- Do you see my slides?
- [maria] Yes.
- Thank you.
Can I just start this one?
Hello everyone.
I'm going to talk about the interplay
between efficiency and interpretability
when considering data driven decisions with AI.
Even though there is typically a trade off
between efficiency and interpretability of AI decisions,
I'm going to show you that achieving both efficiency
and interpretability simultaneously
can be possible in practice
by discussing a recent project of mine and my collaborators.
The project that I'm going
to discuss is not directly related to supply chains
or logistics, but it's a resource allocation problem.
And I'm going to argue
that a similar data driven solution approach can be used
for other resource and capacity allocation problems,
including those that arise in supply chains and logistics.
Okay, so when we talk about decisions,
including data driven decisions with AI,
we want them to be both efficient and interpretable.
Efficiency typically involves maximizing payoffs
while minimizing costs,
and interpretability means that humans can understand
and explain how decisions are made.
To emphasize: interpretability is not just
about understanding the models used;
it is important to understand the decisions themselves.
This is important in practice
because this allows us to trust the decisions made
by the models, making their implementation easier for us.
Actually, in my interactions with practitioners
from various fields,
including healthcare, logistics, and energy,
this desire for interpretability emerges as a common theme.
Practitioners always express
that they do not want decisions made by a black box.
They want to understand the decision making process.
Besides enabling trust,
interpretability is also important
for human machine collaboration,
which is arguably safer than relying solely
on machine made decisions.
So if humans can understand the decisions,
they can make adjustments as needed.
Efficiency of AI is unquestionable from my point of view,
but interpretability raises concerns.
For example, you may be aware
of that there are some ongoing lawsuits
against various institutions,
including some law firms and banks in US,
raising concerns about AI made decisions
allegedly discriminating people
based on protected features such as race.
It is really important to understand the decisions
and proactively prevent any potential discrimination
or ethical issue.
There is typically a tradeoff between efficiency
and interpretability of AI decisions.
The more advanced the model you use,
the better the decisions it tends to offer, but on the other hand,
the harder the model and its decisions are to interpret.
For example, you could consider models
such as gradient boosting
and neural networks for forecasting.
These are less interpretable compared to simpler models
such as linear regression or decision trees.
In the remainder of my talk, I'm going to talk
about a recent project of mine focusing on learning policies
for allocating scarce housing resources
to people experiencing homelessness in LA.
This project that I'm going to talk
about isn't directly about supply chains or logistics,
but I'm going to argue
that the solution approach can actually be applied
to other resource and capacity allocation problems.
And actually we are implementing,
we are trying to establish a similar data driven solution
framework for freight shipping revenue management
at the moment.
Okay, so the work I'm going to talk
about is inspired by housing allocation
for individuals experiencing homelessness in LA County.
According to the Los Angeles Homeless Services Authority,
LAHSA, there are more
than 75,000 people experiencing homelessness in LA,
whereas the availability of permanent housing units used
for supporting these people is extremely limited.
LAHSA currently uses a vulnerability tool to decide
on how to prioritize people
for different housing resource types.
When an individual seeks housing,
a survey for this individual is completed
and this survey contains questions
such as, how long has it been since you lived
in stable housing?
These survey responses are then used
to calculate a vulnerability score for each individual
and to make decisions about prioritization.
Unfortunately, the current system is not linked to outcomes
nor to capacity limitations.
Our objective in this project is
to use the data that is already there,
specifically the data
from the LA County Homeless
Management Information System Database,
to learn optimal policies
for online allocation of scarce housing resources
to people experiencing homelessness,
maximizing outcomes,
specifically maximizing the exits from homelessness,
while considering capacity limitations
and fairness with respect to protected features
such as race.
We propose a very simple queuing policy.
This policy establishes separate queues
for each of the housing resource types.
When an individual arrives in the system and seeks housing,
this policy assigns the individual to the queue
for the resource that maximizes their estimated likelihood
of exiting homelessness
if they receive that particular resource,
minus the opportunity cost of assigning that resource.
We estimate the likelihoods and opportunity costs
from the data that we have,
and we can use interpretable parametric models
such as logistic regression for estimating the likelihoods.
And we showed on real data
that these types of models actually perform well.
And to ensure different notions of fairness,
we can actually adjust opportunity costs
for different groups, for example,
lowering this cost for minority groups.
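As a rough illustration only, the routing rule just described can be sketched as follows. This is not the authors' implementation; the resource names, likelihoods, and costs below are invented toy values, and in the actual work these quantities are estimated from data, for example with logistic regression.

```python
# Schematic sketch of the queuing policy described above: each housing
# resource type has its own queue, and an arriving individual is routed
# to the resource maximizing (estimated exit likelihood) minus
# (opportunity cost). A fairness adjustment can lower the opportunity
# cost for prioritized groups. All numbers here are hypothetical.

def assign(likelihoods, opp_costs, group=None, fairness_discount=None):
    """Pick the resource maximizing likelihood minus (adjusted) cost.

    likelihoods: {resource: P(exit homelessness | resource)}
    opp_costs:   {resource: opportunity cost of giving that unit away}
    fairness_discount: optional {group: multiplier < 1} that lowers
        the opportunity cost charged to prioritized groups.
    """
    mult = 1.0
    if fairness_discount and group in fairness_discount:
        mult = fairness_discount[group]
    return max(likelihoods,
               key=lambda r: likelihoods[r] - mult * opp_costs[r])

likelihoods = {"rapid_rehousing": 0.55, "permanent_supportive": 0.75}
opp_costs = {"rapid_rehousing": 0.10, "permanent_supportive": 0.40}

# Scarcer permanent units carry a higher opportunity cost, so by default
# the cheaper resource wins; the fairness discount can flip that.
print(assign(likelihoods, opp_costs))
print(assign(likelihoods, opp_costs, group="minority",
             fairness_discount={"minority": 0.5}))
```

The point of the sketch is that the whole decision rule stays readable: every term in the score has a plain-language meaning.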
We actually managed
to prove theoretically that our proposed policy is optimal
in the long run, that is,
as the number of individuals arriving in the system grows.
But here I'm going to show you our results on real data,
because we also tested our policy on real data.
This plot here shows the proportion
of the population with a positive outcome,
specifically the proportion that exits homelessness,
on test data under historical allocations
and under our proposed policy.
Minority priority here represents our proposed policy
where we enforce fairness for outcomes.
This means that we want outcomes
for minority racial groups to be as high as those
for the majority racial groups, and in this case,
we consider Black, African American, Hispanic,
and other to be minority groups.
What you can see
from this plot is that under our proposed policy,
outcomes for almost every group improve
in comparison to the historical allocations,
and the overall improvement here roughly amounts
to 300 more people exiting homelessness per year
on the test data.
Due to limited time,
I can only give you a glimpse of our work and findings,
but if you are interested, I want to share this QR code
that would take you to our paper.
In addition, I would like
to mention that my co-author, Phebe Vayanos,
recently gave a TED AI talk on this topic.
So if you are interested,
I would encourage you to see the recording of her talk,
which is available from the TED webpage.
Okay, so to conclude,
I presented to you a data driven solution approach
for resource allocation that is both efficient
and interpretable.
Even though the housing allocation problem isn't directly
related to supply chains or logistics,
this solution approach can actually be applied
to other resource capacity allocation problems.
And actually, the solution approach itself is inspired
by bid price policies used in network revenue management.
As I mentioned before, actually with collaborators from ICL,
we are currently establishing a similar solution approach
for freight shipping revenue management.
And I anticipate
that this solution approach could incorporate some
sustainability targets similar to fairness integration.
For example, if we are talking about procurement
or supplier selection, targets of the sort:
I want at least 25% of all purchased goods
and services to come from green suppliers.
This is the end of my talk.
Thank you very much for your time and attention.
I would be happy to answer your questions
during the discussion part.
- Thank you very much, Yasel and Cagil.
This has been great.
As good logisticians, we are right on time,
which is also great, and it shows our commitment.
We have tons of questions.
So I'm going to try to go one by one.
Let's try to be agile and quick in answering,
so we can get to as many as we can.
And this is part of the idea of the webinar.
So Dr. Costa, from Sunita Ray:
she recalls your ant colony optimization
and the Python coding that you presented at MIT some years ago.
Thank you, Sunita, best regards.
Then: are there any more variants as popular as this one?
- Yes, yes.
Well, for the sake of simplicity and to save time,
I did not present here the progress we have made,
but we propose other variants that explore more areas
of the solution space.
So, to put it simply,
we have other variants that examine greater areas
of the solution space and provide better solution quality,
because someone else was asking whether that improved on CPLEX.
Of course it doesn't improve on CPLEX;
that is an exact solution method.
But it was very close.
In many instances it was very close to the absolute optimum,
and computational time was pretty much the same,
although you might think, okay, exploring more costs more time.
No.
So there have been a lot of improvements since that time.
Thank you for that question,
and I'm glad you recalled my talk at MIT.
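For readers curious about the mechanics behind this exchange, the following is a generic toy ant colony optimization for a tiny symmetric TSP, in the spirit of the Python material recalled above. It is not Dr. Costa's code, and the instance and parameters are purely illustrative.

```python
# Toy ant colony optimization (ACO) for a tiny symmetric TSP.
# Ants build tours probabilistically, biased by pheromone and inverse
# distance; good tours deposit more pheromone, which evaporates over
# time. Instance: 4 cities on a unit square, optimal perimeter tour = 4.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def aco_tsp(dist, n_ants=10, n_iters=50, evap=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # attractiveness = pheromone * inverse distance
                w = [tau[i][j] / dist[i][j] for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
        # evaporate, then reinforce proportionally to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - evap)
        for tour in tours:
            L = tour_length(tour, dist)
            if L < best_len:
                best, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

dist = [[0, 1, 1.414, 1],
        [1, 0, 1, 1.414],
        [1.414, 1, 0, 1],
        [1, 1.414, 1, 0]]
tour, length = aco_tsp(dist)
print(tour, length)
```

Variants that "explore more areas of the solution space", as mentioned in the answer, would typically add randomized restarts, local search, or tuned exploration weights on top of this basic loop.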
- So, from an anonymous attendee:
how can we prevent or filter bad data
that is fed to the AI?
What can we do to undo it?
An example of bad data could be a feedback loop.
So then who wants to answer this?
- I could go ahead and answer what I would do.
So basically, there are a lot of methods in AI
and machine learning that deal with noisy
and bad data, to robustify the solution
against such noisy or bad data.
One of the well-known methods is regularization,
or the use of robust optimization.
So there are methods available to prevent such cases.
A priori, I think it would be difficult
to say what is noisy or bad.
There are methods for doing that as well,
but even if you are not able to tell, as I said,
you could robustify your solution against certain noise.
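As a concrete toy example of the regularization idea, not tied to any specific dataset from the talk, ridge regression shrinks an estimate so that a single corrupted observation distorts it less than it would under plain least squares:

```python
# Sketch of how regularization tempers bad data: ridge regression
# (one feature, no intercept) adds a penalty lam * b^2, which shrinks
# the fitted slope. A corrupted observation then moves the estimate
# less than under ordinary least squares. Toy numbers only.

def slope(xs, ys, lam=0.0):
    """Fit y = b*x minimizing sum (y - b*x)^2 + lam * b^2."""
    return sum(x * y for x, y in zip(xs, ys)) \
        / (sum(x * x for x in xs) + lam)

xs = [1, 2, 3, 4, 5]
ys_clean = [2, 4, 6, 8, 10]    # true slope is 2
ys_bad = [2, 4, 6, 8, 50]      # last observation corrupted

print(slope(xs, ys_bad))            # OLS: pulled far above 2
print(slope(xs, ys_bad, lam=50.0))  # ridge: much closer to 2
```

Note the trade-off the answer alludes to: the penalty also biases the fit on clean data, so the strength of the regularization has to be chosen with the expected noise level in mind.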
- Yeah, thank you very much.
Another question, on Dell.
So I think this is for me:
how does pricing analytics interact
or align with this end-to-end value chain?
Will this happen during sales
and operations planning?
Then pricing analytics should be a component
of the end-to-end supply chain.
So if you have an order, at the end of the day,
the order should have a predicted price:
the price that is offered with the commitment.
And then also, looking forward,
in the future you can also predict
how the price could change.
So it's not purely a function of the supply chain;
it's more a function of marketing, commercial, et cetera.
But definitely, in order to measure the trade-offs with AI
for cost to serve, you need to input this,
because it could also create some kind
of distortion if the price is going to change based
on something unexpected, for example, commercial promotions.
So the forecasting should be able
to understand why this is changing.
It is maybe changing because of a kind of exogenous variable.
So, I don't know, maybe a competitor
under a certain disruption will change the price.
Then this is an exogenous factor due
to that exogenous disruption.
So the more you can grab information that is exogenous
to your supply chain,
let's say how the world is moving,
what is going on out there that is not directly
from the supply chain,
the better you can create predictive capabilities
with these exogenous factors that are coming
from all over the world.
It's a very general answer,
but you should input price information
into your equation,
because it's also a way to monitor cost-to-serve trade-offs.
But pricing is typically not a supply chain decision.
Okay, the next one then.
Julia Xiao: what is the difference between AI
and data science for supply chain, in your understanding?
Wow, this is good.
Yasel, do you want to answer that one?
- Well, these are overlapping fields, honestly.
I mean, data analytics,
or whatever you do in terms of knowledge discovery,
which according to my understanding
is the more comprehensive terminology,
knowledge discovery in general:
whether you're using a neural network
or some other kind of method,
naturally inspired or not,
you can use it in data analytics
for whatever kind of application context
in the field of supply chain management.
So maybe if you had asked this question ten years ago,
we would have said that data analytics clearly means
regression analysis,
the traditional, more mathematically oriented methods,
while AI methods
are more computationally oriented.
But nowadays it's hard to discriminate between them.
- Yeah, this is difficult.
This is why at the beginning we defined what we mean
by AI: how we deploy these cognitive functions,
especially learning.
Does data science learn?
Of course.
So then again, how do we discriminate between them?
This is why one single definition of AI does not work.
Sorry if it's not a yes-or-no kind of answer;
it depends on how you apply it.
Whatever you are doing,
if it is going to impact your performance,
and allows you
to learn, to transform your supply chain to be better,
or to test new business models,
then this is good.
Okay, next one.
Then, I think this is for me.
Amit Ray, thank you, Amit:
can you help us understand how AI helps
to improve the end-to-end visibility of Dell's system?
Traditionally, companies are using ERP
and other systems for creating visibility.
How can AI help further?
These are very good questions.
And ERP is playing a key role.
But what we have observed in the most successful cases
is that visibility
is much more than what you have internally in your ERP.
Much more than that.
I mean, advanced companies are using external signals:
not only the internal signals coming from your ERP,
coming from your, let's say, manufacturing operations,
but external signals coming from what is going on
in the world that can help contextualize your actions
for operations.
So contextualization
is another beautiful feature expected from AI.
Not only interpretability as we presented,
but also contextualization.
So, end-to-end visibility
is not only internal, within an ERP.
Let me give an example.
There are some startups
that are intensively collecting data,
using AI knowledge graphs and natural language processing,
about what is going on with suppliers all over the world.
So this is real-time information:
for example, what are your ESG scores,
your sustainability scores.
So then you can input this into your system,
into your internal system,
whatever the source may be: ERP or, I don't know,
a procurement tool.
And then you incorporate this information
on the current status of current suppliers,
or maybe future suppliers,
in order to decide what will be your best set of suppliers:
for example, if you are launching a new product,
a new business model,
or a new action in the market.
So then again, end-to-end visibility is much more than ERPs;
that is what we observe in the better companies.
For example, there is another question
about end-to-end visibility over there in the chat,
about how we could extract information
from the bottom line that maybe we don't track.
So there are some applications,
beautiful applications based on AI,
from another startup doing beautiful work.
For example,
they scan all the emails that you are writing
with natural language processing,
and they extract the key insights
from those emails in order to enrich visibility.
So then it's not purely the data that is structured
in your ERP or work management system,
or warehouse management system,
transportation management system, whatever;
you are also extracting the data
that is not structured.
You are extracting data from the decision makers,
from emails, for example,
to feed how to run a process
or how to standardize a process.
So again, the beauty of AI is that it can learn
from structured and from non-structured data.
This is the power:
you can transform all your decisions
and whatever is going on into the language of data.
And then again, for some companies it could be science fiction,
but for others it is a reality;
they are playing with these tools in order to create more
and more end-to-end visibility.
Okay, so let's go with the next one.
Hugo Arella, thanks, Hugo.
At the company where I am currently working,
we are going to implement new demand
and replenishment software
that already incorporates AI algorithms.
One of the challenges we face is that the maturity level
of the data is not what is expected for this type of software.
Welcome to the world.
It's the same everywhere, right?
How can we reconcile the company's need
to implement this type
of software with the low readiness of the data?
So who wants to answer this question more
about replenishment?
Yeah.
No?
- Well, I just want
to say that maybe this is somehow related to that one,
but there are many other questions that I went through,
and they were asking about applying, for instance,
the algorithm I proposed
to other application contexts, like inventory management
or the resource allocation that you mentioned, Cagil.
And of course, for whatever problem you can model as a network,
the traditional ant colony optimization, for instance,
can be used in that particular case too,
for replenishment as well.
Or there are even variants with continuous optimization
where you could easily apply that.
So maybe it's not related to that question,
but I went through the questions,
and maybe that will save time, Maria, in that regard.
- Yeah.
- Sorry, maybe I could answer the question a little bit.
The data is very important, I think, in the case of AI,
but there are also AI models that generate data
from limited data.
So that could be one solution,
though I'm not immediately sure it would apply
to the particular case here.
But you have probably seen that,
for example, Google tools can generate photos of people
or dogs and cats that are not real photos;
they just learn from the photos fed to the models,
and they generate similar photos.
So similar approaches could be possible
in the case of limited data as well.
Maybe through some simulation, too,
you could generate more data that could be useful.
- Synthetic data, yeah.
- Yeah.
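A minimal sketch of that synthetic-data idea, assuming a simple Gaussian fit to a made-up sample; real generative models such as GANs or VAEs are far richer, but the principle is the same: learn a distribution from the data you have, then sample from it.

```python
# Synthetic data generation from limited data, in its simplest form:
# fit a Gaussian to a small sample of (hypothetical) daily demand,
# then draw extra synthetic observations from the fitted distribution.
import random
import statistics

observed = [98, 104, 101, 97, 106, 100]   # only six real data points

mu = statistics.mean(observed)            # fitted mean
sigma = statistics.stdev(observed)        # fitted standard deviation

rng = random.Random(42)                   # fixed seed for repeatability
synthetic = [rng.gauss(mu, sigma) for _ in range(1000)]

print(round(statistics.mean(synthetic), 1))   # close to the real mean
```

The usual caveat applies: synthetic samples can only reproduce patterns already present in the limited data, so they extend the quantity of data, not its information content.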
- In that regard, it's also related to other questions.
Don't forget the part which is also popular right now,
particularly within the Iranian community:
possibilistic distributions.
With data scarcity and some judgmental opinions,
you can actually develop something
which is called fuzzy inference systems,
to translate judgmental opinions
and vague or scarce information
into numeric ranges which you can use
to work with subsequently.
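To make the fuzzy idea tangible, here is a small illustrative sketch; the shapes and numbers are assumptions, not from the talk. A triangular membership function encodes a judgmental opinion like "demand is roughly 100, somewhere between 80 and 120", and centroid defuzzification turns it back into a single crisp number.

```python
# Tiny fuzzy-inference building block: a triangular membership function
# maps a judgmental range to degrees of membership in [0, 1], and
# centroid defuzzification converts memberships back to a crisp value.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# "Demand is roughly medium": peak at 100, support [80, 120]
xs = range(80, 121)
memberships = [tri(x, 80, 100, 120) for x in xs]

# centroid defuzzification -> crisp numeric estimate
crisp = sum(x * m for x, m in zip(xs, memberships)) / sum(memberships)
print(round(crisp, 1))
```

A full fuzzy inference system would combine several such membership functions with if-then rules before defuzzifying, but the translation step from words to numbers is exactly this.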
- Good, yeah.
From Mernam Baske, a question about demand forecasting:
is there a percentage range of improvement
that we can achieve from using a demand forecasting system
that applies AI,
compared to another system that doesn't apply AI?
I will start answering this,
because we have been doing a lot
of work on AI/ML demand forecasting.
My recommendation is that you contextualize
and customize the way you do
your demand forecasting.
Just plugging in an available off-the-shelf model
could be good, but try to do some kind of customization
for what you need to do.
It's not only the software that you can bring in from a vendor;
it's how you include your features,
the behaviors, not only from the data but also, for example,
the exogenous factors that could affect your demand forecast.
And it's true that the more sophisticated AI/ML models
do not always provide better results
in demand forecasting.
There are several studies showing
that in certain contexts, traditional demand forecasting,
with the right setting,
of course, could bring very good results.
But I think you need to work a lot
on contextualizing, to better input your context
and your expectations.
In the case of Dell, for example,
commitment was very important.
So they were doing demand forecasting
and also lead-time forecasting, right?
And then they were actually playing the two together
in certain contexts.
So this shows that demand forecasting
could be richer if you input
and align even more features
that can affect demand,
and also if you go upstream to other effects
that could create uncertainty in your demand realization.
So at the end of the day, there is not one single recipe,
and I think there is no single answer.
We should not rely on an answer that says,
"Oh, you can increase 5.5%
if you apply this particular demand forecasting model
versus that other one."
I think that's dangerous.
So what we have been doing in our lab is
creating automatic systems that test different
kinds of AI/ML models,
and then compare and contrast them
in order to learn not only how each model can best adapt,
but also which is the best model
for the different circumstances and contexts
that you want to predict.
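A bare-bones sketch of such an automatic comparison harness follows; the two "models" and the demand series are deliberately simple toys, standing in for whatever AI/ML models a lab would actually benchmark.

```python
# Automatically backtest several forecasting models on the same series
# and compare their errors, instead of trusting a promised improvement
# figure. Each model maps a history to a one-step-ahead forecast.

def naive(history):
    return history[-1]                      # tomorrow = today

def moving_avg(history, k=3):
    return sum(history[-k:]) / k            # average of last k points

def backtest(series, model, warmup=3):
    """Mean absolute error of one-step-ahead forecasts."""
    errs = [abs(model(series[:t]) - series[t])
            for t in range(warmup, len(series))]
    return sum(errs) / len(errs)

demand = [100, 102, 98, 101, 99, 103, 100, 102, 97, 101]

scores = {name: backtest(demand, m)
          for name, m in [("naive", naive), ("moving_avg", moving_avg)]}
best = min(scores, key=scores.get)
print(scores, "->", best)
```

Swapping in real candidate models and real series is what turns this loop into the kind of automatic, context-by-context comparison described above.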
Any input here from any of you?
No?
Okay, so thank you, Ernan, best regards.
Feel free to jump in if you see a question
that you feel comfortable answering,
because I am just following the queue.
But Yasel, since you are also reading:
Yasel, any question you want to answer?
- Well, there are some also linked
to the part you were mentioning.
It's hard to generalize, to say,
"Okay, every time you use a random forest
for estimating demand, customer demand,
it always improves on traditional approaches by this amount."
So it's very hard to generalize.
What I do know is that under certain circumstances,
there's no way to beat a neural network, for instance.
That has been formalized.
That has certainly been formalized.
There is a huge variety of application contexts.
So I think if someone made a book saying,
under these circumstances,
here is a ranking of those AI-based methods
from the best performance to the worst performance,
that would be a very nice
set of knowledge to put out there.
But it's hard to generalize.
It's very hard to generalize.
So I don't see any other question here
from my side. Okay.
- Yeah, I think it's hard,
and I would say even dangerous, to expect to generalize.
Every context is different,
because your expectations are different
and your business runs in a different way.
So be ready to put effort into contextualizing.
Cagil, any question and answer from you?
- I see a couple of questions asked of me.
One interesting one
is about the ethical concerns associated with AI,
talking about race, for example:
would it be sufficient
to eliminate the corresponding information from the data,
ensuring that the AI doesn't use this information?
This is a good question, actually,
because I feel like some people have this perception,
but it is not necessarily true.
Imagine that you remove race
from your data completely.
There may be some other information
that is highly correlated with race itself.
So it doesn't guarantee that your AI won't be using race
to make decisions.
So this is not sufficient.
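A small numeric illustration of that proxy effect, with entirely made-up data: even after the protected attribute is dropped, a remaining feature that is highly correlated with it (here a hypothetical zip-code indicator) still carries most of the protected information.

```python
# Dropping a protected column does not remove its influence if a
# correlated "proxy" feature remains. Here a made-up zip-code flag
# mirrors the protected attribute in 9 of 10 records, so a model
# trained without the protected column can still reconstruct it.

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1 = protected group, 0 = otherwise; zip_flag is the proxy feature
protected = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
zip_flag  = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]   # matches 9 of 10 records

print(round(corr(protected, zip_flag), 2))   # high correlation
```

A correlation this strong means that fairness has to be enforced on the model's decisions themselves, as in the adjusted opportunity costs discussed earlier, rather than by deleting columns.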
- Okay, I think this is great.
So again, thank you everybody for your time,
for being with us during this one hour.
Especially, thank you to Yasel
and Cagil for your very insightful contributions.
Thank you also to the Marketing and Communication Team
of MIT for being with us
and helping to support this.
And then at 11:00 AM today
we have another event, for our MicroMasters community.
But you are all invited, okay?
Thank you, have a good, beautiful day, bye.
- Thank you, bye.
- Thank you, bye-bye, guys.