AI for Business: #5 How to do AI Experiments?

Omar Maher
17 Apr 2024 · 24:14

Summary

TLDR: This episode of the AI for Business course delves into the importance of conducting proof of concept (PoC) experiments for AI projects. It outlines the necessity of testing AI's viability, gathering stakeholder feedback, and planning for production. It walks through 10 key elements for a successful PoC, using an industrial visual inspection case study, and emphasizes setting clear success criteria, managing data, selecting tools, and assessing the outcomes to inform production planning, ultimately aiming to minimize risk and investment before full-scale implementation.

Takeaways

  • 🧐 Proof of Concept (PoC) is essential for AI projects to manage uncertainty and assess the feasibility of an idea before significant investment.
  • 🔍 PoCs serve three main purposes: testing the AI approach, gathering early feedback from stakeholders, and aiding in production planning.
  • 📝 Planning a PoC involves considering 10 key elements: problem definition, hypothesis, scope, success criteria, data, modeling and tools, infrastructure, deliverables, team, and timeline.
  • 🔑 Success criteria for a PoC should be clear and realistic, focusing on improvements over the current state, such as quality, speed, or cost efficiency.
  • 📈 Data is crucial for PoCs, requiring a well-defined dataset to train and evaluate the AI model, with annotations where necessary.
  • 🛠️ Modeling and tools define the techniques and technologies used in the PoC, which may include cloud AI services or custom deep learning frameworks.
  • 💻 Infrastructure needs for a PoC might include powerful hardware for model training, especially when using deep learning frameworks like PyTorch.
  • 📑 Deliverables from a PoC should include code, documentation, knowledge transfer, a prototype application, and recommendations for scaling to production.
  • 👥 A balanced team for a PoC should consist of data scientists, domain experts, and project managers to cover AI, data engineering, and domain knowledge aspects.
  • ⏱ The duration of a PoC should be concise, typically ranging from 3 to 8 weeks, to quickly test the idea with minimal investment.
  • 🤔 Post-PoC critical questions include assessing if AI is the right solution, expected accuracy boost, reasons for subpar results, and whether to use off-the-shelf models or build from scratch for production.

Q & A

  • What is the primary purpose of a Proof of Concept (PoC) in AI projects?

    -The primary purpose of a PoC in AI projects is to test the feasibility of an AI solution, gather early feedback from stakeholders, and assist in production planning by providing insights into the complexity, time, effort, and cost involved in building a full-fledged production system.

  • Why are PoCs especially critical for AI projects compared to other types of projects?

    -PoCs are especially critical for AI projects due to the inherent uncertainty and the need to validate whether AI is the right approach to solving the problem, the availability of required data and skills, and to ensure the outcome serves the business needs effectively.

  • What are the three main reasons for conducting a PoC?

    -The three main reasons for conducting a PoC are testing to answer essential questions quickly, feedback to ensure the solution meets stakeholders' needs and increases adoption, and production planning to understand the resources required for a full-scale implementation.

  • Can you explain the importance of setting clear success criteria for a PoC?

    -Setting clear success criteria for a PoC is essential as it provides measurable goals to assess the PoC's effectiveness against the current state of affairs. It helps determine if the PoC has shown significant improvement in terms of quality, speed, cost, or other business metrics.

  • What should be the focus of the scope when planning an AI PoC?

    -The scope of an AI PoC should be limited and focused on building the model rather than a full production system. It should concentrate on a specific subset of products, geography, or a narrowed-down problem to limit variables and complexity, making the project more manageable and easier to test.

  • Why is it important to have a team with diverse skills for a PoC?

    -A diverse team is important for a PoC as it ensures coverage of AI and data engineering, domain knowledge, and project management aspects. This interdisciplinary approach helps in effectively addressing the technical and practical challenges that may arise during the PoC.

  • What are some examples of deliverables one might expect from a PoC?

    -Deliverables from a PoC may include a trained AI model, a data pipeline for preprocessing and transforming data, a code repository with source code, a detailed report on model development and evaluation, a prototype application for testing the model, and recommendations for scaling to a production system.

  • How long should a typical PoC take to complete?

    -A typical PoC should ideally take between 3 to 8 weeks, focusing on testing the idea quickly with minimal investment rather than spending excessive time on it.

  • What are some critical questions to ask after completing a PoC?

    -Critical questions post-PoC include assessing if AI is the right solution for the problem, estimating the expected accuracy boost for a production system, identifying reasons for suboptimal results and deciding on further investment, choosing between off-the-shelf models or building from scratch, and estimating the total expected cost for a production system.

  • What is the significance of the infrastructure element in planning an AI PoC?

    -The infrastructure element defines the hardware, storage, and compute resources required for the PoC. It's crucial for training the model and, if necessary, for deploying the model in a prototype application. The choice between cloud services and on-premise solutions will impact the infrastructure needs.

  • How does the outcome of a PoC help in planning for a production system?

    -The outcome of a PoC provides insights into the feasibility, potential challenges, and the expected performance of the AI solution. It helps in making informed decisions about the resources, budget, and timeline required for a production system, and in building a business case for the investment.

Outlines

00:00

🚀 Introduction to AI Proof of Concept

The first paragraph introduces the concept of a Proof of Concept (PoC) in AI projects, emphasizing its importance due to the inherent uncertainty in AI endeavors. It sets the stage for the AI for Business course's fifth episode, which focuses on the initial phase of bringing AI ideas to life through PoC experiments. The paragraph outlines the necessity of PoCs for testing AI feasibility, gathering early feedback, and aiding in production planning. It also introduces the 10 elements to consider when planning an AI PoC, using the example of automating industrial visual inspection with deep learning.

05:01

πŸ” PoC Planning: Elements and Execution

This paragraph delves into the specifics of planning an AI PoC, discussing the 10 elements in detail. It explains the significance of defining a problem hypothesis, setting limited scope, establishing success criteria, and gathering data. The paragraph further elaborates on the choice of modeling techniques and tools, infrastructure needs, deliverables, team composition, and the timeline for the PoC. Using the industrial visual inspection example, it illustrates how to apply these elements to develop a computer vision model to detect defects in products.

10:02

📈 Assessing PoC Outcomes and Next Steps

The third paragraph focuses on evaluating the outcomes of a PoC and determining the subsequent steps. It highlights the importance of understanding whether AI is the optimal solution, estimating the potential accuracy improvement for a production system, and identifying reasons for subpar PoC results. The paragraph also addresses the decision-making process regarding whether to conduct another round of PoC or to move to production, considering factors like data sufficiency, team skills, and problem relevance.

15:04

🛠 Decision Making Post-PoC: Tools and Strategies

This paragraph discusses the critical decisions to be made after a PoC, such as whether to use off-the-shelf models or build a system from scratch. It explores the advantages and disadvantages of leveraging existing AI services versus custom development. The paragraph also emphasizes the importance of assessing the total expected cost for a production system based on the PoC findings, including infrastructure, software engineering, and ongoing maintenance. It guides on building a business case for the production system by evaluating ROI and making informed decisions.

20:05

🌟 Wrapping Up and Future Outlook

The final paragraph wraps up the discussion on PoCs, summarizing their role in validating AI ideas and planning for production systems. It previews the next episode's topic, which will cover building AI capacity, including team building, skills assessment, and the considerations of in-house development versus outsourcing. The paragraph concludes with an invitation to share the episode and an acknowledgment of the audience's time, promising further insights in the upcoming episodes.

Keywords

💡 Proof of Concept (PoC)

A Proof of Concept (PoC) is an initial demonstration of a concept or idea to evaluate its feasibility and potential value. In the context of the video, a PoC is critical for AI projects as it allows businesses to test whether AI is the right approach to a problem, gather early feedback, and plan for production. The script mentions that PoCs are especially important for AI due to their inherent uncertainty and the need to minimize risk before significant investment.

💡 AI Projects

AI Projects refer to initiatives that involve the application of artificial intelligence to solve complex problems or improve business processes. The video discusses the challenges of initiating AI projects due to their uncertain nature and how a PoC can be a valuable first step in the project lifecycle, as seen in the exploration of automating industrial visual inspection with AI.

💡 Uncertainty

Uncertainty in the video script refers to the unpredictability and risk associated with AI projects, which can make it difficult to determine the value of pursuing a particular AI idea. The concept is integral to the discussion on why PoCs are necessary, as they help mitigate this uncertainty by providing a low-risk way to test AI hypotheses.

💡 Stakeholders

Stakeholders are individuals or groups who have an interest in the outcome of a project. In the video, the importance of getting early feedback from stakeholders is emphasized to ensure the AI solution meets their needs and to increase their buy-in and adoption of the solution.

💡 Production Planning

Production planning involves the process of preparing for the full-scale implementation of a project or system. The script discusses how PoCs aid in production planning by providing insights into the complexity, time, effort, and cost involved in building a full-fledged production system, which is essential for businesses looking to implement AI capabilities.

💡 Deep Learning

Deep Learning is a subset of machine learning that uses artificial neural networks with multiple layers to model and solve complex problems. The video script uses the example of automating industrial visual inspection, where deep learning is employed to develop a model capable of detecting defects in products or parts.

💡 Computer Vision

Computer Vision is a field of AI that enables computers to interpret and understand visual information from the world, such as images and videos. In the script, a PoC for computer vision is used to develop a deep learning model to detect defects in industrial products, illustrating the application of this technology in real-world scenarios.

💡 Success Criteria

Success criteria are the predefined metrics or conditions that determine whether a project or experiment has achieved its objectives. The video emphasizes the importance of setting clear success criteria for a PoC, such as improving quality, speed, or cost, and provides an example where the criteria include a 70% accuracy rate and the ability to inspect 100 images in under a minute.
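As a rough illustration of how such criteria can be checked, here is a minimal Python sketch that scores a toy set of predictions against the 70% precision and 80% recall targets mentioned in the video. The labels and predictions below are invented for demonstration, not data from the episode.

```python
# Hedged sketch: evaluating a PoC against predefined success criteria.
# Labels, predictions, and the evaluation set are illustrative only.

def precision_recall(y_true, y_pred, positive="defect"):
    """Compute precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy evaluation set: ground-truth labels vs. model predictions.
y_true = ["defect", "ok", "defect", "defect", "ok", "defect", "ok", "ok"]
y_pred = ["defect", "ok", "defect", "ok",     "ok", "defect", "defect", "ok"]

precision, recall = precision_recall(y_true, y_pred)
# PoC-stage thresholds from the episode: 70% precision, 80% recall.
meets_criteria = precision >= 0.70 and recall >= 0.80
```

On this toy data the model hits 75% precision and 75% recall, so it clears the precision bar but misses the recall target, which is exactly the kind of gap a PoC review should surface.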

💡 Data Sets

Data sets are collections of data that are used for training and testing AI models. The script mentions the use of a data set of 5,000 annotated images for the PoC in industrial visual inspection, highlighting the importance of having a robust and relevant data set for the success of an AI project.
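The episode does not specify the annotation format for the 5,000-image dataset, but a simple manifest with one label per image is a common choice. The sketch below assumes a hypothetical CSV layout purely for illustration.

```python
import csv
import io

# Hypothetical annotation manifest for a visual-inspection dataset.
# The filenames and the "filename,label" layout are assumptions.
manifest = io.StringIO("""filename,label
part_0001.jpg,defect
part_0002.jpg,ok
part_0003.jpg,defect
""")

rows = list(csv.DictReader(manifest))

# Count labels to check class balance before training.
counts = {}
for row in rows:
    counts[row["label"]] = counts.get(row["label"], 0) + 1
# counts -> {'defect': 2, 'ok': 1}
```

A quick class count like this is a cheap sanity check that the dataset actually contains enough defect examples before any modeling starts.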

💡 Infrastructure

Infrastructure in the context of the video refers to the hardware, storage, and compute resources required to support the development and operation of an AI model. The script discusses the need for infrastructure when using cloud AI services like Amazon Lookout for Vision or when training custom models with frameworks like PyTorch.

💡 ROI (Return on Investment)

ROI is a measure used to evaluate the efficiency of an investment or compare the profitability of different investments. The video script discusses assessing the ROI of AI projects after a PoC to determine if the potential benefits justify the costs, emphasizing the importance of understanding the financial implications of pursuing AI solutions.
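A back-of-the-envelope ROI check can be sketched as follows; all dollar figures are made-up placeholders, not numbers from the video.

```python
# Illustrative ROI calculation for a proposed AI production system.
# Every figure here is a placeholder assumption.

def roi(annual_benefit, annual_cost, upfront_cost, years=3):
    """Simple ROI over a horizon: (total gain - total cost) / total cost."""
    total_gain = annual_benefit * years
    total_cost = upfront_cost + annual_cost * years
    return (total_gain - total_cost) / total_cost

# e.g. $300k/year saved inspection labor, $60k/year run cost,
# $150k to build, evaluated over 3 years.
r = roi(annual_benefit=300_000, annual_cost=60_000, upfront_cost=150_000)
```

With these placeholder inputs the three-year ROI works out to roughly 173%, the kind of figure a business case built on PoC findings would present.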

💡 ETL (Extract, Transform, Load)

ETL refers to the process of extracting data from various sources, transforming it to fit operational needs, and loading it into a target system for further use. In the script, ETL is mentioned as a component of production planning, where it is necessary to automate the flow of data into and out of the AI model within a production system.
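A minimal illustration of the extract-transform-load pattern is sketched below; the record fields and label values are invented, not taken from the episode's system.

```python
# Minimal ETL sketch: extract inspection records, transform them into
# model-ready rows, load into a target list standing in for a database
# table. Field names ("id", "image", "result") are assumptions.

raw_records = [  # extract: e.g. records pulled from a source system
    {"id": 1, "image": "part_0001.jpg", "result": "DEFECT "},
    {"id": 2, "image": "part_0002.jpg", "result": " ok"},
]

def transform(record):
    # Normalize the label so the model sees consistent classes.
    return {"image": record["image"],
            "label": record["result"].strip().lower()}

target_table = []  # load: stand-in for the production data store
for rec in raw_records:
    target_table.append(transform(rec))
```

In a real production system the same extract/transform/load steps would be automated so new inspection images flow into the model, and its predictions flow back out, without manual handling.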

Highlights

AI projects often have uncertainty, making it challenging to determine the viability of ideas without significant investment.

Proof of concept (POC) experiments can help mitigate risks before substantial investments in AI projects.

Proof of concept experiments are critical for testing, obtaining feedback, and planning production.

Testing during a POC answers essential questions quickly, such as the suitability of AI for the problem and data requirements.

Early feedback from stakeholders during a POC ensures the AI solution meets their needs and increases adoption.

POCs aid in production planning by providing insights into complexity, time, effort, and cost for building full-fledged systems.

Ten elements to consider in an AI POC include the problem, hypothesis, scope, success criteria, data, modeling and tools, infrastructure, deliverables, team, and time.

Setting clear success criteria is crucial for a POC, focusing on significant improvement over the current state in terms of quality, speed, cost, or other metrics.

Realistic target model accuracy for a POC should be set lower than the desired production system accuracy to assess feasibility quickly.

Data required for the POC includes specific datasets with annotations, such as 5,000 images of industrial products with defect labels.

POCs can utilize cloud AI services, such as Amazon Lookout for Vision, or custom model training using frameworks like PyTorch.

The deliverables from a POC should include the trained model, code, documentation, knowledge transfer, and recommendations for scaling to production.

It is important to assess whether AI is the right solution for the problem, as traditional techniques might sometimes be more efficient.

Evaluating the expected accuracy boost from POC to production helps decide if further investment is justified.

Identifying reasons for poor POC results (insufficient data, skills, or working on the wrong problem) is crucial before deciding on further rounds or moving to production.

Deciding between off-the-shelf models and custom-built systems for production depends on the flexibility and control needed versus technical skill and time required.

The total expected cost for a production system should be estimated based on POC insights, including data pipelines, software engineering, infrastructure, and ongoing maintenance.

Transcripts

play00:00

AI projects by Design can have a lot of

play00:02

uncertainty which makes it really

play00:04

challenging to determine whether a

play00:06

certain idea is worth pursuing or not

play00:09

however there is a way to de-risk AI ideas

play00:12

before making significant Investments

play00:14

doing a proof of concept or a POC for

play00:17

short welcome to the fifth episode of

play00:19

the AI for business course your ultimate

play00:22

non-technical intro to the world of AI

play00:24

and how to apply it in the real world in

play00:26

previous episodes we explored the

play00:28

fundamentals of AI looked at over 80

play00:31

different machine learning use cases

play00:32

across various domains and learned how

play00:34

to select our first AI projects today

play00:38

we'll Kickstart the initial phase of

play00:40

bringing these ideas to life through

play00:42

proof of concept

play00:47

experiments we'll be exploring many

play00:50

aspects including why POCs are especially

play00:52

critical for AI projects how to plan and

play00:55

execute them and set the right success

play00:57

criteria we'll explore different POC

play01:00

plans for real life use cases and most

play01:03

importantly how to use the outcome to

play01:05

plan for production so with that said

play01:07

let's Dive Right In so why do we need

play01:09

POCs three main reasons testing

play01:12

feedback and production

play01:15

planning the first reason testing they

play01:18

allow you to answer essential questions

play01:20

quickly such as whether AI is the right

play01:22

approach to solving the problem if you

play01:25

have the required data and skills

play01:26

whether the outcome is serving the

play01:28

business and how much it will cost for

play01:30

our production system feedback getting

play01:32

early feedback from stakeholders who

play01:34

will be using the AI solution is

play01:35

essential this ensures that the solution

play01:38

is addressing their needs and

play01:39

requirements increasing their Buy in and

play01:41

Adoption of the solution the third

play01:43

reason is production planning POCs help

play01:46

with production Planning by giving a

play01:48

sense of the complexity time effort and

play01:50

cost involved in building a full-fledged

play01:52

production system this is crucial

play01:54

information for any business looking to

play01:56

implement AI as a real production

play01:58

capability not just as a lab experiment

play02:01

there are 10 elements to take into

play02:03

consideration when planning an AI proof

play02:05

of concept those are the problem

play02:09

hypothesis scope success criteria data

play02:13

modeling and tools infrastructure

play02:16

deliverables team and finally

play02:19

time let's explore each element by using

play02:22

an example use case automating

play02:25

industrial visual inspection in this use

play02:27

case we'll be using deep learning to

play02:29

automate the detection of defects in

play02:32

products or parts instead of heavily

play02:34

relying on manual inspection by subject

play02:36

matter experts let's start with a

play02:38

problem which should be a pain to solve

play02:40

or a value to bring in this example the

play02:43

problem is the manual inspection of

play02:45

Industrial Products which is

play02:47

time-consuming resource intensive and leads

play02:49

to increased costs and reduced

play02:51

efficiency the hypothesis is how you

play02:54

think the POC will contribute to the

play02:56

solution in this case AI can help

play02:58

automate the inspection process

play03:00

by detecting defects in Industrial

play03:02

Products leading to faster inspections

play03:04

and reduced inspection costs the

play03:06

hypothesis is that AI will enable the

play03:08

team to conduct more inspections faster

play03:11

and free up the time of workers

play03:13

ultimately reducing the number of

play03:14

inspectors

play03:16

needed scope defines the nature of work

play03:18

to be done it should be limited as much

play03:21

as possible and really focused on

play03:22

building the model not a full production

play03:25

system examples of limited scope include

play03:28

focusing on a certain subset of products

play03:30

instead of every product that you

play03:32

have scoping the POC to a certain

play03:35

geography or a subset of problems

play03:37

instead of the big problem you're trying

play03:39

to solve and so on by narrowing down the

play03:42

scope of a POC you can really limit the

play03:44

amount of variables and complexity of

play03:47

the overall project which would make it

play03:48

easier to manage and

play03:51

test in this example the POC will focus

play03:53

on developing a computer vision deep

play03:55

learning model to detect four types of

play03:57

defects in images of industrial products the

play04:01

model will be trained on a data set of

play04:03

5,000 annotated images and the evaluation

play04:06

will be done on a test data set and new

play04:08

images if you'd like to know more about

play04:11

training or label data sets for AI

play04:13

models I recommend you watch the first

play04:16

episode of this course setting clear

play04:19

success criteria is essential for a

play04:21

successful POC the experiment needs to

play04:23

show a significant improvement over the

play04:25

current state of things whether this is

play04:27

going to be measured in terms of quality

play04:29

speed cost or something else success

play04:32

criteria could be based on specific

play04:33

business metrics such as for example

play04:36

higher click-through rate for a use case

play04:38

like personalized recommendations as

play04:40

well as the model's accuracy which could

play04:42

be expressed in different metrics such

play04:44

as Precision recall intersection over

play04:47

union mean average Precision RMSE or

play04:51

something else depending on the model

play04:53

type and the specific problem you're

play04:54

trying to solve it's important to set

play04:56

realistic Target Model accuracy for the

play04:58

proof of concept stage while high

play05:01

accuracy is generally desirable for a

play05:03

production system that's not really the

play05:04

focus of this stage let's say for your

play05:07

real production system you would love to

play05:09

have like 90 95% model accuracy for a

play05:13

proof of concept you can Target

play05:14

something like 75% accuracy because it's

play05:18

understood that over time with more data

play05:21

and better modeling you're going to

play05:23

reach that 95% right but for the sake of

play05:26

the experiment that you'd like to run

play05:27

real quick to assess the feasibility you

play05:29

might want to set lower accuracy

play05:33

objective in our visual inspection

play05:35

example the success criteria include a

play05:37

70% Precision and an 80% recall

play05:42

and the model's ability to inspect 100

play05:44

images in under 1 minute The Fifth

play05:46

Element is data this is basically the

play05:48

data sets required to do the POC in this

play05:50

example the data sets include 5,000

play05:53

images of Industrial Products with

play05:55

annotations indicating the presence or

play05:57

absence of defects modeling and tools

play06:00

Define the modeling technique and any

play06:02

tools services or libraries expected to

play06:04

be used in this example the POC will use

play06:07

Amazon Lookout for Vision as the primary

play06:09

Cloud AI service optimized especially

play06:11

for defect detection however if the

play06:14

accuracy is not satisfactory the POC may

play06:17

also explore using Amazon Rekognition or

play06:20

even custom model training using a deep

play06:22

learning framework like PyTorch the

play06:24

infrastructure element defines the

play06:26

hardware storage and compute required

play06:29

for the POC in this example when using

play06:32

AWS Lookout for Vision no dedicated

play06:34

infrastructure is needed users can

play06:36

leverage the service directly everything

play06:39

from training to inference is managed

play06:40

within the service itself however if

play06:42

we're going to train a custom model

play06:44

using a deep learning framework for

play06:45

example like PyTorch a powerful on

play06:48

premise or a cloud virtual machine will

play06:51

be required this machine should have

play06:53

enough processing power memory and

play06:56

storage to handle the size and

play06:58

complexity of the data set and the

play07:00

training process of the model for

play07:02

example if you're training a deep

play07:03

learning model you're going to need the

play07:04

machine with a powerful GPU the

play07:07

infrastructure will also need to provide

play07:10

a way to deploy the model in case a

play07:12

prototype application will need to

play07:14

interface with it one example of a

play07:17

virtual machine from Amazon is the

play07:18

Amazon EC2 p3.2xlarge instance which is

play07:23

optimized for high performance Computing

play07:25

workloads such as machine learning this

play07:28

instance type includes a single

play07:30

Nvidia V100 GPU which provides

play07:33

significant acceleration for compute

play07:35

intensive training tasks in addition it

play07:37

features 8 vCPUs 61 GB of memory and

play07:42

1.5 TB of NVMe SSD storage we can

play07:47

consider this or a similar on-prem

play07:49

machine if we're going to adopt a custom

play07:51

model development methodology versus a

play07:54

cloud-hosted AI service like Amazon

play07:57

Rekognition or Lookout for Vision the

play07:59

deliverable section highlights the

play08:01

specific outputs that you're going to

play08:03

get from the POC beyond the model itself

play08:06

so it's understood that you're going to

play08:07

get a trained model ideally with a good

play08:09

accuracy perfect but what exactly do you

play08:12

need beyond that so that you can

play08:14

continue to build on top of that POC and

play08:17

build a production system ideally here

play08:19

are some things that are typically

play08:21

received after the POC is done one code

play08:25

you need the code files for all the

play08:27

stages of the POC whether it's sourcing

play08:29

the data cleaning it processing it you

play08:32

know any feature engineering work that

play08:34

was done in case you're using classical

play08:36

machine learning algorithms and so forth

play08:39

so you need those source code files so

play08:41

that your team or other you know

play08:43

Consultants can eventually build on top

play08:46

of that work two documentation you need

play08:49

documentation for all the pieces in the

play08:51

proof of concept how do they talk to

play08:53

each other and stuff like that and you

play08:55

need a summary of the outcome and how

play08:58

that outcome was achieved

play09:00

number three you probably need some sort

play09:02

of knowledge transfer in many cases you

play09:05

have some consultant or an AI company

play09:07

helping you do a prototype or a proof of

play09:09

concept and then you have your own data

play09:10

science team right if you would like

play09:13

your team to carry on the work you need

play09:15

some sort of knowledge transfer from

play09:16

that consultant or external company to

play09:19

your team in case you don't have a team

play09:22

that's probably not going to be required

play09:23

if the same company is going to still

play09:25

build a production system you might also

play09:27

need a prototype application that

play09:28

interfaces with the model so here's the

play09:31

thing once you are done with the POC you

play09:33

would like to test that you know maybe

play09:34

with new data see how it works there are

play09:36

different ways that you can test that

play09:38

pilot right whether you have a data

play09:40

science team or someone who understands

play09:42

python so you can just run scripts in a

play09:44

notebook for example or in a python

play09:47

environment right or if you're not able

play09:49

to do that or you need an easier way to

play09:51

interface and test the model you're

play09:53

probably going to need some sort of a

play09:54

web interface or a mobile application or

play09:56

a desktop application some sort of

play10:00

um a graphical user interface that you

play10:02

know can integrate the model and you can

play10:04

use it as a way to test it and you know

play10:06

provide it with some data and get some

play10:08

output for example so a prototype

play10:09

application might be required and then

play10:12

finally and the most important thing

play10:13

here some sort of recommendations on

play10:16

what to do to scale that pilot or

play10:19

experiment to our production system this

play10:21

is one of the most important things of

play10:22

our proof of concept that you learn a

play10:23

lot in that experiment and you know the

play10:26

kind of recommendations on what to do

play10:27

next are really critical to be provided

play10:30

so to summarize you need code resources

play10:33

documentation of the system and a

play10:34

summary of the results some sort of

play10:36

knowledge transfer if it's required a

play10:39

prototype application if you require

play10:41

some sort of a dedicated app to

play10:43

interface with the model and test it and

play10:45

finally some recommendations on what to

play10:47

do next and things to consider if you

play10:49

would like to scale that to our

play10:50

production system in our example for the

play10:52

industrial visual inspection use case

play10:55

the primary deliverable is a computer

play10:57

vision model a convolutional neural network

play11:00

in that case that can accurately detect

play11:02

defects in Industrial Products

play11:04

deliverables will also include the

play11:05

following a data pipeline that can

play11:08

efficiently pre-process and transform

play11:10

raw images into a format suitable for

play11:13

training the model the data pipeline can

play11:15

also be used to update the model with

play11:17

new data in the future a code repository

play11:20

that contains the source code for the

play11:21

data pipeline model training and

play11:23

evaluation scripts and any other tools

play11:25

developed during the POC a report

play11:28

detailing the development and evaluation of

play11:30

the model including a description of the

play11:32

data set used for training the model

play11:34

architecture and hyperparameters and the

play11:36

evaluation metrics the report should

play11:39

also include a discussion of any

play11:41

challenges or limitations encountered

play11:43

during the POC and recommendations for

play11:45

further

play11:46

improvements the POC will also result in

play11:48

a prototype application in our case a

play11:51

web- based interface for uploading

play11:53

images and receiving defect detection

play11:55

results or an integration with an

play11:58

existing inspection system

play11:59

The team should ideally include resources covering the AI and data engineering piece, the domain knowledge piece, and the project management aspect. In our example use case, the team consists of a data scientist, an inspection engineer, and a project manager. Finally, the PoC should ideally take somewhere between three and eight weeks; again, the objective is to test the idea quickly with minimal investment rather than spending a long time on it. In our visual inspection use case, we're going to take about a month and a half for the proof of concept.

I highly recommend you include these 10 elements when planning your proof of concept experiments, whether you're doing the work in-house or outsourcing it to a contractor; they will help you structure the planning of your proof of concept. For better familiarity with this tool, I have included four example PoC plans, one of which is a generative AI use case, in the video description. Check those out: you'll find detailed proof of concept plans for these different use cases. Use them as examples to understand the methodology better and apply it in your future experiments.

play13:08

Throughout my work with more than 100 customers, many of whom I have helped pursue successful PoCs, I've found there are some critical questions to address after the PoC stage. I'm going to go through these questions one at a time and offer some reflections on how to think about each.

The first question: is AI the right solution to this problem or not? AI is amazing, it is powerful, and it can solve many problems, but sometimes it's simply not the right solution; more traditional techniques can solve the problem more efficiently. After trying machine learning techniques in the proof of concept, you need to honestly assess the ROI. Is it significant? Are we talking 2x, 3x, 10x on speed, quality, time, customer experience, or any of these metrics, or is the return not really that significant? If traditional techniques would be more efficient, don't fixate on AI: brainstorm other use cases where AI could really bring a significant return on investment.
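As a back-of-the-envelope illustration of this ROI check, here is a tiny sketch. Every figure is hypothetical; plug in your own PoC measurements.

```python
# A back-of-the-envelope ROI check: how many times better is the AI
# approach on a given metric? All numbers below are made up.

def roi_multiple(baseline_cost, ai_cost):
    """How many times cheaper (or faster) the AI approach is, e.g. 2x, 3x."""
    return baseline_cost / ai_cost

# Manual inspection: 120 seconds per item; PoC model: 10 seconds per item.
speedup = roi_multiple(120, 10)

# Annual inspection labour: $400k manual vs $90k with the model in the loop.
cost_ratio = roi_multiple(400_000, 90_000)
```

If the multiples come out close to 1x rather than 2x or 10x, that is the signal to consider traditional techniques or a different use case.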

play14:14

The second question: what accuracy boost can we expect for a real production system versus the proof of concept? Proofs of concept act as good reality checks for what we can potentially achieve if we invest more in a production system. Say you're getting 70% accuracy from your model; with more data, better modeling techniques, and more iterations, you might achieve 85% or even 90%. Would that be enough for your use case or not? Sometimes it is and sometimes it isn't, depending on the use case. For self-driving cars, for example, you can't settle for that: you have to hit something like 99.999% for your perception systems, because it's vital for those cars to see what's on the road. For other use cases, say automating visual quality inspection in an industrial setting, 80% accuracy might be fine and can still save you a lot of time. So, based on the accuracy you're getting for your model in the PoC, you can start estimating what boost you could get in a production system, decide whether that would be enough, and then decide whether to invest further in the experiment.
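That reality check can be sketched as a small helper. The baseline accuracy, expected boost, and thresholds below are illustrative assumptions, not figures from any real system.

```python
# Given PoC accuracy and an estimated production boost, is the projected
# accuracy enough for this use case? All thresholds are illustrative.

def good_enough(poc_accuracy, expected_boost, required):
    """Project production accuracy and compare it to the use case's bar."""
    projected = min(poc_accuracy + expected_boost, 1.0)
    return projected >= required, projected

# Industrial visual inspection might be fine at 80%...
ok, projected = good_enough(0.70, 0.15, required=0.80)

# ...while a self-driving perception system is nowhere near its bar.
ok2, _ = good_enough(0.70, 0.15, required=0.99999)
```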

play15:31

The third question: if you aren't getting good results from your PoC, what are the reasons for that? Is it worth doing a second round of the PoC or moving to a production system? Is it worth allocating development investment, and what changes do we need to make to ensure better outcomes? If you're not getting good results, there are usually three main reasons: insufficient data, insufficient skills, or working on the wrong problem. Let's take these one by one.

Insufficient data: sometimes you need either different datasets or more of the data you already have. Sometimes the data you're working with simply does not have enough predictive power for you to find relationships and make predictions. Say, for example, you're trying to forecast retail store demand, or retail demand in general, using only historical sales data. If you're not getting good accuracy, you probably need to add more datasets, for example weather, promotions, historical marketing campaigns, and maybe economic indicators; once you start adding these, you might get better accuracy. Or perhaps you're working with only one or two years of data when you need five or six years of history to solve the problem. The question in either case: is it feasible to get these datasets, whether that means more of the data you have or different data entirely? If the answer is yes, that's usually a good sign, and it may be worth another round of the PoC with more data to see whether results improve.
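The enrichment idea in the retail example could look like the sketch below: joining extra signals onto historical sales before modelling. The field names and values are made up for illustration.

```python
# Joining extra signals (weather, promotions) onto historical sales
# to give the model more predictive power. All data is illustrative.

sales = {"2023-06-01": 120, "2023-06-02": 95}      # units sold per day
weather = {"2023-06-01": 31.0, "2023-06-02": 24.5}  # daily max temp, deg C
promo = {"2023-06-01": True, "2023-06-02": False}   # promotion running?

def enrich(sales, weather, promo):
    """One training row per day, combining all available signals."""
    rows = []
    for day, units in sorted(sales.items()):
        rows.append({
            "date": day,
            "units_sold": units,
            "max_temp_c": weather.get(day),
            "on_promotion": promo.get(day, False),
        })
    return rows

training_rows = enrich(sales, weather, promo)
```

Whether such extra datasets are actually obtainable is exactly the feasibility question raised above.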

play17:05

The second reason could be insufficient skills. Say you're trying to solve a computer vision problem, the very example we have at hand, automating visual quality inspection, and you're not getting good results. Maybe it's not about the data; maybe it's about the team's skills. Perhaps you have an in-house team with great experience in predictive analytics, working with tabular data to make predictions about demand and similar use cases, but they haven't worked on deep learning computer vision problems before. They're still learning and might not be familiar with the latest architectures, or the latest techniques for tuning hyperparameters and squeezing out better accuracy. If that's the reason, it's not a showstopper; there are other routes you can take. You can hire a consultant who has worked on these problems before, or consider partnering with a technology company that has solved them. But if you don't have access to either, and you also have no meaningful way to upgrade your team's skills, you need to make a decision here, because you shouldn't expect amazing results if you can't improve either of those.

play18:16

The third possible reason for not getting great results is, as mentioned earlier, that you're simply working on the wrong problem: trying to force AI onto something that could really be solved with traditional non-AI techniques. I see this happen a lot, by the way. Some customers I've worked with are so excited about AI, or have a mandate to use it, that they rush into applying it to problems that, honestly, other tools could solve more efficiently. If that's the case, consider those other tools and move on to other use cases.
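As one hypothetical example of such a "traditional technique", some inspection checks are really just fixed-threshold rules that classical image processing handles without any learned model. The threshold and images below are illustrative.

```python
# Some "defect detection" problems are simple threshold checks that need
# no machine learning at all. The threshold value is illustrative.

def too_dark(pixels, threshold=40):
    """Flag an image whose mean brightness falls below a fixed threshold."""
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat) < threshold

dark_image = [[10, 20], [30, 20]]        # mean brightness 20 -> flagged
normal_image = [[120, 130], [110, 140]]  # mean brightness 125 -> fine
```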

play18:53

The fourth question is a really important one: are you going to use off-the-shelf models, services, or tools to build the production system after the PoC, or will you need to build it from scratch? The PoC gives you a very good chance to explore the different tools and services on the market, the various cloud AI services, pre-trained models, open source tools, and so on, and to see whether you can leverage any of them when building your real production system. If you have tried many of these tools and none of them delivers decent results, you may need to consider building a system from scratch. That has its own advantages: you have a lot of control and can craft exactly the workflows you want. But it comes at a cost; it requires strong technical skills and will probably take longer. On the other side, leveraging off-the-shelf tools, solutions, and cloud AI services can save you a lot of time, and you can proceed with less AI experience in-house, but it might not give you the flexibility of building things from scratch. There is no right or wrong here; you have to weigh the pros and cons of each. Based on the effort you put into the proof of concept, you need to understand what solutions are available, whether you can leverage them, and whether they are enough. Answering this question will help you gauge the complexity of a production system and its expected time and budget, which are very important to assess before committing a specific budget, investment, or team effort to solve that problem.

play20:28

Question number five: what is the total expected cost of a production system, in light of what we have seen in the proof of concept experiment? Here's the thing: in the proof of concept we usually focus only on the science aspects of the problem, on building a good model with good accuracy to test our hypothesis. For a production system, you need to scale this across the enterprise, which means moving from the limited PoC scope to a fully fledged one. You will need to take care of several items, each with its own cost. For example, the ETL or data pipelines: how will you get input data from different systems, integrate it, process it, clean it, and pass it to the model, and then how will you get predictions out of the model and feed them into different information products? To do that, you'll need software engineering to integrate the model with the different systems in your company, and you'll need this flow to happen in an automated fashion. In a PoC you typically run the model in a Jupyter notebook or as a Python script; in a real production system that won't work, so the flow has to be automated. So there is a software engineering cost, an infrastructure cost, and an ETL cost, plus ongoing maintenance and support. Model results can change over time, they can degrade, and different types of drift can occur, so you will likely need to continuously retrain the model and redeploy it to production.
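The drift monitoring just mentioned can be sketched as a simple check: track live accuracy against the PoC baseline and flag when retraining is likely needed. The baseline and tolerance values are illustrative assumptions.

```python
# Flag when recent production accuracy has drifted too far below the
# baseline established in the PoC. Baseline and tolerance are illustrative.

def needs_retraining(recent_accuracies, baseline=0.90, tolerance=0.05):
    """True when average recent accuracy drops too far below the baseline."""
    avg = sum(recent_accuracies) / len(recent_accuracies)
    return avg < baseline - tolerance

stable_week = [0.91, 0.89, 0.90, 0.88]    # hovering near the baseline
drifting_week = [0.84, 0.82, 0.80, 0.79]  # clearly degrading
```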

play21:55

So you need to take care of these different items, assess the cost associated with each, and build a business case: given the return you expect from a production system with more data and more iterations, and the cost you're looking at, what does the ROI look like? Is it justifiable? If yes, definitely proceed. If not, consider cutting some parts of the production system, pushing some parts to a later stage, and focusing on the core aspects instead. At the end of the day, the proof of concept should give you a good idea of what a production system would cost, and that will help you make the decision and build the business case to justify the investment.
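That business-case arithmetic can be as simple as the rollup below. Every figure is hypothetical; substitute your own estimates for each cost item.

```python
# Rough production-system business case: sum the cost items identified
# after the PoC and compare them to the expected return. Figures are
# hypothetical placeholders.

yearly_costs = {
    "software_engineering": 150_000,
    "infrastructure": 40_000,
    "etl_pipelines": 30_000,
    "maintenance_and_retraining": 60_000,
}
expected_yearly_return = 700_000  # e.g. savings from automated inspection

total_cost = sum(yearly_costs.values())
roi = expected_yearly_return / total_cost
```

If the resulting multiple is too low, that is the cue to defer or cut parts of the production system, as described above.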

play22:42

These were just some examples of how the PoC can help you ask the right questions and get the answers you need to plan for the next stage: building a production system. I hope today's episode was useful, and that you've learned how to take your idea to action, start planning the experiment, and keep the main elements in mind when running it. In the next episode, we're going to look into a very interesting subject: building AI capacity. We'll talk about building AI teams, and what kind of skills, resources, and talent you need based on your stage and what you're trying to do; there is no black and white here, no single right answer, just different flavors depending on your goals. We'll also weigh the option of outsourcing, comparing the two approaches, building AI teams versus outsourcing, each of which has its own pros and cons. And we'll look at many different scenarios for what kind of AI team you might need in-house, plus some tips and tricks on how to hire and retain the best AI talent. If you liked today's episode, I'd really appreciate it if you shared it with others who might benefit from it, and liked the video on social media and on YouTube. With that said, see you next episode, and thank you for your time today.

play24:08
