Data Science Life Cycle | Life Cycle Of A Data Science Project | Data Science Tutorial | Simplilearn

Simplilearn
22 Jun 2020 · 17:48

Summary

TL;DR: In this session on data science, Mohan introduces the life cycle of a data science project, starting with the concept study, where the business problem and the available data are understood. He then discusses data preparation, including data gathering, integration, and cleaning. Mohan explains model planning and building, highlighting various algorithms and exploratory data analysis techniques. The session covers training and testing models, deploying them, and communicating results to stakeholders. Finally, he summarizes the process, emphasizing the importance of presenting and operationalizing the findings to solve business problems effectively.

Takeaways

  • 📚 The first step in a data science project is the concept study, which involves understanding the business problem and available data, and meeting with stakeholders.
  • 🔍 Data preparation, also known as data munging or data manipulation, is crucial for transforming raw data into a usable format for analysis.
  • 🔧 Data scientists explore and clean the data, handling issues like missing values, null values, and improper data types.
  • 📈 Data integration, transformation, reduction, and cleaning are all part of the data preparation process to ensure data quality for analysis.
  • ⚖️ Handling missing values can involve removing records, filling them with mean or median values, or using more complex methods depending on the dataset's size and importance.
  • 📊 Exploratory data analysis (EDA) uses visualization techniques like histograms and scatter plots to understand data patterns and relationships.
  • 🤖 Model planning involves selecting the right statistical or machine learning model based on the problem, such as regression for continuous outcomes or classification for categorical outcomes.
  • 🛠️ Model building is the execution phase where the chosen algorithm is trained with the cleaned data to create a predictive model.
  • 📉 Testing the model with a separate dataset ensures its accuracy and reliability before deployment.
  • 🛑 If the model fails to meet accuracy expectations during testing, it may need to be retrained or a different algorithm may be required.
  • 📑 Communicating results effectively to stakeholders and operationalizing the model to solve the initial business problem is the final step in the data science lifecycle.

Q & A

  • What is the first step in the life cycle of a data science project?

    - The first step is the concept study, which involves understanding the business problem, meeting with stakeholders, and assessing the available data.

  • Why is it important to meet with stakeholders during the concept study phase?

    - Meeting with stakeholders helps to understand the business model, clarify the end goal, and determine the budget, which are all crucial for the project's success.

  • What are some examples of data issues that might be encountered during data preparation?

    - Examples include missing values, null values, improper data types, and data redundancy from multiple sources.
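
As a rough illustration of cleaning up such issues, here is a minimal pandas sketch on a small made-up table; the column names and values are hypothetical, and pandas is just one common tool choice (the video itself discusses tools later):

```python
import pandas as pd
import numpy as np

# Hypothetical raw extract illustrating the issues above: a missing value,
# a literal "null" string, a non-numeric price, and a duplicated customer.
df = pd.DataFrame({
    "customer_id": [101, 102, 102, 103],     # 102 appears twice (redundancy)
    "age": [34, None, None, "null"],         # missing and "null" values
    "price": ["1200", "950", "950", "abc"],  # improper data type
})

df = df.drop_duplicates(subset="customer_id")  # remove redundant records
df = df.replace("null", np.nan)                # normalize literal nulls to NaN
df["price"] = pd.to_numeric(df["price"], errors="coerce")  # bad strings -> NaN

print(df.dtypes)
print(df)
```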

  • What is the purpose of data munching or data manipulation in the data preparation phase?

    - Data munging, or data manipulation, transforms raw data into a usable format for analysis, addressing issues like data gaps, structural inconsistencies, and irrelevant columns.

  • How can data scientists handle missing values in a dataset?

    - They can handle missing values by removing records with missing data if the percentage is small, or by imputing values using the mean, median, or mode of the dataset.
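
A minimal sketch of that drop-versus-impute decision using pandas, on a hypothetical carat column; the 1% cutoff below is an arbitrary illustrative threshold, not a rule from the video:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"carat": [0.5, 1.0, np.nan, 1.5, np.nan, 2.0]})

missing_fraction = df["carat"].isna().mean()  # fraction of rows with gaps

if missing_fraction < 0.01:
    # Tiny fraction missing: dropping whole rows is acceptable.
    df = df.dropna(subset=["carat"])
else:
    # Too many rows to discard: impute with the column mean
    # (median is often preferred when the column has outliers).
    df["carat"] = df["carat"].fillna(df["carat"].mean())
```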

  • Why is it essential to split data into training and test sets during model preparation?

    - Splitting data ensures that the model is tested on unseen data, providing a more accurate measure of its performance and preventing overfitting.
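
As an illustration, here is a short sketch using scikit-learn's train_test_split, one common way to produce the 80/20 split mentioned in the video; the X and y arrays are made up:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix X and target vector y.
X = np.arange(100).reshape(-1, 1)
y = 2.5 * X.ravel() + 7.0

# test_size=0.2 gives the 80/20 split; 50/50 and two-thirds/one-third
# are other common choices. random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(len(X_train), len(X_test))  # 80 20
```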

  • What is exploratory data analysis, and why is it important?

    - Exploratory data analysis is the initial examination of data to discover patterns and understand the data types and distributions. It's important for identifying data issues and guiding the choice of models.
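
A small pandas/matplotlib sketch of these EDA steps on a hypothetical diamonds table: summary statistics first, then a histogram and a scatter plot:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical diamonds table with carat and price columns.
df = pd.DataFrame({
    "carat": [0.3, 0.5, 0.9, 1.2, 1.5, 2.0],
    "price": [450, 900, 2800, 5600, 8200, 14000],
})

print(df.describe())  # count, mean, min, max, quartiles per column
print(df.dtypes)      # check each column's data type

df["price"].hist()    # distribution of a single variable
plt.show()

df.plot.scatter(x="carat", y="price")  # relationship between two variables
plt.show()
```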

  • What are some common tools used for model planning and building in data science?

    - Common tools include R, Python with libraries like pandas and NumPy, MATLAB, and SAS, each offering capabilities for statistical analysis, machine learning, and data visualization.

  • Can you explain how linear regression works in the context of model building?

    - Linear regression works by finding the best-fit straight line that represents the relationship between an independent variable and a dependent variable. The model training process determines the slope (m) and y-intercept (c) for the given data.
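
A minimal sketch of this idea using scikit-learn's LinearRegression, with hypothetical carat/price pairs standing in for the diamond data; the fitted coef_ and intercept_ correspond to the m and c described above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical carat/price pairs.
carats = np.array([[0.5], [0.8], [1.0], [1.2], [1.5], [2.0]])
prices = np.array([3500, 5700, 7200, 8700, 11000, 14800])

model = LinearRegression().fit(carats, prices)  # training finds m and c

m = model.coef_[0]    # slope in y = mx + c
c = model.intercept_  # y-intercept

predicted = model.predict([[1.35]])[0]  # price of a 1.35-carat diamond
print(f"m={m:.1f}, c={c:.1f}, predicted price={predicted:.0f}")
```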

  • What is the final step in the data science project life cycle after obtaining results?

    - The final step is operationalizing the results, which involves communicating the findings to stakeholders, getting their acceptance, and putting the model into practice to solve the stated problem.

Outlines

00:00

📚 Introduction to Data Science Lifecycle

The script introduces the life cycle of a data science project, beginning with the concept study phase. This phase involves understanding the business problem, engaging with stakeholders, and assessing the available data. The importance of asking questions, identifying specifications, and reviewing previously solved examples of similar problems is highlighted. The script sets the stage for a deeper dive into the subsequent steps of a data science project.

05:00

🔍 Data Preparation and Exploration

This paragraph delves into the intricacies of data preparation, also known as data munging or data manipulation. It discusses the challenges of working with raw data, such as gaps, structural inconsistencies, and redundancy. The paragraph outlines subtopics like data integration, transformation, reduction, and cleaning. It also touches on handling missing and null values, and the importance of data cleaning for accurate analysis. Strategies for dealing with large datasets and missing values are suggested, emphasizing that approaches vary with each project's specific needs.

10:01

📈 Model Planning and Building

The script moves on to model planning, where the type of model or algorithm to be used is decided based on the problem at hand. It explains the iterative process of model training using cleaned data and the importance of exploratory data analysis for understanding data relationships and preparing for model building. The paragraph also introduces the concept of splitting data into training and test sets to ensure the model's accuracy. Tools for model planning, such as R, Python, MATLAB, and SAS, are mentioned, highlighting their roles in statistical analysis and machine learning.

15:02

💬 Communicating Results and Operationalizing Solutions

The final paragraph focuses on the importance of communicating the results of data analysis to stakeholders and the process of operationalizing the findings. It emphasizes that presenting the results effectively and getting them accepted is crucial for solving the initial problem stated. The paragraph summarizes the entire data science lifecycle, from concept study to data preparation, model planning, building, and finally, the presentation and implementation of the solution.

Keywords

💡Data Science

Data Science is an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. In the video, it is the central theme, with the focus on the life cycle of a data science project, which includes various stages from understanding business problems to deploying solutions.

💡Life Cycle

The life cycle in the context of the video refers to the series of stages a data science project goes through from its inception to completion. This includes concept study, data preparation, model planning, model building, and operationalization, which are all essential for a comprehensive understanding of how data science projects are managed.

💡Concept Study

Concept Study is the initial phase of a data science project where the business problem is understood, stakeholders are met, and data availability is assessed. It sets the foundation for the project by defining the problem and determining the feasibility of finding a solution with the available data, as mentioned in the script.

💡Data Preparation

Data Preparation, also known as data munging or data manipulation, involves cleaning, integrating, transforming, and reducing raw data to make it suitable for analysis. The script explains that this step is crucial because it ensures the data is in a usable format, free from errors and gaps, which directly impacts the accuracy of the analysis.

💡Data Munging

Data Munging is a subset of data preparation where the data scientist explores and manipulates the data to fill gaps, remove unnecessary columns, and ensure the data structure is appropriate for analysis. The term is used in the script to describe the detailed work of making raw data ready for the modeling process.

💡Model Planning

Model Planning is the phase where the type of model and algorithm to be used for the data science project is decided. It depends on the nature of the problem and involves statistical or machine learning models. The script emphasizes the importance of this step in determining the approach to solving the business problem at hand.

💡Exploratory Data Analysis (EDA)

Exploratory Data Analysis is an approach to analyze data sets to summarize their main characteristics, often using visual methods. In the script, EDA is mentioned as a preparatory step to understand the data's properties, detect outliers, and discover patterns, which informs the choice of model and the subsequent analysis.

💡Machine Learning

Machine Learning is a subset of artificial intelligence that provides systems the ability to learn and improve from experience without being explicitly programmed. The script refers to machine learning models such as logistic regression, decision trees, and SVMs, which are trained to make predictions or decisions based on input data.

💡Training Data

Training Data is the subset of the data set used to train machine learning models. The script explains that the data is used to teach the model to make predictions or decisions, and it is a critical part of the model building process, where the model learns from the data to perform tasks without being explicitly programmed.

💡Testing Data

Testing Data is the portion of the data set held back from the training process and used to evaluate the performance of a trained model. In the script, it is mentioned that testing data helps to ensure the model's accuracy by providing a measure of how well it performs on unseen data.

💡Operationalization

Operationalization is the final step in the data science project life cycle where the validated model or findings are put into practice to solve the business problem. The script highlights the importance of this step as it involves communicating the results, getting acceptance, and implementing the solution in a real-world context.

Highlights

Introduction to the life cycle of a data science project by Mohan.

Concept study involves understanding the business problem and meeting with stakeholders.

Examples of concept study include understanding specifications, end goals, and budget.

Data preparation involves data gathering, exploration, and manipulation.

Data munging is the process of making raw data usable for analysis.

Handling missing and null values as part of data cleaning.

Data integration addresses conflicts and redundancy in merged data sets.

Data transformation ensures consistency when merging data from multiple sources.

Data reduction techniques for managing large data sizes without losing information.

Exploratory data analysis to understand relationships between variables and data appropriateness.

Visualization techniques such as histograms and scatter plots for exploratory data analysis.

Model planning includes deciding on the type of statistical or machine learning model to use.

Model building involves training the chosen model with cleaned data.

Iterative training process for models to achieve good accuracy.

Tools used for model planning include R, Python, MATLAB, and SAS.

Linear regression as an example of model building for predicting diamond prices.

Communicating results to stakeholders through presentations or dashboards.

Operationalizing the model by putting it into practice to solve the stated problem.

Summary of the data science project life cycle from concept study to operationalization.

Transcripts

[00:04] Hello and welcome to this session on data science. My name is Mohan, and today we are going to take a look at what this buzz is all about. Let's talk about the life cycle of a data science project. The first step is the concept study. This step involves understanding the business problem: asking questions, getting a good understanding of the business model, meeting with all the stakeholders, and understanding what kind of data is available. Here are a few examples. We want to see what the various specifications are, what the end goal is, and what the budget is. Is there an example of this kind of problem that has perhaps been solved earlier? All of this is part of the concept study. Another example could be a very specific one: to predict the price of a 1.35-carat diamond, where relevant inputs are available and we want to predict the price.

[01:11] The next step in this process is data gathering and data preparation, also known as data munging or data manipulation. What happens here is that the raw data that is available may not be usable in its current format, for various reasons. That is why in this step a data scientist explores the data. He will take a look at some sample data; maybe there are millions of records, so he picks a few thousand and sees how the data looks. Are there any gaps? Is the structure appropriate to be fed into the system? Are there some columns that are probably not adding value and may not be required for the analysis? Very often these are things like the names of the customers, which will probably not add much value from an analysis perspective. There is also the structure of the data: maybe the data is coming from multiple data sources and the structures do not match. What are the other problems? There may be gaps in the data, so not all the columns and cells are filled; if we are talking about structured data, there are several blank records or blank columns. If you use that data directly, you will get errors or inaccurate results. So how do you either get rid of that data, or fill these gaps with something meaningful? All of that is part of data munging or data manipulation, and there are some additional subtopics within it.

[02:44] Data integration is one of them: there may be conflicts in the data, and data redundancy is another issue. Say you have data coming from two different systems, and both of them have a customer table, for example. When you merge them, there is a duplication issue, so how do we resolve that? Then data transformation: as I said, there will be situations where data is coming from multiple sources, and when we merge them together they may not match, so we need to do some transformations to make sure everything is consistent. We may have to do some data reduction: if the data size is too big, you may have to come up with ways to reduce it meaningfully without losing information. Then data cleaning: there will be wrong values or missing values, so how do you handle all of that?
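
As a rough sketch of the integration and deduplication steps just described, the snippet below stacks two hypothetical customer tables with pandas and drops the duplicated records; the table contents are invented for illustration:

```python
import pandas as pd

# Hypothetical customer tables coming from two different systems.
system_a = pd.DataFrame({"customer_id": [1, 2, 3],
                         "name": ["Asha", "Ben", "Carla"]})
system_b = pd.DataFrame({"customer_id": [2, 3, 4],
                         "name": ["Ben", "Carla", "Dev"]})

# Stack the two sources, then drop the records that appear in both.
customers = (pd.concat([system_a, system_b], ignore_index=True)
               .drop_duplicates(subset="customer_id")
               .reset_index(drop=True))

print(customers)  # customers 1-4, each exactly once
```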

[03:40] A few examples of very specific issues. How do you handle missing values or null values? In this particular slide we are seeing three types of issues. One is a missing value, and then you have a null value; you see the difference between the two, right? For a missing value there is nothing, just a blank, while a null value says "null", and the system cannot handle null values. Similarly, there is improper data: a field is supposed to hold a numeric value, but there is a string or a non-numeric value. So how do we clean and prepare the data so that our system can work flawlessly? There are multiple ways, and there is no one common way of doing this. It can vary from project to project, it can vary with what exactly the problem is that we are trying to solve, and it can vary from data scientist to data scientist and from organization to organization. These are some standard practices people have come up with, and of course there will be a lot of trial and error: somebody tried something, it worked, and they continue to use that mechanism. That's how we need to take care of data cleaning.

[04:44] Now, what are the various ways of handling missing values? If the data is large and only a few records have missing values, then it is okay to just get rid of those entire rows. For example, if you have a million records, out of which 100 records don't have full data, that's absolutely fine, because it is a small percentage of the data, so you can drop the entire records that have missing values. But that's not a very common situation. Very often a large part of the data set will be affected: out of a million records you may have 50,000 records with missing values. That's a significant amount, and you cannot get rid of all those records or your analysis will be inaccurate. So how do you handle such situations? Again, there are multiple ways. One is, if values are missing in a particular column, you can take the mean value for that column and fill all the missing values with it. That way, first of all, you don't get errors because of missing values, and second, you don't get results that are way off because the filled values are completely different from what is there. Other options include taking the median value or, depending on what kind of data we are talking about, putting in something else that is meaningful.

[06:15] If we are doing some machine learning activity, then obviously, as a part of data preparation, you need to split the data into a training and a test data set. The reason is that if you try to test with a data set that the system has already seen as a part of training, it will tend to give reasonably accurate results because it has already seen that data, and that is not a good measure of the accuracy of the system. So typically you take the entire input data set and split it into two parts, and again the ratio can vary with individual preferences: some people like to split it 50/50, some prefer 66.7 and 33.3, which is basically two-thirds and one-third, and some people do 80/20, 80 for training and 20 for testing. So you split the data, perform the training with the 80 percent, and then use the remaining 20 for testing. That is one more data preparation activity that needs to be done before you start analyzing the data or putting it through the model.

[07:22] The next step is model planning. These models can be statistical models or machine learning models, so you need to decide what kind of model you are going to use. Again, it depends on the problem you are trying to solve. If it is a regression problem, you need to think of a regression algorithm and come up with a regression model, which could be linear regression. If you are talking about classification, then you need to pick an appropriate classification algorithm, like logistic regression, a decision tree, or an SVM, and then you need to train that particular model. That is the model building or model planning process, and the cleaned-up data has to be fed into the model. Apart from cleaning, in order to determine what kind of model you will use, you have to perform some exploratory data analysis to understand the relationships between the various variables and see whether the data is appropriate. That is an additional preparatory step that needs to be done. So, a little bit of detail about exploratory data analysis.

[08:31] What exactly is exploratory data analysis? As the name suggests, you are just exploring: you have just received the data, and you are trying to explore it and find out what the data types are, whether the data is clean in each of the columns, and what the maximum and minimum values are. For example, there is out-of-the-box functionality available in tools like R: if you just ask for a summary of a table, it will give you details for each column, such as the mean value, the maximum value, and so on. This exploratory analysis is meant to give you an understanding of your data so that you can then take steps: if during this process you find there are a lot of missing values, you need to take steps to fix those, and you will also get an idea about what kind of model to use. What are the various techniques used for exploratory data analysis? Typically these are visualization techniques: you can use histograms, box plots, and scatter plots. These are very quick ways of identifying the patterns and trends in the data.

[09:42] Then, once your data is ready and you have decided on the model and the algorithm you are going to use, if you are doing machine learning you pass your 80 percent, the training data, to train your model. The training process itself is iterative, so you may have to perform it multiple times. Once the training is done and you feel it is giving good accuracy, you move on to testing. You take the remaining 20 percent of the data (remember, we split the data into training and test), and the test data is now used to check the accuracy, or how well our model is performing. If there are further issues, say the accuracy during testing is still not good, then you may want to retrain your model or use a different model, so this whole thing again can be iterative. But if the model passes the test, then it can go into production and it will be deployed.

play10:44

will be deployed all right so what are

play10:47

the various tools that we

play10:49

use for

play10:51

model planning r is an excellent tool in

play10:53

a lot of ways whether you're doing

play10:55

regular statistical analysis or machine

play10:58

learning or any of these activities are

play11:00

in along with our studio provides a very

play11:03

powerful environment to do data analysis

play11:06

including visualization it has a very

play11:08

good integrated visualization of plot

play11:11

mechanism which can be used for doing

play11:13

exploratory data analysis and then later

play11:16

on to do

play11:17

analysis detailed analysis and machine

play11:19

learning and so on and so forth then of

play11:21

course you can write python programs

play11:23

python offers a rich library for

play11:26

performing data analysis and machine

play11:28

learning and so on matlab is a very

play11:31

popular tool as well especially during

play11:34

education so this is a very easy to

play11:37

learn tool so matlab is another

play11:39

tool that can be used and then last but

play11:41

not least sas sas is again very powerful

play11:45

it is a preparatory tool and it has all

play11:48

the components that are required to

play11:50

perform very good statistical analysis

play11:53

or perform data science so those are the

play11:56

various tools that would be required for

play11:59

or that that can be used for model

play12:01

building and

[12:02] The next step is model building. We have done the planning part: we have decided what algorithm and what kind of model we are going to use. Now we need to actually train this model, or rather build it, so that it can then be deployed. What are the various types of model building activities? In the particular example we have taken, you want to find out the price of a 1.35-carat diamond. This is, let's say, a linear regression problem: you have data for various carats of diamond, and you pass that information through a linear regression model, or rather you create a linear regression model, which can then predict the price for 1.35 carats. So this is one example of model building.

[13:01] Now, a little bit of detail on how linear regression works. Linear regression is basically coming up with a relation between an independent variable and a dependent variable; it is pretty much like coming up with the equation of a straight line that is the best fit for the given data. For example, y = mx + c: y is the dependent variable and x is the independent variable, and we need to determine the values of m and c for our given data. That is what the training process of this model does. At the end of the training process you have certain values of m and c, and those are used for predicting the value for any new data that comes in. The way it works is that we use the training data set to train the model and then validate whether the model is working fine or not using the test data. If it is working fine, it is taken to the next level, which is being put into production; if not, the model has to be retrained. If the accuracy is not good enough, the model is retrained, maybe with more data, or you come up with a newer model or algorithm and repeat the process; it is an iterative process. Once the training and testing are completed, the model is deployed, and we can use this particular model to determine the price of a 1.35-carat diamond (remember, that was our problem statement). Now that we have the best fit for the given data, we have the price of the 1.35-carat diamond, which is 10,000. So this is one example of how this whole process works.

[14:46] Now, how do we build the model? There are multiple ways: you can use Python, for example, and use libraries like pandas or NumPy to build the model and implement it. This will be available as a separate tutorial, a separate video, in this playlist, so stay tuned for that.
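
As a small taste of the NumPy route mentioned here (not the separate tutorial the video refers to), the sketch below computes m and c directly with a least-squares fit on hypothetical carat/price pairs:

```python
import numpy as np

# Hypothetical carat/price pairs; a least-squares fit finds the best-fit
# line y = m*x + c, which is exactly what training a linear model computes.
x = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0])
y = np.array([3500, 5700, 7200, 8700, 11000, 14800])

m, c = np.polyfit(x, y, deg=1)  # degree-1 polynomial fit = straight line

price_135 = m * 1.35 + c        # predict the 1.35-carat price
print(f"m={m:.1f}, c={c:.1f}, predicted price={price_135:.0f}")
```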

[15:07] Moving on: once we have the results, the next step is to communicate these results to the appropriate stakeholders. This basically means taking the results, preparing a presentation or a dashboard, and communicating them to the concerned people. Getting the results of the analysis is not the last step: as a data scientist, you need to take these results, present them to the team that gave you this problem in the first place, explain the findings of this exercise, and recommend what steps they may need to take in order to overcome or solve the problem. Once that is accepted, the last step is to operationalize. If everything is fine and the data scientist's presentation is accepted, they put it into practice, and thereby they will be able to improve or solve the problem that they stated in step one.

[16:10] Okay, so a quick summary of the life cycle. You have the concept study, which is basically understanding the problem, asking the right questions, and trying to see if there is enough data to solve the problem, and maybe even gathering that data. Then data preparation: the raw data needs to be manipulated, and you need to do data munging, so that you have the data in a proper format to be used by the model or the analytics system. Then you need to do the model planning: what kind of model and what algorithm you will use for the given problem. Then model building: the exact execution of that model happens in step four, where you implement and execute the model and put the data through the analysis, and then you get the results. These results are then packaged, presented, and communicated to the stakeholders, and once they are accepted, the solution is operationalized; that is the final step.

[17:08] With that, we come to the end of this session. Thank you very much for watching this video. If there is any feedback, or any comments or questions, please put them below and we will get back to you; provide your contact information or email so that we can respond to you. Thank you very much once again, and have a good day. Bye-bye.

[17:36] Hi there! If you liked this video, subscribe to the Simplilearn YouTube channel and click here to watch similar videos. To nerd up and get certified, click here.


Related Tags

Data Science, Project Life Cycle, Concept Study, Data Preparation, Model Planning, Machine Learning, Regression Analysis, Data Munging, Exploratory Data, Model Deployment