Complete Road Map To Prepare For Deep Learning🔥🔥🔥🔥

Krish Naik
1 Oct 2020 · 16:23

Summary

TL;DR: In this video, Krish provides a complete roadmap for learning deep learning, highlighting its importance for modern data science jobs. He explains foundational concepts like neural networks, loss functions, optimizers, and activation functions, then covers the three pillars: artificial neural networks (ANN), convolutional neural networks (CNN), and recurrent neural networks (RNN), along with transfer learning. Viewers are encouraged to practice these techniques through projects, deployments, and cloud platforms. The video emphasizes hands-on learning and provides guidance on popular libraries like Keras, TensorFlow, and PyTorch.

Takeaways

  • 😀 Deep learning is essential for data science jobs today, alongside machine learning.
  • 📚 The core of deep learning includes neural networks that mimic the human brain's learning process.
  • 🧠 The foundational topics include neural networks, loss functions, optimizers, and activation functions, which are essential before diving into specific models.
  • 🤖 Mastering artificial neural networks (ANN) is a critical first step, followed by deployment knowledge using tools like Flask and cloud platforms.
  • 🖼️ Convolutional Neural Networks (CNN) are vital for handling image and video data, with key concepts like filters, strides, and layers to understand.
  • 📈 Recurrent Neural Networks (RNN) and their variants like LSTM, GRU, and attention models are crucial for sequence-based tasks like natural language processing (NLP) and time series forecasting.
  • 🛠️ Libraries such as Keras, TensorFlow, PyTorch, Hugging Face, and Ktrain are important tools for implementing deep learning models.
  • 📦 Deployment skills, including Dockerization and cloud services, enhance practical knowledge and project completion.
  • 🔍 For image-based tasks, CNN combined with transfer learning techniques (like ResNet and Inception) is highly recommended.
  • 🏆 Staying updated with state-of-the-art models like GPT-3 is important, but the focus should remain on mastering the fundamentals of ANN, CNN, and RNN for interviews.

Q & A

  • What is the significance of deep learning in the current job market for data science roles?

    -Deep learning is becoming increasingly important in the data science job market. While machine learning was sufficient to get jobs a few years ago, many companies now seek candidates proficient in both deep learning and machine learning to meet industry demands.

  • Why did deep learning come into existence, and what is its primary objective?

    -Deep learning was developed to mimic the human brain, allowing machines to learn in a way similar to humans. The concept of neural networks, which underpins deep learning, was pioneered by researchers like Geoffrey Hinton, particularly through his work on the backpropagation algorithm.

  • What foundational concepts should be understood before diving into deep learning?

    -Before learning deep learning, one should understand neural networks, loss functions, optimizers (like gradient descent and Adam), and activation functions (like ReLU, tanh, and sigmoid). Mastering the math behind these concepts is essential for grasping deep learning.
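
As a rough illustration of where these pieces plug in, here is a minimal Keras sketch (assuming TensorFlow 2.x; the layer sizes are arbitrary placeholders, not values from the video):

```python
import tensorflow as tf

# Minimal sketch: where activations, the loss function, and the optimizer
# appear in a Keras model. All sizes and rates are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),  # ReLU
    tf.keras.layers.Dense(32, activation="tanh"),                     # tanh
    tf.keras.layers.Dense(1, activation="sigmoid"),                   # sigmoid output
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # optimizer
    loss="binary_crossentropy",                               # loss function
    metrics=["accuracy"],
)
```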

  • What is the role of optimizers in deep learning, and why is Adam Optimizer widely used?

    -Optimizers, such as gradient descent and Adam, adjust the weights of a neural network during training to minimize the loss function. Adam is popular because it adapts the learning rate dynamically, improving the speed and accuracy of training compared to other optimizers.
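
For intuition, a single Adam update can be written out in plain NumPy. This is a textbook sketch of the update rule with the commonly cited default hyperparameters, not a production implementation:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a weight array w at timestep t (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad           # moving average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2      # moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-weight adaptive step
    return w, m, v
```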

  • How can you apply the knowledge of artificial neural networks (ANN) to real-world projects?

    -Once familiar with ANN, you can solve machine learning problems like regression and classification. You should also learn how to serve ANN models through web frameworks like Flask and deploy them on cloud platforms like AWS, Azure, or Heroku for practical applications.
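
A minimal Flask serving sketch might look like the following (the model file name, input format, and port are hypothetical placeholders):

```python
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("model.h5")  # hypothetical saved ANN

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [0.1, 0.2, ...]} matching the model's input.
    features = np.array(request.json["features"], dtype=np.float32)
    prediction = model.predict(features.reshape(1, -1))
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```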

  • What are the key differences between artificial neural networks (ANN) and convolutional neural networks (CNN)?

    -While ANN is used for general machine learning tasks, CNN is specialized for processing image and video data. CNNs use additional layers like convolution layers, which apply filters to images, enabling the network to detect patterns and features in visual data.
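
A bare-bones Keras CNN shows how convolution and pooling layers sit in front of the fully connected head (input size, filter counts, and class count are illustrative only):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(64, 64, 3)),   # 3x3 filters over the image
    tf.keras.layers.MaxPooling2D((2, 2)),              # downsample feature maps
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # fully connected head
    tf.keras.layers.Dense(10, activation="softmax"),   # e.g. 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```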

  • What are some advanced topics to explore in convolutional neural networks (CNN)?

    -Advanced topics include transfer learning techniques like VGG, ResNet, and Inception. For more experienced learners, object detection algorithms like RCNN, Mask RCNN, SSD, and YOLO are crucial to understand for tasks involving object recognition in images.
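
A common transfer learning pattern, sketched here for a hypothetical 5-class image task, is to freeze a pre-trained VGG16 backbone and train only a new classification head:

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # hypothetical 5-class head
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```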

  • What are the key concepts in recurrent neural networks (RNN) and NLP applications?

    -In RNNs, concepts like LSTM (Long Short-Term Memory), GRU (Gated Recurrent Units), word embeddings, and sequence-to-sequence models are critical. NLP applications often involve tasks like translation, sentiment analysis, and text generation using models like BERT and Transformers.
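
A minimal Keras sketch of this stack, with an embedding layer feeding a bidirectional LSTM for binary text classification (vocabulary size and dimensions are placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),    # bidirectional LSTM
    tf.keras.layers.Dense(1, activation="sigmoid"),             # e.g. sentiment label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```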

  • Why is sequence-to-sequence modeling important in NLP, and how are encoders and decoders involved?

    -Sequence-to-sequence models are important for tasks where the input and output are both sequences, such as language translation. Encoders process the input sequence, while decoders generate the output sequence, often improved by using attention mechanisms for better context understanding.
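
The attention mechanism behind this improvement can be sketched in a few lines of NumPy as scaled dot-product attention: each decoder query is scored against all encoder keys, and the softmax weights decide how much of each encoder state flows into the output (shapes here are toy values):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                                       # weighted sum of values

# Toy shapes: 2 decoder queries attending over 4 encoder states of dimension 8.
Q = np.random.randn(2, 8)
K = np.random.randn(4, 8)
V = np.random.randn(4, 8)
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 8)
```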

  • What are some resources or platforms recommended for learning and practicing deep learning?

    -The speaker suggests libraries like Keras, TensorFlow, and PyTorch for deep learning implementation. Additionally, the Keras documentation and blog, Hugging Face (for NLP), and practical projects using Docker and cloud platforms like AWS are recommended.

Outlines

00:00

🧠 Introduction to Deep Learning Roadmap

Krish introduces the topic of deep learning preparation and addresses the growing importance of deep learning for data science jobs. He explains that deep learning is now crucial alongside machine learning for landing data science roles, then lays out a roadmap for learning it. He emphasizes foundational topics like neural networks, loss functions, optimizers, and activation functions, which form the basis for every deep learning technique that follows.

05:00

🔍 Core Concepts: Neural Networks and Optimization

This section focuses on essential deep learning concepts such as weight initialization, hyperparameter tuning, and optimizers like gradient descent, RMSprop, and Adam. Krish explains how these concepts relate to training neural networks and discusses libraries like Keras, TensorFlow, and PyTorch for implementation. He also suggests learning Google Colab for utilizing GPUs, along with project deployment on cloud platforms like AWS, Heroku, and Azure.
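
Since the section recommends Colab for GPU access, a one-line check (assuming TensorFlow, which Colab ships with) confirms whether a GPU runtime is active:

```python
import tensorflow as tf

# On a Colab GPU runtime this should list at least one device;
# an empty list means training will fall back to the CPU.
print(tf.config.list_physical_devices("GPU"))
```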

10:02

🖼️ Convolutional Neural Networks (CNN) and Image Processing

Krish introduces Convolutional Neural Networks (CNNs) and explains their application to image and video-frame processing. He highlights the key mathematical concepts behind CNNs, such as filters, strides, and how images shrink through successive convolutions. The section emphasizes mastering CNNs, transfer learning techniques like VGG16, and image classification projects, followed by advanced topics like object detection with algorithms such as RCNN, SSD, and YOLO.
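
The size reduction mentioned here follows the standard convolution output formula, output = (W - F + 2P) / S + 1, where W is the input width, F the filter size, P the padding, and S the stride. A quick sanity check in Python, using the 224x224 image and 3x3 / 5x5 filters cited in the video:

```python
def conv_output_size(w, f, p=0, s=1):
    """Output width of a convolution: (input - filter + 2*padding) // stride + 1."""
    return (w - f + 2 * p) // s + 1

print(conv_output_size(224, 3))        # 3x3 filter, stride 1 -> 222
print(conv_output_size(224, 5, s=2))   # 5x5 filter, stride 2 -> 110
```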

15:02

💬 Natural Language Processing (NLP) and Recurrent Neural Networks (RNN)

Krish shifts focus to sequence data and NLP techniques. He explains RNN-based methods, including LSTM, GRU, and bidirectional LSTM, along with word-processing techniques like word embeddings and attention models. He introduces libraries such as Hugging Face and Ktrain for implementing NLP and discusses how to apply these techniques to tasks like sales forecasting and language translation using models like BERT and Transformers.
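
As a minimal taste of the Hugging Face library mentioned here (assuming the transformers package is installed; the default pre-trained checkpoint is downloaded automatically on first use):

```python
from transformers import pipeline

# Sentiment analysis with a default pre-trained Transformer.
classifier = pipeline("sentiment-analysis")
print(classifier("Deep learning is becoming essential for data science jobs."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```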

Keywords

💡Deep Learning

Deep Learning is a subset of machine learning that focuses on artificial neural networks and deep neural networks. It is a method of teaching computers to do what comes naturally to humans: learning by example. In the video, the speaker discusses the importance of deep learning and how it has become a crucial skill for data science jobs, indicating that companies are increasingly looking for professionals with expertise in both deep learning and machine learning.

💡Neural Networks

Neural networks are a set of algorithms modeled loosely after the human brain that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling, or clustering raw input. The video emphasizes the foundational role of neural networks in deep learning, highlighting that they are used to mimic the human brain's learning process.

💡Backpropagation

Backpropagation is a method used to calculate the gradient of the loss function with respect to all the weights in the network, which is the key to learning in neural networks. The script mentions Geoffrey Hinton's paper on the backpropagation algorithm, which is foundational to the development of many deep learning techniques discussed in the video.
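
To make the idea concrete, the backward pass of a tiny two-layer network can be written out by hand in NumPy. This is a teaching sketch assuming sigmoid hidden units and a mean-squared-error loss, not how frameworks implement the general algorithm:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X, y = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))    # toy data
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))  # toy weights

# Forward pass.
h = sigmoid(X @ W1)     # hidden activations
y_hat = h @ W2          # linear output

# Backward pass: chain rule from the MSE loss back to each weight matrix.
d_out = 2 * (y_hat - y) / len(y)         # dLoss/dy_hat
grad_W2 = h.T @ d_out                    # dLoss/dW2
d_hidden = (d_out @ W2.T) * h * (1 - h)  # propagate through the sigmoid
grad_W1 = X.T @ d_hidden                 # dLoss/dW1

# One gradient-descent step on each weight matrix.
lr = 0.1
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```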

💡Convolutional Neural Network (CNN)

A Convolutional Neural Network is a type of deep learning algorithm commonly applied to analyze visual imagery. The video script discusses CNNs as a pillar of deep learning, emphasizing their use for processing image data, such as for image classification, and mentions the importance of understanding the mathematical concepts behind the convolution layer.

💡Optimizers

Optimizers in the context of neural networks are algorithms that adjust the weights of the network to minimize the loss function. The video mentions several types of optimizers, including gradient descent, stochastic gradient descent (SGD), Adagrad, RMSprop, and Adam, which are crucial for the training process of neural networks.

💡Loss Functions

Loss functions are a crucial part of training neural networks: they measure how well the model is performing and how far its predictions are from the targets. The script discusses loss functions like categorical cross-entropy, which are used to train models in deep learning.
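
Categorical cross-entropy, for example, is just the negative log-probability the model assigns to the true class; a toy NumPy computation (values invented for illustration):

```python
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])       # one-hot label: true class is index 1
y_pred = np.array([0.1, 0.7, 0.2])       # model's softmax output

loss = -np.sum(y_true * np.log(y_pred))  # categorical cross-entropy
print(loss)                              # -log(0.7) ~= 0.357
```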

💡Activation Functions

Activation functions introduce non-linearity into the model, allowing neural networks to learn arbitrary decision boundaries. The video script lists ReLU, tanh, and sigmoid as examples of activation functions that are fundamental to how neural networks process information.
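
The three functions named here are one-liners in NumPy, which makes their shapes easy to inspect:

```python
import numpy as np

def relu(x):     # zero for negatives, identity for positives
    return np.maximum(0.0, x)

def sigmoid(x):  # squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), np.tanh(x), sigmoid(x))  # tanh squashes into (-1, 1)
```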

💡Transfer Learning

Transfer learning is a method where a pre-trained model is adapted to a target task. The video script mentions transfer learning as a technique used in deep learning, particularly in the context of CNNs, where pre-trained models like VGG16, ResNet, and Inception V3 are used to boost performance on new tasks.

💡Recurrent Neural Network (RNN)

Recurrent Neural Networks are a class of neural networks that are designed to work with sequential data, like time series or natural language. The video discusses RNNs as a key component of deep learning for sequence-to-sequence data problems, such as natural language processing.

💡Object Detection

Object detection is a computer vision technique that not only classifies the object in an image but also determines the object's location. The video script suggests that while object detection is an advanced topic, it is an important area of deep learning, mentioning algorithms like R-CNN, YOLO, and SSD.
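
One building block shared by all of these detectors is intersection-over-union (IoU), which scores how well a predicted box overlaps a ground-truth box. A plain-Python sketch, with boxes written as [x1, y1, x2, y2]:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ~= 0.143
```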

💡Deployment

In the context of machine learning and deep learning, deployment refers to the process of putting a trained model into a production environment where it can make predictions. The video script stresses the importance of deployment skills, suggesting that learners should practice deploying models using web frameworks and cloud platforms.

Highlights

Deep learning is becoming essential for data science jobs, as companies increasingly look for both deep learning and machine learning expertise.

The main goal of neural networks in deep learning is to mimic the human brain and its learning processes.

Geoffrey Hinton's work on the backpropagation algorithm played a pivotal role in the development of deep learning techniques.

A strong foundation in neural networks, loss functions, optimizers, and activation functions is critical before diving into advanced deep learning concepts.

Important optimization techniques include gradient descent, stochastic gradient descent (SGD), Adagrad, RMSprop, and Adam Optimizer, with Adam being the state-of-the-art technique.

Artificial neural networks (ANNs) can solve the same regression and classification problem statements as classical machine learning, with greater capacity to learn complex patterns.

Serving deep learning projects through frameworks like Flask and deploying them on cloud platforms (Heroku, AWS, Azure) is a recommended practice.

Hyperparameter tuning, including weight initialization and deciding hidden layers and neurons, is crucial for ANN performance optimization.
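
The video points to Keras Tuner for this; a minimal sketch of letting the tuner choose layer width and learning rate (assuming the keras-tuner package is installed; the search ranges are placeholders):

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # The tuner picks the number of neurons and the learning rate per trial.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32),
                              activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(x_train, y_train, validation_split=0.2)  # run with your own data
```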

Keras, PyTorch, and TensorFlow are the recommended libraries, with Keras being the simplest choice since it ships inside TensorFlow 2.0.

Convolutional Neural Networks (CNNs) are primarily used for image and video processing and include layers like convolution and fully connected layers.

Understanding CNN concepts like filters, strides, and image reduction is essential, as well as the math behind them.

Transfer learning techniques, such as VGG16, ResNet, and InceptionV3, are highly beneficial for improving model performance in specific tasks.

Object detection algorithms, including RCNN, Mask RCNN, SSD, and YOLO, are essential for more advanced applications, especially for those with 2-3 years of experience.

Recurrent Neural Networks (RNNs) and their variants like LSTM, GRU, and attention models are vital for handling sequence-to-sequence data, such as natural language processing (NLP) and time series forecasting.

Hugging Face and KTrain libraries are recommended for practical applications in NLP, especially when working with transformers like BERT.

Transcripts

00:00

Hello all, my name is Krish, and welcome to my YouTube channel. So guys, many of you have been requesting a video on a complete road map to prepare for deep learning, so in this particular video I'll be discussing that. I'll be telling you the path you should take to learn deep learning in a proper way, so that you'll be able to cover most of the things and it'll be helpful for you in interviews. And one more thing, guys: yes, deep learning is becoming much more popular. Many of my subscribers have asked whether machine learning is enough to get into data science jobs. I'll say, guys, nowadays that will be difficult, because companies are looking for both deep learning and machine learning techniques; it is good to have both, and that is the trend. If you go three years back, I would definitely say that if you knew machine learning you'd be able to get a job. So let's understand this road map; after that, I'll also show you where you can learn all these things. Yes, some part of this is still to be uploaded to my playlist, but it will be uploaded soon; by the time you have learned everything else, all of it will be up.

01:03

So let's begin, guys. One of the most important things is why deep learning came into existence and what the main aim of the neural networks used in deep learning is: to mimic the human brain. Scientists and researchers thought: can we make a machine learn the way we human beings learn? Because of that, they came up with the concept of the perceptron and neural networks, and it was Geoffrey Hinton whose paper on the backpropagation algorithm led to the invention of all the techniques we have right now.

01:38

So let us go ahead and start with how we should approach deep learning itself. Now guys, at the base, in the bottom part, you can see I've written an introduction to neural networks, then loss functions, and optimizers: gradient descent, stochastic gradient descent, Adagrad, RMSprop, and Adam. Apart from this there are also activation functions like ReLU, tanh, and the sigmoid activation function. These are the base for everything we will try to learn, because during training, suppose we are training images in a convolutional neural network, you'll indirectly be using some kind of optimizer, some kind of loss function, and also some kind of activation function. So in short, to begin with, all the topics I have mentioned at the base are pretty important. You need to learn them and know the maths of how they work, because implementing them is hardly a single line of code, and training will automatically run until you reach an optimal minimum, a global minimum. Within this, please make sure you learn how a neural network works, where I talk about inputs, weights, and bias, because we keep updating the weights until we reach a global minimum, and until that point the training goes on. Apart from that, you also have loss functions; I'll show you where to learn them. There are amazing blogs on Keras covering loss functions like categorical cross-entropy, cross-entropy, binary cross-entropy, and many more, which we'll be going through. Then you have optimizers like gradient descent and stochastic gradient descent. If you understand gradient descent, I think stochastic gradient descent will also become easy, because there is only a minor change: gradient descent had a disadvantage, so they came up with SGD, then Adagrad, then RMSprop, and finally the state of the art, which is called the Adam optimizer. Nowadays everybody is using the Adam optimizer, because it is also able to adapt the momentum, basically your learning rate, as training goes on. A pretty amazing technique, and all of these are common to the three pillars I have drawn here. So please make sure that before starting with ANN, convolutional neural networks, or RNN, you are perfect with all these concepts; they are all available in my playlist, and I'll go through the playlist at the end.

03:57

Then we come to the first pillar, which is the artificial neural network. Once you understand all those basics, I think you'll directly be able to implement an artificial neural network. In machine learning you have a dataset, say a regression or classification problem; that same kind of problem statement you can solve with the help of an artificial neural network. Now, when you implement an artificial neural network, you can do it on your local machine, but you should also know Google Colab, because there will be GPU usage; you'll get an idea of how to work with a GPU, how to execute the programs, and see how the training actually happens. After learning artificial neural networks, don't stop there, guys: try to deploy some projects using web frameworks like Flask on the Heroku cloud, the AWS cloud, the Azure cloud, any cloud you want; again, you can check my deployment playlist. You should also try to Dockerize the whole model and the whole web application and deploy that; that would also be a very good idea.

04:57

Now, as I said, whatever problem statement you're solving with machine learning you can definitely also solve with an artificial neural network. There are also some concepts in artificial neural networks like weight initialization: how do you initialize the weights, what are the different parameters, and how can you perform hyperparameter tuning for an artificial neural network, like deciding how many hidden layers there are and how many neurons to take in each hidden layer. All of that can be done with hyperparameter tuning, and for that you can use Keras Tuner; there is also something called AutoKeras that you can use. Again, everything has been uploaded to my playlist, and I'll go through it at the end. Once you're comfortable with this, the libraries you can use are PyTorch, Keras, and TensorFlow. I would suggest you go with Keras, because it'll be very easy if you use Keras with TensorFlow 2.0, since Keras is already included in it. I've also been uploading PyTorch videos in another playlist.

05:51

Now coming to the second pillar: the convolutional neural network. Again, the important concepts like loss functions, optimizers (gradient descent, SGD, Adagrad, RMSprop, Adam), and activation functions will be used; the same things apply. On top of that, in a convolutional neural network you have an additional layer, which is called convolution, and you need to understand how this convolution actually works. And remember, guys, whenever we talk about convolutional neural networks we are talking about images and video frames; those kinds of inputs get processed with the help of a convolutional neural network. That is pretty simple, and that is the thing you need to understand. Almost everything else works like the ANN, because at the end of the convolutional neural network you'll see there is a fully connected layer, and with respect to that you can also solve an image classification problem. Now, in convolutional neural networks, please make sure you understand all the math concepts for the convolution layer. There you'll see something called filters: what filters are, what strides are, what the formula is, how the image gets reduced when you apply a 3x3 or 5x5 filter, and if your image is 224x224, what the next layer will be. All those explanations are given in my videos, guys; they are pretty important to understand.

07:10

Apart from that, guys, I've also started on transfer learning techniques and uploaded a lot of videos, including end-to-end projects with transfer learning models like VGG16, AlexNet, Inception V3, and ResNet. All those videos are available in my playlist; you just have to follow them. Now, as a beginner, just focus on these two things; don't worry about object detection as a beginner. But if you have around 2 to 3 years of experience, you should also start with object detection. In object detection, first of all start with RCNN, then Mask RCNN, then SSD (the single-shot detector), then the YOLO algorithm. YOLO is pretty handy for object detection, and it is also pretty fast compared to RCNN and Mask RCNN. Mask RCNN does object detection and also masking, and a lot of research is going on around it. So I've mentioned all these topics, guys, and again, I'm not telling you that as a fresher you should go straight into object detection. Start with convolutional neural networks, then go to transfer learning techniques. Once you are very good with transfer learning, try to do some end-to-end projects, building applications like image classification over a set of images: create a front end, a Flask framework, work with APIs, and do the deployment, the whole web application. Once you can build it, deploy it on different platforms like Heroku, AWS, and Azure, and just check how that particular application actually works. Once you're comfortable with all the transfer learning techniques, then move into object detection and try to cover the four or five algorithms I've mentioned, because they are some amazing algorithms.

08:56

Once this is done, that is your second pillar. The most popular pillar out of these, I'd suggest, is the NLP part, or you can say the RNN part. Most of the applications deal with sequence-to-sequence data: whenever your input data is sequential, like sentences or sales forecasting, for all those kinds of use cases you'll mostly be using RNNs. Again, you need to understand the base RNN, the recurrent neural network; on top of that you have LSTM, GRU, and bidirectional LSTM. You have word pre-processing techniques like word embeddings and Word2Vec, then you have encoders and decoders; these are sequence-to-sequence models. If you remember, guys, there is something called neural language translation; all those applications are actually built with encoders and decoders. You have to learn about attention models and their advantages, and then we come to Transformers and BERT. On Transformers I've taken a live session; BERT is pretty amazing too, and all the NLP transfer learning can actually be done with the help of BERT and Transformers. The libraries used here are called Hugging Face and Ktrain.

10:09

Now you might be wondering, "Krish, which videos have you not uploaded?" Guys, I still need to upload the object detection videos and the BERT theory videos, and with the help of Hugging Face and Ktrain I'll be showing you a lot of practical applications; only those two areas are remaining. All the other videos have been uploaded, and by the time you've prepared all these concepts I will have uploaded these topics too. I just need some amount of time: I've been preparing materials from Hugging Face, and it takes time because I really want to show you things that are not available in the Hugging Face documentation. I'll be training models live on my GPU, and I'll also show you some custom techniques with the Hugging Face and Ktrain libraries. I'm also planning to make some end-to-end deployment projects so that you can bring all these things together.

11:00

These are all pretty important; you have to go through the base again and again, and the favorite interview questions will revolve around those base topics: optimizers, loss functions, activation functions, and the workings of convolutional neural networks. They will not directly grill you on object detection; they'll just ask whether you know it and for a basic idea. Mostly they'll focus on the three base pillars, ANN, CNN, and RNN. They may ask about bidirectional LSTMs, but out of these the NLP part is becoming much more popular; second, I'd say, is the convolutional neural network, and then ANN. Apart from this, guys, within convolutional neural networks there is also a separate topic called computer vision. Try to learn computer vision in such a way that you can capture frames from your webcam and run detection on them through your convolutional neural networks: face recognition, different kinds of live image classification, drawing bounding boxes, and many more things. Those videos have also been uploaded to my playlist. So this, in short, is the whole road map to prepare for deep learning. Again, remember the important libraries: PyTorch, Keras, TensorFlow, then Hugging Face, then the Ktrain library.

12:20

Now let me show you where you can learn all this from my playlist. So let's go over here, guys. Yes, this is my whole deep learning playlist; you can see that I have uploaded somewhere around 53 videos, covering everything from AlexNet and transfer learning techniques up to Transformers, explained in depth. Yes, there is a plan to start uploading the practical videos very soon; I'm preparing the material, and as soon as it's ready it will be available to you all. The other thing you should follow is the Keras blog. It is pretty amazing: if you really want to know about losses, you can click through to the different loss functions. For regression the losses will be mean squared error and mean absolute error; I think we also covered these in machine learning. For classification problems you'll have binary cross-entropy, categorical cross-entropy, sparse categorical cross-entropy, Poisson loss, and all these different techniques. Apart from this you also have the model APIs, like the Sequential model, all of which I've explained in my playlist; I've included both practical and theoretical material. Here you can see how to perform word embedding, implementing word embeddings using Keras, developing your neural networks, the Kaggle fake news classifier, and stock price prediction and forecasting. Remember, with the help of RNNs you'll also be able to do forecasting, because they deal with sequences of data. And remember, guys, only two main things: this is the playlist you should follow, and this is the Keras blog page you should follow. The transfer learning techniques are also given: if I click on Keras Applications, here you'll see all the transfer learning models it supports; you can reuse any of them and train your model quickly. These are some state-of-the-art algorithms, and you can see how many parameters each one has. Recently, if you know about GPT-3, it has 175 billion parameters, I guess; now if you take the example of VGG16 here, it has 138 million parameters. Billion versus million, it's a huge gap. So it is said that GPT-3 is one of the state-of-the-art algorithms, and probably it should become available; I don't know whether it'll be available to everyone or not. I've requested the API, but I've not got it yet, so that I can start exploring things.

14:42

So this was the complete road map, guys. Now, one important thing: I may teach you any number of times, I may teach you different things, I may teach you everything as such, but the main thing is that you have to learn and try different things yourself. Again, I'm telling you, all the tutorials that I have uploaded may not by themselves be sufficient to make you an expert who can work on any computer vision or deep learning application in a company; it is up to you after getting this base strong. I've uploaded all the videos that will actually make your base strong; once that is strong, you can build any application on top of it. When I talk about Geoffrey Hinton, who came up with the idea of the backpropagation algorithm: now everyone is using it, researchers are using it, and they're creating amazing things. Similarly, the base for all of this has been uploaded for you. You just have to practice, do some additional things on top of it, work on audio classification, work on different things, and try to make some good applications of your own. It won't be possible for me to do it for you: if you have an idea and just ask me, "Krish, how do I do it?", well, I also have to explore it; I'm also a common man, guys. I may know the base, but if I really want to implement anything on top of it, I also have to put in that much effort. That is the reason I tell you that you have to practice; don't drop these things. Whenever you have an interest in something, start working on it; if you have some amazing ideas, start working on them, because the base is ready for you. So I hope you liked this particular video; please do subscribe to the channel if you haven't already, and I'll see you all in the next video. Have a great day, thank you, and bye-bye.


Related Tags
Deep Learning, AI Education, Neural Networks, Machine Learning, Data Science, Convolutional Nets, RNN Techniques, Transfer Learning, Deployment Guide, AI Trends