tensorflow custom object detection model | raspberry pi 4 tensorflow custom object detection

FREEDOM TECH
18 Jan 2022 · 24:23

Summary

TLDR: In this Freedom Tech YouTube tutorial, viewers learn to train a custom TensorFlow Lite object detection model using Google Colab. The video walks through installing the necessary software on a Raspberry Pi 4, capturing images, and labeling them with the LabelImg tool. It covers uploading the labeled data to Google Drive, accessing it from Google Colab, and training the model. Finally, it demonstrates detecting objects such as an ESP8266 and a Raspberry Pi Pico using the trained model, a mobile camera, and the IP Webcam app.

Takeaways

  • 😀 The video is a tutorial on training a custom object detection model in Google Colab and using it to detect a Raspberry Pi Pico and an ESP8266.
  • 🛠️ The tutorial requires installing Raspbian OS Bullseye on a Raspberry Pi 4 and the latest version of OpenCV.
  • 📱 The presenter uses a mobile camera with IP Webcam software to detect objects through the custom model.
  • 📁 The images for training the model are stored in a folder named 'img' on the Raspberry Pi.
  • 🏷️ LabelImg software is used to label the images for training the TensorFlow Lite model.
  • 📂 Images are organized into 'train' and 'validate' folders for the training process.
  • 🔗 A GitHub repository is mentioned for downloading necessary files and accessing a Google Colab file.
  • 📦 The folder containing the labeled images and XML files is zipped and uploaded to Google Drive.
  • 💾 The zip file is then accessed from Google Drive inside Google Colab and unzipped, and the model training process is initiated.
  • 🎯 The trained TensorFlow Lite model is exported and downloaded to the Raspberry Pi for object detection using the IP camera.

Q & A

  • What is the main objective of the video?

    -The main objective of the video is to guide viewers on how to train a custom model using Google Colab to detect objects such as a Raspberry Pi Pico and ESP8266.

  • Which devices are used in the demonstration?

    -The devices used in the demonstration are a Raspberry Pi Pico and an ESP8266.

  • What software is required to be installed on the Raspberry Pi 4 for this project?

    -The software required includes Raspbian OS Bullseye, OpenCV version 4.5.5, and TensorFlow Lite; in addition, the IP Webcam app runs on a mobile phone to stream its camera to the Pi.

  • How can viewers find instructions for installing the necessary software?

    -Viewers can find instructions for installing the necessary software by watching the creator's previous videos on installing Raspbian OS Bullseye, OpenCV, and TensorFlow Lite.

  • What is the purpose of the 'img' folder mentioned in the script?

    -The 'img' folder contains 70 images of Raspberry Pi Pico and ESP8266 that will be used to train the custom model.

  • What is LabelImg and why is it used in this project?

    -LabelImg is a graphical image annotation tool. It is used to label the images of the objects (Raspberry Pi Pico and ESP8266) to create bounding boxes for training the TensorFlow model.
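
For reference, LabelImg writes one Pascal VOC XML file per image. A minimal sketch for inspecting one of those files (the file path below is a hypothetical example) might look like this:

```python
# Read a single LabelImg annotation and print its labelled boxes.
# Assumes the default Pascal VOC XML layout that LabelImg writes next to each image.
import xml.etree.ElementTree as ET

tree = ET.parse('/home/pi/img/pico_01.xml')   # hypothetical file name
for obj in tree.getroot().iter('object'):
    name = obj.findtext('name')               # e.g. 'pico' or 'esp8266'
    box = obj.find('bndbox')
    print(name,
          box.findtext('xmin'), box.findtext('ymin'),
          box.findtext('xmax'), box.findtext('ymax'))
```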

  • How are the images organized after labeling with LabelImg?

    -The labeled images are organized into 'train' and 'validate' folders, with corresponding XML files, to be used for training and validating the TensorFlow model.
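
The video copies every labeled image and its XML file into both folders; a more conventional random split could be scripted, for example, with the hypothetical helper below (paths follow the folder names used in the video).

```python
# Hypothetical helper: copy ~80% of labelled images (plus their XML annotations)
# into train/ and the remainder into validate/.
import random
import shutil
from pathlib import Path

src = Path('/home/pi/img')
train_dir = Path('/home/pi/yt/train')
val_dir = Path('/home/pi/yt/validate')
train_dir.mkdir(parents=True, exist_ok=True)
val_dir.mkdir(parents=True, exist_ok=True)

images = sorted(src.glob('*.jpg'))
random.shuffle(images)
cut = int(0.8 * len(images))                  # 80% train, 20% validate

for i, img in enumerate(images):
    dest = train_dir if i < cut else val_dir
    shutil.copy(img, dest)                    # the image itself
    xml = img.with_suffix('.xml')
    if xml.exists():
        shutil.copy(xml, dest)                # its matching LabelImg annotation
```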

  • What is the purpose of zipping the 'yt' folder and uploading it to Google Drive?

    -The 'yt' folder is zipped and uploaded to Google Drive to make the labeled images and XML files accessible to Google Colab for training the TensorFlow model.

  • How is the TensorFlow model trained in Google Colab?

    -The TensorFlow model is trained in Google Colab by using the TFLite Model Maker and the training data from the uploaded 'yt' folder.
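
The notebook cells are not shown line by line in the video, so the snippet below is only a sketch consistent with the standard TFLite Model Maker object-detection workflow; the EfficientDet-Lite0 spec and the hyperparameters are assumptions.

```python
# Sketch of the Colab training cells (TFLite Model Maker object detection API).
from tflite_model_maker import model_spec, object_detector

labels = ['esp8266', 'pico']                  # the two object labels used in the video
spec = model_spec.get('efficientdet_lite0')   # assumed model spec

# Images and their Pascal VOC XML files sit side by side in train/ and validate/.
train_data = object_detector.DataLoader.from_pascal_voc(
    'freedomtech/train', 'freedomtech/train', labels)
val_data = object_detector.DataLoader.from_pascal_voc(
    'freedomtech/validate', 'freedomtech/validate', labels)

model = object_detector.create(
    train_data,
    model_spec=spec,
    batch_size=4,             # assumed hyperparameters; the video does not show them
    epochs=20,
    train_whole_model=True,
    validation_data=val_data,
)

model.evaluate(val_data)                                         # COCO-style AP metrics
model.export(export_dir='.', tflite_filename='android.tflite')  # file later moved to the Pi
```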

  • What is the final output of the training process?

    -The final output of the training process is a TensorFlow Lite model that can detect the specified objects (Raspberry Pi Pico and ESP8266).

  • How is the trained TensorFlow Lite model used to detect objects in real-time?

    -The trained TensorFlow Lite model is used in conjunction with IP Webcam software and OpenCV to detect objects in real-time through a mobile camera feed.
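
The detect_ipcam.py script used in the video is not shown in full, so the following is only a rough, self-contained sketch of the same idea: read frames from the IP Webcam stream, run the exported android.tflite model, and draw the detections. The stream URL, threshold, and output-tensor ordering are assumptions to verify against your own setup.

```python
# Rough sketch: real-time detection from an IP Webcam stream with a TFLite model.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

STREAM_URL = 'http://192.168.0.102:8080/video'  # IP Webcam video endpoint (your phone's IP)
LABELS = ['esp8266', 'pico']
THRESHOLD = 0.5

interpreter = Interpreter(model_path='android.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
in_h, in_w = inp['shape'][1], inp['shape'][2]

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (in_w, in_h))
    interpreter.set_tensor(inp['index'], np.expand_dims(resized, 0).astype(inp['dtype']))
    interpreter.invoke()
    # Output tensor order differs between exported models; inspect `outs` to confirm
    # which index holds boxes, class ids and scores before trusting this mapping.
    boxes = interpreter.get_tensor(outs[0]['index'])[0]
    classes = interpreter.get_tensor(outs[1]['index'])[0]
    scores = interpreter.get_tensor(outs[2]['index'])[0]
    h, w = frame.shape[:2]
    for box, cls, score in zip(boxes, classes, scores):
        if score < THRESHOLD:
            continue
        ymin, xmin, ymax, xmax = box
        p1, p2 = (int(xmin * w), int(ymin * h)), (int(xmax * w), int(ymax * h))
        cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
        label = LABELS[int(cls) % len(LABELS)]  # class indexing may be 0- or 1-based
        cv2.putText(frame, f'{label} {score:.2f}', p1,
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow('detections', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```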

Outlines

00:00

💻 Introduction to Training a Custom Model with TensorFlow Lite

The video begins with a warm welcome to the Freedom Tech YouTube channel. The host outlines the session's agenda, which includes training a custom model using Google Colab to detect objects like a Raspberry Pi Pico and ESP8266. The host encourages viewers to subscribe if they find the content helpful. The practical demonstration involves setting up a Raspberry Pi 4 with Raspbian OS Bullseye and installing OpenCV and TensorFlow Lite. The host also mentions using a mobile camera with IP Webcam software to detect the objects through the custom model. The video emphasizes the need to watch previous videos for installation instructions and to prepare the environment for model training.

05:04

🖼️ Preparing Images and Using LabelImg Software

The host demonstrates how to prepare images for training the TensorFlow Lite model by capturing 70 images of the Raspberry Pi Pico and ESP8266 using a mobile camera. These images are saved in a folder named 'img'. The next step involves using LabelImg software to label these images. The host guides viewers on installing LabelImg on Raspbian OS Bullseye by providing a GitHub repository link. The process includes cloning the repository and moving the necessary files to the desktop. The host then walks through the steps of using LabelImg to label the images by creating bounding boxes around the objects and assigning them labels.

10:08

📁 Organizing Data and Zipping the Dataset

After labeling the images, the host explains how to organize the data. This involves creating a 'freedom tech' folder with 'train' and 'validate' subfolders. The labeled images and their corresponding XML files are then copied into these folders. The host proceeds to zip the 'freedom tech' folder, ensuring that all the labeled images and XML files are compressed. The zipped file is then moved to Google Drive for easy access and use in Google Colab for the next steps of the model training process.
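
The video uses `sudo apt install zip` followed by `zip -r freedomtech.zip freedomtech/*`; if you would rather stay in Python, shutil can produce the same archive. A small sketch, assuming the dataset folder lives at /home/pi/freedomtech:

```python
# Python equivalent of the zip command used in the video.
import shutil

# Creates /home/pi/freedomtech.zip containing the train/ and validate/ subfolders.
shutil.make_archive('/home/pi/freedomtech', 'zip',
                    root_dir='/home/pi', base_dir='freedomtech')
```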

15:09

🔄 Uploading Data to Google Colab and Setting Up the Environment

The video continues with the host showing how to upload the zipped dataset to Google Drive: the 'freedomtech.zip' file is uploaded through the browser. While it uploads, the host opens Google Colab, uploads the notebook file from the cloned repository, and sets up the runtime environment for model training. This includes selecting a GPU runtime, installing the TensorFlow Lite Model Maker and TensorFlow Lite Support packages, and mounting Google Drive to access the uploaded dataset.
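
The corresponding notebook cells are roughly as follows; this is a sketch only, since the notebook is not walked through line by line in the video, and it assumes freedomtech.zip sits at the top level of My Drive (the packages are installed in a separate cell with `pip install tflite-model-maker tflite-support`).

```python
# Sketch of the Colab setup cells: mount Google Drive and unzip the dataset.
import zipfile
from google.colab import drive

drive.mount('/content/gdrive')   # authorise with the account that holds freedomtech.zip

# Assumed path: the zip was uploaded to the top level of "My Drive".
with zipfile.ZipFile('/content/gdrive/MyDrive/freedomtech.zip') as zf:
    zf.extractall('/content')    # yields /content/freedomtech/train and .../validate
```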

20:10

🚀 Training the Model and Evaluating its Performance

With the environment set up, the host begins the model training process in Google Colab. This involves unzipping the dataset, specifying the object labels, and initiating the training process. After the model is trained, it is evaluated using the validation data. The host then demonstrates how to export the trained model as a TensorFlow Lite model. The final steps include downloading the model to the Raspberry Pi and moving it to the appropriate TensorFlow Lite folder. The host concludes the video by showing a live demonstration of the model detecting the ESP8266 and Raspberry Pi Pico using a mobile camera and IP Webcam software.
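
After the export cell runs, android.tflite sits in the Colab working directory. The video downloads it through the file-browser right-click menu; an equivalent programmatic route (not the one shown on screen) is Colab's files helper:

```python
# Pull the exported model from the Colab runtime down to the local machine.
from google.colab import files

files.download('android.tflite')   # the file produced by model.export(...)
```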

📹 Final Demonstration and Conclusion

In the final segment, the host conducts a live test of the custom TensorFlow Lite model. They run a Python script that utilizes the model to detect the ESP8266 and Raspberry Pi Pico through a mobile camera feed. The successful detection of the objects confirms the effectiveness of the trained model. The host expresses hope that viewers have learned from the video and looks forward to the next session, bidding farewell to the audience.

Keywords

💡Google Colab

Google Colab is a free cloud-based Jupyter notebook environment that requires no setup and runs entirely in the cloud. It allows users to write and execute Python code via a web browser, which is useful for machine learning and data analysis. In the video, Google Colab is used to train a custom model with TensorFlow Lite, indicating its role in facilitating machine learning workflows without the need for local computational resources.

💡Raspberry Pi Pico

The Raspberry Pi Pico is a microcontroller board developed by Raspberry Pi Foundation. It is a compact, low-cost device that can be programmed to perform various tasks. In the video, the Pico is one of the objects to be detected using the custom model, showcasing its application in object detection scenarios.

💡ESP8266

ESP8266 is a low-cost Wi-Fi microcontroller chip that can be programmed for various network applications. It is used in the video as another object to be detected by the custom model, demonstrating its relevance in IoT projects and how it can be integrated with machine learning models for detection purposes.

💡TensorFlow Lite

TensorFlow Lite is a lightweight solution for machine learning on mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size. In the video, TensorFlow Lite is used to create a custom model for object detection, highlighting its utility for deploying machine learning models on resource-constrained devices.
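
As a quick illustration of what on-device inference looks like, the exported model can be loaded on the Pi with the TensorFlow Lite interpreter. A minimal sketch, assuming the tflite-runtime package is installed and android.tflite is in the current directory:

```python
# Minimal sketch: load the exported model with the TensorFlow Lite runtime on the Pi.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='android.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])  # e.g. [1, 320, 320, 3] for EfficientDet-Lite0
```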

💡Object Detection

Object detection is a computer vision technology that deals with detecting and identifying multiple objects in an image or video. In the context of the video, object detection is the main goal, where the custom model is trained to detect the ESP8266 and Raspberry Pi Pico, illustrating the practical application of machine learning in recognizing specific objects.

💡Raspbian OS Bullseye

Raspbian OS Bullseye is the Bullseye release of the Debian-based operating system for Raspberry Pi boards (distributed today as Raspberry Pi OS). The video mentions installing Raspbian OS Bullseye on a Raspberry Pi 4, which serves as the base for preparing the dataset and running the custom TensorFlow Lite model.

💡OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. It provides a common infrastructure for computer vision applications and is used extensively in image and video analysis. In the video, OpenCV is installed on Raspbian OS Bullseye to support the development of the object detection model.

💡LabelImg

LabelImg is a graphical image annotation tool that is widely used in machine learning to label images for object detection. It is mentioned in the video as a tool to create bounding boxes around the objects in images, which is a crucial step in preparing the dataset for training the TensorFlow Lite model.

💡IP Webcam

IP Webcam is an application that can be installed on Android devices to turn them into a network surveillance camera. In the video, the presenter uses IP Webcam to capture images from a mobile device, which are then used to test the custom TensorFlow Lite model, demonstrating a real-world application of the model.

💡Model Training

Model training is the process of teaching a machine learning model to make predictions or decisions based on input data. The video describes the steps to train a custom TensorFlow Lite model using Google Colab, emphasizing the iterative nature of model development where data is used to improve the model's accuracy.

💡Google Drive

Google Drive is a file storage and synchronization service developed by Google. It is used in the video to store and share the zipped dataset and trained model files, illustrating its utility in cloud storage and collaboration for machine learning projects.

Highlights

Training a custom model using Google Colab for TensorFlow Lite object detection.

Object detection with Raspberry Pi Pico and ESP8266 using custom-trained models.

Setting up a Raspberry Pi 4 with Raspbian OS Bullseye for machine learning tasks.

Installation of OpenCV 4.5.5 on Raspbian OS Bullseye to support object detection.

Use of mobile camera through IP webcam software for capturing training images.

Labeling images for custom object detection using LabelImg software on Raspbian OS.

Organizing training and validation datasets for custom object detection models.

Cloning a GitHub repository and using Google Colab files for the training process.

Uploading labeled data to Google Drive and preparing it for model training.

Training the TensorFlow Lite model using Google Colab with GPU support.

Evaluating the trained model with validation data to assess performance.

Exporting the trained model as a TensorFlow Lite model for further use.

Running object detection on Raspberry Pi with TensorFlow Lite and custom models.

Integration of IP webcam software and Python scripts for real-time object detection.

Successfully detecting Raspberry Pi Pico and ESP8266 objects with the custom model.

Transcripts

00:00

Hello friends, and welcome to the YouTube channel Freedom Tech. In this session we are going to learn how to train our own custom model with the help of Google Colab, and then we are going to detect our objects. In this scenario I have a Raspberry Pi Pico and an ESP8266, and we are going to detect the ESP8266 and the Pico with the help of our own custom model. Before we move to the practical part, friends, if you learn something from our videos please consider subscribing to the channel. Thank you so much, and let's get started.

So friends, we are going to train our own custom model for TensorFlow Lite with the help of Google Colab. For that, first of course we need to install Raspbian OS Bullseye on the Raspberry Pi 4. I have already created a video on how to install Raspbian OS Bullseye; this is the 64-bit version, and if I run uname -m you can see aarch64, which means it is the 64-bit version. Then I have already installed the latest OpenCV, version 4.5.5; I have already created a video on how to install the latest OpenCV on Raspbian OS Bullseye, so watch that video and install OpenCV 4.5.5. Then we need TensorFlow Lite. I have also created a video on how to install TensorFlow Lite on the 64-bit Raspbian OS, so just watch the video and install TensorFlow Lite. Then I am going to use my mobile camera with the help of the IP Webcam software, and we are going to detect our objects with the custom model; in this scenario I am going to detect my ESP8266 and the Pico with the help of the mobile IP Webcam software. So I have already created videos on how to install OpenCV, how to install TensorFlow Lite, and how to use the mobile camera with IP Webcam for TensorFlow Lite; watch those three videos and set up the basic configuration for this particular video on training your own custom model for TensorFlow Lite. I have already installed all of these things.

02:18

So we are ready now, and first we need our images. I have already captured the images with my mobile camera, so I'm going to show you: if I open the file manager, I have created a folder named img. As you can see, this is the img folder, and inside it I have 70 images of the Raspberry Pi Pico and the NodeMCU ESP8266; as you can see, the ESP8266 and the Pico board. I have saved them inside /home/pi/img. So this is the data we are basically going to train on.

Then we need the labelImg software, and we are going to install labelImg on Raspbian OS Bullseye. For that I have created a GitHub repository; you need to download the repository, and inside it we have a Google Colab file and a text file. I will mention the link: simply copy and paste it, open the link, click Code, and copy the clone URL. Minimize the browser; I am going to clone the folder inside Downloads, so cd Downloads to go into the Downloads folder, then sudo git clone and paste the link, and just hit Enter. It will clone the folder inside Downloads. I'll clear the screen and run ls, and as you can see this is our folder. Simply cd into the tensorflow folder, hit Enter, run ls, and this is our file, basically the text file that we want to move onto the desktop. So sudo mv the labelImg text file to /home/pi/Desktop, hit Enter, and as you can see we now have the text file on our desktop.

If I open the text file, you can see I have written down how to install labelImg on Raspbian OS Bullseye. The first command we want to run installs the pyqt5-dev-tools package, so I'll run the cd command, clear the screen, and I am now inside /home/pi. Here we run the first command: simply hit Enter and it will install the PyQt5 package on our 64-bit Raspbian OS Bullseye; I have already installed it. Then we need to install our main package, which is labelImg: copy the command, paste it, and just hit Enter. It will install labelImg on our Raspbian OS; I have already installed it, and as you can see the requirement is already satisfied. That's it.

05:14

Now we want to start the labelImg software inside Raspbian OS. Run the command like this: type lab and press the Tab key, and it will auto-complete the command to labelImg. You can run the command with sudo, or you can run it as the pi user; I am going to run it as a normal user. So type labelImg and hit Enter, and it will open labelImg inside Raspbian OS. Then click Open Dir and select the folder where we saved our images, /home/pi/img in my case, and click Choose; it will open our images, as you can see. Then click Change Save Dir and set the same folder path, /home/pi/img, the same path where we have our images, and click Choose. So we have successfully selected the Open Dir path and the Change Save Dir path; now we are ready.

Next, click Create RectBox, and then we can draw a bounding box around our object. This is our Pico, so just enter the name pico and click OK. Then our second object: Create RectBox again, draw the box around the ESP8266, and enter esp8266. So we have our two objects, pico and esp8266. Click Save, click Next Image, and the next image has only the Pico board, so draw the box, choose pico, click OK, Save, and Next Image. This is the way you need to label your images with the labelImg software. I have already labeled all the images, so I am going to simply close the software. I hope so far so good.

Now I am going to show you the images I have already labeled. As you can see, this is the image and this is the XML file; if I open the XML file with a text editor, you can see it is our XML file, and inside it we have our object names: this is the pico and this is the esp8266. So this is the way you first label your images with the labelImg software. That's it. Now the next step: we need to create a folder.

08:12

This is our file manager, and as you can see I have already created a freedom tech folder. You simply right-click in the file manager and create a new folder; I'll name it yt, let's say a yt folder, and click OK. That is how you create the folder. So I have created the yt folder; then open the yt folder and inside it create two folders: the first folder is train, click OK, and the next one is validate. That's it. As I told you, first we create the folder, in this case the yt folder, and inside it we create the train and validate folders.

Now, where did we save our labeled images? In this case I saved the images we just labeled inside the img folder. Simply press Ctrl+A to select all the images, right-click, click Copy, go to the yt folder, open train, and paste all the images and the XML files there; as you can see, they're copied. Then do the same inside validate: just paste. So this is how you create the folder, create the train and validate folders inside it, and paste all the images you labeled into those train and validate folders.

Now we want to simply zip this yt folder. I'm going to minimize the browser; in this case I have already prepared my own folder, the freedom tech folder, and inside it I have the train and validate folders, and inside train and validate, as you can see, I have already saved the images with the XML files. Now we want to zip the folder, and for that we are going to run a command. I'll open our text file; we need to run this one command: zip -r, then the zip file name, then our folder name. In this case I have the freedom tech folder. Let me show you: first open the terminal, then run sudo apt install zip, which installs the zip package; I have already installed it. Then we run the zip command, so simply copy it and paste it into the terminal. And remember one thing, friends: I have a freedom tech folder here, so you need to put your own folder name. If, as I told you, you created a folder called yt, then you mention the yt folder here. So remember, you give your folder name, then your folder's .zip file name, then /*, which means it will zip all the files we have inside the freedom tech folder. I'll simply hit Enter, and as you can see it is now zipping all the content inside the freedom tech folder. So friends, we have successfully zipped our folder: if I run ls, here it is, the zip file, freedomtech.zip.

12:21

Now we want to move this file into our Google Drive. I'll close the text file and open the browser. First you need to log in with your Gmail account to access your own Google Drive, so I'll search for Google Drive, open it, and sign in; I have already signed in to my Drive. Here it is: as you can see, inside my Google Drive I had already copied a freedomtech.zip file, but I am going to simply remove it and then upload the freedomtech.zip file again. Right-click, File upload, go to the pi folder, search for our file freedomtech.zip, click Open, and now it is uploading our zip file to Google Drive.

Meanwhile, we want to open Google Colab. Open Google Colab, click Upload, then Choose File; we cloned our file inside Downloads using our repository, and as you can see this is the folder that contains our Google Colab file, so click Open and it will open the notebook inside Google Colab. As you can see, we now have our file inside Google Colab, and on Google Drive the upload has also completed. Now open the Google Colab notebook (the file is already uploaded), go to Runtime, then Change runtime type, select GPU, and click Save. Now we want to start working through the notebook, so first click Connect; as you can see, we need to click Connect and wait while it is connecting.

15:00

So as you can see, we have successfully connected. Now we are ready, and we are going to run our first cell: just click on it, and it will install the TFLite Model Maker and TFLite Support packages; as you can see, they're installing. We have successfully installed the packages; scroll down, then run the cell that imports the required packages. We have successfully imported the packages; scroll down again.

The next step is basically for mounting our Google Drive. Just click the cell, and it will ask you to permit this notebook to access your Google Drive files; simply click Connect to Google Drive. It will then open a window where you need to select your Gmail ID. I'm going to select mine, because I opened my Google Drive with this Freedom Tech email ID, then scroll down and click Continue. That's it: we have successfully mounted our Drive.

Now scroll down, and we are going to unzip freedomtech.zip: click the next cell and it will unzip the folder. If you open the file browser you will see the freedom tech folder with the train and validate folders, and if you expand them we basically have all the images, as you can see. So we have successfully unzipped the freedomtech.zip file.

Now scroll down to the next step. Here we want to enter our label names: in this case I have labeled esp8266 and pico, so mention your object label names, esp8266 and then pico, and the same again in the second place, esp8266 and pico. Remember, you need to enter your own object label names in this way. Then simply run the cell; all good, as you can see, a green tick mark. Scroll down and run the next cell; as you can see, this step is also completed. Scroll down again, and now we are basically ready to train our model. As you can see, the step is "Train the TensorFlow model with the training data", so we simply click the cell, and from here we are going to train on our images. It will take some time, friends.

18:04

So friends, as you can see, the training process has just completed; we have finished our training. The next step is to evaluate the model with the validation data, so I'll simply click the cell, and as you can see the model evaluation process has started. And as you can see, the model evaluation has also completed.

The next step is to export it as a TensorFlow Lite model, so we simply run that cell, and it will create android.tflite; we are basically exporting our TensorFlow Lite model, and the model name is android.tflite. So friends, we now have our model ready; exporting as a TensorFlow Lite model was the last step. If you click the folder icon, this is our model: simply right-click it, click Download, and it will download the model onto our Raspbian OS Bullseye machine.

So as you can see, we have successfully downloaded our model. I'll minimize the browser and open the terminal. Run cd Downloads, because our model was downloaded into the Downloads folder; hit Enter, and if I run ls you can see android.tflite, which is our custom model. Clear the screen; we need to move the model inside our TensorFlow folder, so: sudo mv android.tflite into /home/pi and then the tensorflow-lite-bullseye folder. This is the folder we already cloned in the video where I explained how to install TensorFlow Lite on Bullseye; it's the same folder, and if you don't know how to install it, watch that video and install it first. So we move our android.tflite file into /home/pi, then the tensorflow-lite-bullseye folder, then inside that the examples folder, then lite, then examples again, then object_detection, and then our raspberry_pi folder; we need to move android.tflite inside the raspberry_pi folder. Simply hit Enter, clear the screen, then cd into the folder: cd into tensorflow-lite-bullseye, then examples, lite, examples, object_detection, and raspberry_pi. Clear the screen, run ls, and as you can see we have successfully moved our model inside the raspberry_pi folder.

21:20

Now we are ready, and I am going to run our script with our model. I have the file here, as you can see: detect_ipcam.py. This is the Python script for running the IP cam with TensorFlow Lite. What I need to do is start the IP Webcam software on the mobile, then enter the IP address inside detect_ipcam.py. So I'll minimize the terminal, go to Menu, Programming, and open the Thonny Python IDE. I'll close the current file, go to Open, and navigate inside /home/pi, then our tensorflow-lite-bullseye folder, click on it, then examples, then lite, then examples, then object_detection, and here is our raspberry_pi folder. Click on it, and I need to open the detect_ipcam.py file, because we need to change the IP address. Open it, and this is the IP address; so I am going to start the IP Webcam software on the mobile and then enter my IP address here.

So friends, I have started the IP camera, and the IP address is the same, 192.168.0.102. Remember, you need to enter your own IP Webcam software's IP address here. We are ready, and I am going to start the script: close the Thonny editor, open the terminal, and remember that we are now inside the tensorflow-lite-bullseye/examples/lite/examples/object_detection/raspberry_pi folder, and here we are going to run our custom model. Our file name is detect_ipcam.py, so: sudo python3 detect_ipcam.py --model and then our model name, which is android.tflite. Now we simply hit Enter and the camera frame will start. Remember, I have already started the IP Webcam software on my mobile, so if I just hit Enter...

So friends, as you can see, we have successfully detected our ESP8266 and the Pico. As you can see here: esp8266 and pico, with the help of the mobile camera using IP Webcam, OpenCV, and TensorFlow. So with this as a reference you can create your own custom model for detecting objects with TensorFlow Lite on Raspbian OS on the Raspberry Pi 4. I hope you learned something from this video. We'll meet in the next video; until then, thank you, take care, and bye.

Related Tags
Machine Learning, TensorFlow Lite, Raspberry Pi, Custom Model, Object Detection, Google Colab, ESP8266, Pico, OpenCV, Python Script