Building an Object Detection App with Tensorflow.JS and React.JS in 15 Minutes | COCO SSD

Nicholas Renotte
18 Nov 2020 · 15:55

Summary

TL;DR: This video tutorial guides viewers through building a real-time object detection app using React and TensorFlow.js. The host introduces a pre-built template on GitHub to expedite the setup, then demonstrates how to integrate the COCO SSD model from TensorFlow.js for object detection. The tutorial covers setting up the React app, capturing webcam images, processing detections, and rendering them on-screen. It concludes with a live demo showcasing the app's ability to detect and highlight objects in real-time, with an option to switch to a 'party mode' for a more dynamic visual effect.

Takeaways

  • 😀 The video is a tutorial on building a real-time object detection app using React and TensorFlow.js.
  • 🛠️ A computer vision template is provided on GitHub to help kickstart the development of the object detection app.
  • 📚 The tutorial covers three main topics: accessing the computer vision template, setting up and coding with COCO SSD, and making real-time detections with the app and webcam.
  • 🔍 The app uses a pre-built TensorFlow.js model that utilizes COCO SSD for real-time object detection.
  • 💻 The tutorial guides through setting up a React app, capturing images from a webcam, and rendering detections to the screen.
  • 🔑 The '@tensorflow-models/coco-ssd' package is a key dependency, as it provides the pre-trained COCO SSD model used in the project.
  • 🎨 The 'drawRect' function is created to visually represent the detected objects by drawing rectangles and text on the canvas.
  • 🔄 The app starts with importing the model, loading the network, making detections, and then drawing the results on the canvas.
  • 🎉 The tutorial also suggests enhancing the app with a 'party mode' feature that changes the color of the detections for a more dynamic effect.
  • 🔍 The detections are logged in the console, showing details like bounding box coordinates, class of the object, and the confidence score.
  • 📈 The tutorial concludes with a demonstration of the app detecting various objects in real-time using the webcam.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is building a real-time object detection app using React JS and TensorFlow.js.

  • What are the three key things the video covers?

    -The video covers accessing the computer vision template, setting up and coding with COCO SSD, and making real-time detections using the app and webcam.

  • What is the purpose of the GitHub template mentioned in the video?

    -The GitHub template is designed to kickstart the development of a real-time object detection app, providing a foundation to build upon.

  • Which pre-built model is used for object detection in the video?

    -The video uses the pre-built TensorFlow.js model that utilizes COCO SSD for object detection.

  • How does the video guide the setup of the React JS app?

    -The video guides the setup by cloning the React computer vision template from GitHub and using the 'create-react-app' tool.

  • What is the role of the '@tensorflow-models/coco-ssd' package in the project?

    -The '@tensorflow-models/coco-ssd' package provides the pre-trained COCO SSD model from TensorFlow.js, which is used for making object detections.
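A minimal sketch of the import-and-load step follows. In the real app you would run `npm install @tensorflow/tfjs @tensorflow-models/coco-ssd` and import the package directly; here `cocoSsd` is stubbed so the snippet runs standalone, and the returned prediction values are illustrative.

```javascript
// In the app: import * as cocoSsd from "@tensorflow-models/coco-ssd";
// Stubbed here so the sketch is self-contained and runnable:
const cocoSsd = {
  load: async () => ({
    // detect() resolves to an array of predictions for one frame
    detect: async (video) => [
      { bbox: [10, 20, 100, 50], class: "person", score: 0.92 },
    ],
  }),
};

async function runApp() {
  const net = await cocoSsd.load(); // load the pre-trained network once
  const predictions = await net.detect("webcam-frame"); // stand-in for the video element
  console.log(predictions[0].class);
}

runApp();
```

The real `cocoSsd.load()` fetches the model weights over the network, which is why the app awaits it once at startup rather than on every frame.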

  • How does the video describe the process of capturing images from the webcam?

    -The video describes the process as streaming whatever is in the webcam's frame to the TensorFlow.js model for object detection.

  • What is the purpose of the 'drawRect' function in the utilities.js file?

    -The 'drawRect' function is used to draw rectangles and text on the canvas, representing the detected objects from the webcam feed.
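The function described above can be sketched as follows. The styling values (green, 18px Arial) follow the video's walkthrough, while the mock canvas context used to exercise it at the bottom is purely illustrative.

```javascript
// Sketch of the drawRect utility: draw a labeled rectangle per detection.
function drawRect(detections, ctx) {
  detections.forEach((prediction) => {
    const [x, y, width, height] = prediction.bbox; // bounding box in pixels
    const text = prediction.class;                 // e.g. "person"

    ctx.strokeStyle = "green";
    ctx.fillStyle = "green";
    ctx.font = "18px Arial";

    ctx.beginPath();
    ctx.fillText(text, x, y);      // label at the box's top-left corner
    ctx.rect(x, y, width, height); // outline the detected object
    ctx.stroke();                  // paint the path to the canvas
  });
}

// Exercise it with a mock context that records which methods were called:
const calls = [];
const mockCtx = new Proxy(
  {},
  {
    get: (_, prop) => (...args) => calls.push(prop),
    set: () => true, // swallow the style assignments
  }
);
drawRect([{ bbox: [5, 5, 40, 30], class: "cup", score: 0.8 }], mockCtx);
console.log(calls.join(",")); // beginPath,fillText,rect,stroke
```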

  • How does the video demonstrate updating the app.js file?

    -The video demonstrates updating the app.js file by importing the required model, loading the network, making detections, and using the 'drawRect' function to visualize the results.

  • What additional feature does the video suggest for enhancing the app?

    -The video suggests updating the drawing function to change the color of the detections dynamically, creating a 'party mode' effect.
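One way to implement the dynamic color is a helper along these lines; the exact expression used in the video may differ, so treat this as an assumption.

```javascript
// Pick a random 6-digit hex color each time a box is drawn ("party mode").
function randomColor() {
  // 16777215 === 0xFFFFFF; padStart keeps small values 6 digits long
  return (
    "#" + Math.floor(Math.random() * 16777215).toString(16).padStart(6, "0")
  );
}

// In drawRect, replace the fixed green with a fresh color per detection:
//   const color = randomColor();
//   ctx.strokeStyle = color;
//   ctx.fillStyle = color;
console.log(/^#[0-9a-f]{6}$/.test(randomColor())); // true
```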

  • How can viewers access the code and resources mentioned in the video?

    -Viewers can access the code and resources by cloning the provided GitHub repository or downloading the code from the custom object detection React JS TensorFlow repo link provided in the video description.

Outlines

00:00

🛠️ Building a Real-Time Object Detection App with React and TensorFlow.js

The video script introduces a project to build a real-time object detection application using React and TensorFlow.js. The presenter outlines the process, starting with setting up a React app using a template available on GitHub, which accelerates the development. The app will utilize the COCO SSD model from TensorFlow.js for object detection. The script details the steps to clone the repository, set up the project in an IDE like VS Code, and install necessary dependencies, particularly the pre-trained COCO SSD model. The presenter also mentions an alternative for those who prefer to download and run the code without coding.

05:01

🔍 Setting Up and Coding with COCO SSD for Object Detection

This paragraph delves into the technical setup for the object detection app. It covers the installation of packages via npm install and the subsequent steps to update the app.js file. The presenter guides the audience through importing the COCO SSD model, loading the network, and setting up the detection function. The script explains how to use the webcam to capture images and pass them to the TensorFlow.js model for detection. It also includes the initial steps to start the React app and observe the detection results in the console.
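The detect step described here boils down to the pattern below. The network and webcam objects are stubbed so the sketch runs on its own; in the app they come from `cocoSsd.load()` and the webcam ref, and the repeat interval is the template's choice.

```javascript
// Stub network standing in for `await cocoSsd.load()`:
const net = {
  detect: async () => [{ bbox: [0, 0, 64, 48], class: "person", score: 0.9 }],
};
const webcamVideo = {}; // stand-in for the webcam's <video> element

// One detection pass: grab the current frame's predictions and log them.
async function detect(net) {
  const obj = await net.detect(webcamVideo);
  console.log(obj.length, obj[0].class);
  return obj;
}

detect(net);
// The app repeats this continuously, e.g. setInterval(() => detect(net), 10)
```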

10:02

🎨 Developing the Drawing Utility for Visual Detections

The script continues with the development of a drawing utility function named 'drawRect', which is designed to visualize the detection results on the webcam canvas. It explains how to create and style the function to draw rectangles and text based on the detection predictions. The function iterates over each detection, extracting relevant variables such as class, score, and bounding box coordinates. The presenter also demonstrates how to integrate this function into the app.js file to display the detections on the screen in real-time.

15:04

🥳 Enhancing Detections with 'Party Mode' and Finalizing the App

The final paragraph describes the enhancement of the detection app with a 'party mode' feature, which changes the color of the detection boxes for a more dynamic visual effect. The presenter shows how to modify the drawing function to implement this feature. After updating the function, the script concludes with running the app to see the real-time detections with the new visual style. The video ends with a summary of the steps taken and an invitation for viewers to share their thoughts and experiences with building their own real-time object detection apps.


Keywords

💡Real-time object detection

Real-time object detection refers to the process of identifying and locating objects within an image or video frame in real-time, as the video is being captured or streamed. In the video's context, it is the main theme where the creator is building an app to perform this task using TensorFlow.js and React.js. The script mentions setting up a React app to capture images from a webcam and using a TensorFlow.js model to detect objects, showcasing this concept in action.

💡React.js

React.js is a popular JavaScript library used for building user interfaces, particularly single-page applications. It is mentioned in the script as the foundational technology for the object detection app. The creator uses 'create-react-app' to set up the project, emphasizing its role in facilitating the development process.

💡TensorFlow.js

TensorFlow.js is a JavaScript library for training and deploying machine learning models in the browser or on Node.js. The script discusses leveraging TensorFlow.js for its object detection capabilities, specifically using a pre-built model that employs the COCO SSD algorithm for real-time detection.

💡COCO SSD

COCO SSD refers to a Single Shot MultiBox Detector (SSD) model trained on the COCO (Common Objects in Context) dataset, available pre-built among the TensorFlow.js models. It is highlighted in the script as the method the creator uses to perform real-time object detection within the app, allowing for the identification of various objects in the webcam's frame.

💡Webcam

A webcam is a digital camera that captures images or video streams and is used in the script to feed the real-time object detection app. The video mentions capturing images from the webcam to stream to the TensorFlow.js model, which then processes these images for object detection.

💡Computer vision

Computer vision is an interdisciplinary field that deals with how computers can gain high-level understanding from digital images or videos. In the script, the creator is building an app that utilizes computer vision techniques to detect objects in real-time, as evidenced by the use of a template and TensorFlow.js for this purpose.

💡Model

In the context of machine learning and the script, a model refers to a trained algorithm that makes predictions or decisions based on input data. The creator imports and loads a pre-trained COCO SSD model from TensorFlow.js to perform object detection within the app.

💡Detections

Detections in the script refer to the output of the object detection process, where the model identifies and locates objects within the video frame. The creator discusses making detections using the app and logs these in the console, which are then used to draw bounding boxes around detected objects.
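A console-logged detection has roughly the shape below; the field names match the COCO SSD output described in the transcript, while the concrete values are illustrative.

```javascript
// One entry of the predictions array returned by net.detect(video):
const detection = {
  bbox: [126, 45, 380, 420], // [x, y, width, height] in pixels
  class: "person",           // detected object's label
  score: 0.87,               // confidence between 0 and 1
};

const [x, y, width, height] = detection.bbox;
console.log(`${detection.class} at (${x}, ${y}), size ${width}x${height}`);
// person at (126, 45), size 380x420
```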

💡Bounding box

A bounding box is a rectangular box that outlines an object within an image or video frame, used to indicate the location of the object. The script describes the creation of a 'drawRect' function to draw these bounding boxes around detected objects on the webcam canvas.

💡Canvas

In web development, the canvas refers to an HTML element used to draw graphics via scripting (usually JavaScript). The script mentions using a canvas to render the detections by drawing rectangles and text onto it, visualizing the object detection results.

💡Utilities.js

Utilities.js is a JavaScript file created in the script to contain helper functions for the app, such as 'drawRect' for drawing bounding boxes on the canvas. This file is an example of organizing code into modules to handle specific tasks within the app, enhancing maintainability and readability.

Highlights

Building a real-time object detection app using ReactJS and TensorFlow.js.

Utilizing a pre-built COCO SSD model from TensorFlow.js for object detection.

Accessing a computer vision template on GitHub for a head start on development.

Setting up and coding with COCO SSD to leverage TensorFlow.js's capabilities.

Making real-time detections using the app and webcam to identify objects in the frame.

The React app setup includes capturing images from the webcam to feed into the TensorFlow.js model.

Rendering detections to the screen to visualize different objects captured by the webcam.

Cloning the React computer vision template for a faster development process.

Using the 'create-react-app' tool to streamline the app creation process.

Importing the pre-trained COCO SSD model to facilitate real-time object detection.

Installing dependencies via npm to set up the necessary packages for the app.

Updating the app.js file to integrate the TensorFlow.js model for detections.

Creating a 'drawRect' function to visually represent detections on the canvas.

Adjusting the drawing function to add a 'party mode' effect to detections.

Starting the React app to see real-time object detections in action.

Using console logs to monitor the output and performance of the model's detections.

The ability to detect various objects such as people, furniture, and other items.

Engaging with the audience by encouraging feedback on the app and its features.

Inviting viewers to subscribe and enable notifications for new video releases.

Transcripts

play00:00

i spy something beginning with

play00:03

um pete

play00:06

[Music]

play00:09

what's happening guys in today's video

play00:11

we're going to be building our very own

play00:13

real-time object detection app we're

play00:15

going to be using react.js and

play00:17

tensorflow.js to do this

play00:18

and in order to speed us along the way

play00:20

we're going to be taking a look at the

play00:23

real-time object detection template that

play00:25

i've set up that you're going to be able

play00:27

to access

play00:28

let's take a deep look as to what we're

play00:29

going to be going through so in today's

play00:30

video we're going to be covering

play00:32

three key things so first up we're going

play00:34

to be accessing our computer vision

play00:36

template so this is a template that i've

play00:37

set up for you on

play00:39

github that's going to allow you to

play00:40

kickstart your journey in terms of

play00:42

building

play00:43

your real-time object detection app

play00:45

we're also going to be setting up and

play00:46

coding with

play00:47

coco ssd so for this we're going to

play00:49

leverage the

play00:50

pre-built tensorflow js model so this is

play00:53

going to allow us to make

play00:54

detections and then last but not least

play00:56

we're going to make real-time detections

play00:58

using our app and our webcam so we'll

play01:00

actually be able to detect

play01:01

different objects within our frame let's

play01:03

take a look as to how this is all going

play01:05

to work

play01:06

so first up we're going to set up our

play01:07

react js app so this is

play01:09

all included inside of the template so

play01:11

it's pretty easy to set up

play01:12

then what we're going to do is capture

play01:14

images from our webcams our webcam is

play01:16

going to stream whatever's in that

play01:17

particular frame

play01:18

to our tensorflow.js model and then

play01:20

we're going to render those detections

play01:22

to the screen so you'll be able to see

play01:24

all the different objects that have been

play01:26

captured ready to do it

play01:27

let's get to it alrighty so in order to

play01:30

build our real-time object detection

play01:32

app using react and tensorflow.js we're

play01:35

going to be leveraging

play01:36

a couple of things that are going to

play01:38

help us along the way so first and

play01:40

foremost i've gone and set up this

play01:41

react computer vision template that's

play01:43

going to allow us to get up to speed

play01:45

a whole heap faster so we're going to be

play01:47

able to clone this down

play01:49

and build our app from there we've also

play01:52

got another repo which is if you don't

play01:54

want to actually go on ahead and code

play01:56

and you just want to download the code

play01:58

and run it from the get go you can

play02:00

actually download this link so this is

play02:01

called

play02:02

the custom object detection react js

play02:04

tensorflow repo

play02:06

bit of a mouthful i know but everything

play02:08

that you need to actually run this

play02:09

without writing a single line of code is

play02:11

there

play02:12

in terms of how we've actually gone and

play02:13

built it we're using react

play02:15

and specifically we've gone and used the

play02:18

create react

play02:19

app library to go and do that and then

play02:21

we're also going to be leveraging

play02:23

tensorflow.js and if we go and take a

play02:25

look at the tfjs models

play02:27

we're specifically going to be using the

play02:29

object detection model

play02:31

so this model actually uses coco ssd to

play02:34

allow us to go and perform

play02:36

real-time object detection enough

play02:38

webbing let's actually get started so

play02:40

what we're actually going to do is we're

play02:41

going to clone this repo so the react

play02:43

computer vision template so we can copy

play02:45

that

play02:46

and we're going to open up a new command

play02:48

prompt

play02:50

and in order to clone this repo we're

play02:52

just going to go into our d

play02:53

drive or a drive that you want to clone

play02:54

this into and we're just going to type

play02:56

in git

play02:57

clone and then the name of our

play02:59

repository so if we just minimize that

play03:01

for a second

play03:02

so basically what we've written is git

play03:04

clone and then the link to our react

play03:06

computer vision template

play03:07

now all the links that i just mentioned

play03:09

i'm going to make available in the

play03:10

description below so if you want to pick

play03:12

those up by all means

play03:13

just grab them you'll be able to get

play03:14

started super quick

play03:16

so let's go on ahead and clone this

play03:19

awesome so that's now cloned

play03:21

now if we open up our d drive you should

play03:23

have a

play03:24

cloned repository so let's go ahead and

play03:26

open that up

play03:27

so you can see in fact we've now got a

play03:29

folder called react

play03:30

computer vision template and this

play03:32

contains all the code we need to get

play03:34

started

play03:34

now in order to go and build up from

play03:36

this we're going to open it up inside of

play03:38

an

play03:38

integrated development environment or a

play03:40

coding environment so in this case we're

play03:42

going to be using vs code

play03:44

so what we'll do is we'll first up go

play03:46

into that directory and then we can open

play03:47

it up using the code dot command this is

play03:49

only if you're using vs code

play03:51

so let's cd into it and then we can type

play03:54

in code and then dot and this will open

play03:56

it up inside of vs code

play03:58

and i'll just bring it onto the right

play03:59

screen and there you go

play04:02

so inside of this folder we've got a

play04:04

bunch of stuff

play04:05

so namely we've got a css file so

play04:07

app.css

play04:08

we've got an app.js file and this is

play04:10

where we're going to be doing

play04:12

the majority of our work we've also got

play04:14

an index.css

play04:15

file and an index.js file

play04:19

now as i mentioned the majority of our

play04:21

work's going to be inside of our app.js

play04:23

file

play04:24

but before we actually go on and start

play04:26

making some updates to our code what

play04:28

we're going to do is just make sure we

play04:29

install our dependencies

play04:31

so if you select package.json let's make

play04:34

this a little bit bigger

play04:36

you're going to be able to see all the

play04:38

dependencies that we've got within

play04:40

our application now in this case the

play04:42

library that

play04:43

is most important is or the package that

play04:45

is most important

play04:46

is the one named tensorflow dash models

play04:49

forward slash coco

play04:50

dash ssd so this is the pre-trained coco

play04:54

ssd model that we're going to be able to

play04:56

leverage from tensorflow.js

play04:58

now i've also got tensorflow.js we've

play05:00

also got react

play05:01

we can delete this fingerpose one out

play05:02

because that's from a previous code set

play05:05

and in order to go and install all of

play05:07

these all we need to do is open up a new

play05:09

terminal so i can just open one up by

play05:11

hitting control and the squiggly bracket

play05:13

i never know what it's called

play05:14

on a mac it's going to be commands

play05:16

wiggly bracket and then

play05:18

to go and install this stuff we're just

play05:19

going to type in npm install

play05:22

and let that run so this should take a

play05:24

couple of minutes to run but once it's

play05:26

done

play05:26

you're going to see a node underscore

play05:28

modules folder pop up

play05:30

so we'll be right back in a second

play05:32

alrighty so you can see that all of our

play05:34

packages have installed and we're back

play05:35

at our regular command line

play05:37

now what we're going to go and do is

play05:39

start making our updates to our app.js

play05:42

file

play05:42

so if you actually take a look at this

play05:44

so let's just make this a little bit

play05:46

smaller

play05:47

you can see that we've got a bunch of to

play05:49

do's now i've specifically called this

play05:51

out

play05:52

because it's basically going to walk you

play05:54

through the steps that you need to

play05:55

update in order to use this but likewise

play05:57

if you wanted to use this for other use

play05:58

cases you could do that as well

play06:01

here what we're going to do is we're

play06:02

going to go through steps one two

play06:05

three four and five and then by the time

play06:08

we've got through each one of those

play06:09

steps we should

play06:10

effectively have a real-time object

play06:13

detection

play06:13

react app so the first thing that we need

play06:15

to do is import the required

play06:18

model here now the model that we're

play06:20

actually going to need is actually

play06:21

coming out of our coco ssd package

play06:24

so let's go on ahead and import that

play06:26

model first up

play06:32

alrighty so that's our first dependency

play06:35

imported so the line that we've just

play06:36

written there is import

play06:37

star as coco ssd from at tensorflow dash

play06:41

models

play06:42

forward slash coco ssd so this basically

play06:45

is allowing us to download our

play06:46

pre-trained tensorflow.js model

play06:49

the next thing that we need to do is

play06:50

actually import our drawing utility

play06:52

but we're going to hold off on that

play06:53

until the end because the last thing

play06:55

that we want to do is draw

play06:56

now the next thing that we would need to

play06:58

do is actually go on ahead and load our

play07:00

network into our model so let's go ahead

play07:02

and do that okay so that's our model

play07:06

loaded

play07:06

so in order to do that we've created a

play07:08

new variable called net

play07:10

and then we've made this because our

play07:11

function is asynchronous we're just

play07:13

waiting for that to load

play07:14

so here we're actually loading our coco

play07:16

ssd model and this is what we imported

play07:19

right up here and then we're using the

play07:20

load method to actually go on ahead and

play07:23

bring it in

play07:24

now if we go to step 4 the next thing

play07:26

that we want to do is actually start

play07:27

making some detection so really quickly

play07:30

we're already up to making some

play07:31

detections so let's go ahead and start

play07:33

doing that

play07:34

so once we get these detections we're

play07:36

going to be able to start our

play07:37

application and actually log

play07:39

these out so we'll actually be able to

play07:41

see how our model is actually performing

play07:47

alrighty so that's the line of code that

play07:50

we need to actually make our detections

play07:52

so basically what we're doing here is

play07:53

we're creating a new variable called obj

play07:56

or

play07:56

object and we're using our network that

play07:58

we defined

play07:59

up here so we're actually passing it

play08:01

through to this detect function

play08:03

so we're using that network and we're

play08:04

passing through our video so this is the

play08:06

video from our webcam

play08:08

and we're passing that to our detect

play08:09

function so ideally we should be able to

play08:10

detect a bunch of objects

play08:12

then what we're doing is we're console

play08:14

logging that out so we should be able to

play08:16

see

play08:16

the output of each one of these

play08:18

detections in our console

play08:21

now what we need to do is actually start

play08:23

up our app so this is going to allow us

play08:24

to see our objects and whether or not

play08:27

our model is actually performing well so

play08:28

to start our app we just need to type in

play08:30

npm

play08:31

start and this is going to start our

play08:34

react app and

play08:35

open up a new browser so you can see

play08:37

it's opened up a new browser and it's

play08:39

gone directly to localhost 3000 so by

play08:42

default

play08:42

this is where our react app is going to

play08:45

start

play08:46

so let's give that a couple of seconds

play08:48

while it compiles and then ideally we

play08:49

should be able to see a couple of

play08:51

detections in our console

play08:53

all right so you can see that we've got

play08:54

our camera showing up on the screen

play08:56

now if we go and inspect our console

play09:01

and if we open that up you can see we're

play09:03

getting a bunch of detections here and

play09:05

if we open this up

play09:06

you can see that in fact we're making

play09:08

detection so we've got a couple of

play09:10

things showing here so we've got an

play09:11

array and inside of

play09:12

our array we've got a b box this

play09:15

represents

play09:16

our bounding box and each one of these

play09:18

represents a specific thing

play09:19

so this is our x coordinate this is our

play09:21

y coordinate and this is our box width

play09:23

and our box height

play09:24

we can also see our different classes so

play09:26

in this case you can see the class as

play09:28

person because it's detecting me

play09:30

and we can also see our score but at the

play09:32

moment we're not actually drawing

play09:34

anything to the screen

play09:36

so let's stop our app and actually

play09:38

finish our drawing function so we'll

play09:39

actually be able to see our results

play09:42

okay so the last two things that we

play09:43

needed to do were update our drawing

play09:45

utility

play09:46

and bring it in right up here

play09:49

so let's go on ahead and do that now to

play09:51

start building our drawing utility we're

play09:53

just going to right click on source

play09:55

create a new file and call that

play09:56

utilities dot js

play10:00

and in here we're going to define a

play10:02

function that's going to allow us to

play10:03

draw to the screen

play10:04

so this function is going to be called

play10:06

draw rect short for draw rectangle

play10:09

and it's going to allow us to pass our

play10:10

predictions to this function

play10:12

and draw them to our actual webcam

play10:14

screen or to our webcam canvas

play10:16

so let's go ahead and start doing that

play10:19

okay so we're going to

play10:20

start setting up our function and

play10:22

remember our function is going to be

play10:23

called

play10:23

draw rect

play10:29

alrighty so to our drawRect function

play10:31

we're going to be passing out detections

play10:33

and if you actually take a look this is

play10:35

actually going to be our object variable

play10:37

here

play10:37

and we're also going to pass through our

play10:39

canvas so this is going to be our canvas

play10:41

that we've already had predefined

play10:43

what we're then going to do is loop

play10:45

through each one of our detections using

play10:46

the forEach function

play10:48

so let's do that

play11:02

so the first thing that we're actually

play11:04

doing within our draw rect function

play11:06

is we're going and grabbing our x

play11:07

variable our y variable our width and

play11:09

our height so remember when we're

play11:10

console logging out our predictions we

play11:12

were able to see our class and our text

play11:14

these are exactly the same variables

play11:16

here we're just extracting them out now

play11:18

we're also extracting our class so in

play11:20

this case it was going to be person from

play11:22

our last prediction

play11:23

now the next thing that we want to do is

play11:24

set up some styling and actually go on

play11:26

ahead and draw our rectangles

play11:28

so let's power through that

play11:58

okay so we've finished up our code now

play12:01

what we've gone and done

play12:02

is we've first up set our styling so

play12:03

there we've created a new variable

play12:05

called color and this is going to hold

play12:07

the color of our box as well as the

play12:09

color of our text

play12:10

and we've also gone and set this should

play12:12

actually be stroke style so let's go and

play12:15

change that

play12:16

and we've set that to color we've also

play12:17

set our fill style to color

play12:19

and we've also set our font so in this

play12:21

case it's going to be 18 pixels

play12:23

and Arial then what we've gone and done

play12:26

is we've gone and drawn

play12:27

our rectangles and text so we've used

play12:29

our canvas and we've

play12:30

commenced our path we've then used the

play12:32

fill text method

play12:33

and to that we've passed the text that

play12:34

we've extracted from our prediction

play12:36

as well as the x and y coordinates in

play12:38

terms of their placement

play12:40

and then we're drawing our rectangle as

play12:41

well so to that we're passing our x

play12:43

variable our y variable our width and

play12:45

our height

play12:45

so this should ideally draw a rectangle

play12:47

around each one of our predictions

play12:49

and then last but not least we're

play12:51

drawing our stroke so this is going to

play12:53

actually apply it to the screen so we

play12:54

can save this now

play12:55

and then all we need to do is bring this

play12:57

into our app.js file and

play12:59

run it for each prediction so let's go

play13:01

ahead and do that

play13:03

so right up here on step two we can

play13:06

bring in our drawrect function

play13:10

so that's our drawrect function imported

play13:13

now what we need to do is

play13:14

just run it down here so in step five we

play13:18

just need to use our drawrect function

play13:20

and pass through our object so these

play13:21

were all about individual predictions

play13:23

and pass through our canvas

play13:25

so if we save that now ideally what

play13:27

should happen is if we go back

play13:28

to our browser we should be able to see

play13:31

our

play13:32

detection so you can see it's right up

play13:33

in the top screen but you can see that

play13:35

we're actually drawing our box

play13:36

now if we adjust our camera a little bit

play13:40

and take this down you can see that

play13:43

we're making a whole bunch of detection

play13:45

so we're detecting myself as a person

play13:47

the couch we're also detecting the chair

play13:50

pretty cool right now if we got some

play13:51

other stuff i don't actually have my

play13:53

phone on me but maybe if we tried out

play13:54

our bottle

play13:56

it's saying cup bottle there you go so

play13:58

you can see that we're doing we're

play14:00

creating real time detections

play14:02

using our webcam and using our pre-built

play14:05

tensorflow.js model now what we could do

play14:07

as well if we wanted to make this a

play14:09

little bit fancier or go

play14:10

and make detections in party mode we

play14:12

could do that as well

play14:14

so let's go ahead and do that

play14:17

so now what we need to do is we

play14:19

basically just need to update

play14:21

our drawing function so what we can do

play14:23

here is just go back into utilities.js

play14:27

and then rather than just automatically

play14:29

setting our color to green we're going

play14:30

to do a little bit of magic to make this

play14:32

a little bit more

play14:34

party mode so let's go ahead and define

play14:37

this so rather than our color just being

play14:38

green

play14:40

we're going to change it to pound sign

play14:44

and then plus

play14:53

and if we save it now and go back to our

play14:56

app

play14:59

you can see that our detections are now

play15:01

appearing in party modes our person

play15:03

is flashing and if we go and use our

play15:06

bottle

play15:07

you can see we're getting detections in

play15:09

party mode

play15:11

and that about wraps it up so we've gone

play15:13

through quite a fair bit now if we

play15:15

go back and take a look at our app we

play15:17

went through our importing of our model

play15:20

we

play15:20

created our drawrect function and

play15:22

imported it there we loaded up our

play15:24

neural network

play15:25

then we made our detections and we also

play15:27

drew it

play15:28

to our canvas and that about wraps it up

play15:32

thanks so much for tuning in guys

play15:33

hopefully you found this video useful if

play15:35

you did

play15:35

be sure to give it a thumbs up hit

play15:37

subscribe and tick that bell so you get

play15:38

notified of when i'm releasing new

play15:40

videos

play15:41

let me know what you think of my mo and

play15:43

also let me know how you went about

play15:45

building your real-time object detection

play15:47

app

play15:47

thanks again for tuning in peace
