Face recognition in real-time | with Opencv and Python

Pysource
17 Aug 2021 · 28:19

Summary

TLDRIn this tutorial, Sergio demonstrates how to build facial recognition projects using Python libraries. He explains the difference between face detection and facial recognition, guides viewers through installing necessary libraries like OpenCV and face_recognition, and shows how to encode and compare images of faces. The tutorial progresses to real-time face recognition using a webcam, identifying individuals in live video feeds. Sergio also shares tips for improving accuracy and speed, and encourages viewers to engage with the content.

Takeaways

  • 😀 The video is a tutorial by Sergio on building visual recognition projects, specifically focusing on facial recognition.
  • 🔍 Sergio explains the difference between face detection (surrounding a face with a box) and facial recognition (identifying the person by name).
  • 📚 Viewers are instructed to download certain files from Sergio's website, including 'main.py' and 'simple_facerec.py', to follow along with the tutorial.
  • 🛠️ Two libraries are required for the project: 'opencv-python' and 'face_recognition', which are installed via the terminal using pip commands.
  • 📷 The process involves loading and displaying images, converting them from BGR to RGB format, and encoding them for comparison using the 'face_recognition' library.
  • 🔗 Sergio demonstrates how to compare two images to determine if they are the same person, using the 'compare_faces' function from the 'face_recognition' library.
  • 🖼️ The tutorial includes a practical example of encoding and comparing images of well-known figures like Elon Musk and Messi.
  • 💻 Sergio guides viewers through writing code for real-time facial recognition using a webcam, emphasizing the importance of the 'simple_facerec.py' module.
  • 📹 The real-time demonstration includes capturing video frames, detecting faces, and drawing rectangles around detected faces with the 'cv2.rectangle' function.
  • 📝 Sergio explains that image file names are used to associate names with detected faces, and he displays those names using the 'cv2.putText' function.
  • 🔄 The video concludes with suggestions for improving the project, such as enhancing accuracy, speed, and detection capabilities, and encourages viewers to subscribe for more content.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is teaching viewers how to build facial recognition projects using Python libraries such as OpenCV and face_recognition.

  • Who is the presenter of the video?

    -The presenter of the video is Sergio, who helps companies, students, and freelancers to build visual recognition projects.

  • What is the difference between face detection and facial recognition, as explained in the video?

    -Face detection simply surrounds any face with a box, whereas facial recognition also identifies the person in that box by name.

  • What are the two libraries that need to be installed for the project?

    -The two libraries that need to be installed are 'opencv-python' and 'face_recognition'.

  • What is the purpose of converting the image format from BGR to RGB?

    -The image format is converted from BGR to RGB because OpenCV uses BGR format by default, whereas the face_recognition library requires RGB format.

  • What is the role of the 'face_recognition' library in the project?

    -The 'face_recognition' library simplifies the steps of facial recognition by encoding images and comparing them to identify known faces in real-time.

  • How does the video demonstrate the facial recognition process?

    -The video demonstrates the facial recognition process by encoding images of known individuals, comparing them to other images or live video feeds, and identifying matches by name.

  • What is the importance of downloading the files from the presenter's website?

    -Downloading the files from the presenter's website is important because they contain the necessary images and Python files, such as 'main.py' and 'simple_facerec.py', required for the project.

  • How can the facial recognition accuracy be improved?

    -Recognition can be improved by adding multiple, varied pictures of each person to the encoding dataset and by ensuring good lighting during detection; installing the library with GPU support speeds up processing.

  • What is the significance of the 'simple_facerec.py' file in the project?

    -The 'simple_facerec.py' file is not a library but a custom Python module that must be downloaded and placed in the same folder as the main script. It handles encoding the known faces and detecting them in real-time video frames, returning their locations and names.

  • How can viewers test the facial recognition project with their own images?

    -Viewers can test the facial recognition project by placing their own images in the specified folder, ensuring the images are named with the person's name for correct identification during the project execution.

Outlines

00:00

😀 Introduction to Facial Recognition Tutorial

In this introductory paragraph, Sergio welcomes viewers to a new video focused on teaching students, freelancers, and company employees how to build facial recognition projects efficiently. He differentiates between face detection, which merely outlines faces in a box, and facial recognition, which not only does that but also identifies individuals by name. Sergio emphasizes the necessity of downloading certain files from his website and installing two essential libraries, 'opencv-python' and 'face_recognition', to proceed with the project. He mentions that the tutorial will cover both image and video recognition, including real-time applications.

05:01

🔍 Setting Up the Facial Recognition Environment

This paragraph details the initial setup for facial recognition, including the installation of necessary libraries via the command prompt or terminal. Sergio demonstrates how to load an image using the 'cv2.imread' function and how to display it with 'cv2.imshow'. He also explains the importance of converting the image color format from BGR to RGB using 'cv2.cvtColor'. The paragraph concludes with the encoding of the image using the 'face_recognition.face_encodings' function, which is a crucial step for the algorithm to later compare and identify faces.
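
A minimal sketch of this first step, assuming an image file such as messi1.webp sits in the working directory (the file name is illustrative):

```python
import cv2
import face_recognition

# Load the image with OpenCV; OpenCV stores it in BGR channel order by default.
img = cv2.imread("messi1.webp")

# face_recognition expects RGB, so swap the channel order before encoding.
rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Encode the first face found in the image (index 0) as a 128-dimensional vector.
img_encoding = face_recognition.face_encodings(rgb_img)[0]

cv2.imshow("Img", img)
cv2.waitKey(0)  # keep the window open until a key is pressed
```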

10:03

📚 Comparing Encoded Images for Facial Recognition

Sergio explains the process of comparing two encoded images to determine whether they show the same person. He uses the 'face_recognition.compare_faces' function to perform this comparison, which returns a boolean value. The paragraph includes a debugging moment where Sergio corrects a typo that led to an inaccurate comparison result. He stresses the high accuracy of the facial recognition algorithm and notes that unexpected results are more likely caused by a typo or by loading the wrong image than by the algorithm itself.
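
The comparison itself is a single call; the sketch below assumes the two images have already been loaded and encoded as in the previous step (file names are illustrative):

```python
import cv2
import face_recognition

def encode_first_face(path):
    """Load an image, convert it to RGB, and return the encoding of the first face found."""
    rgb = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    return face_recognition.face_encodings(rgb)[0]

img_encoding = encode_first_face("messi1.webp")           # illustrative file names
img_encoding2 = encode_first_face("images/elonmusk.jpg")

# compare_faces takes a list of known encodings plus one candidate encoding and
# returns one boolean per known encoding.
result = face_recognition.compare_faces([img_encoding], img_encoding2)
print("Result:", result)  # e.g. [False] for two different people
```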

15:04

📹 Implementing Real-Time Facial Recognition with Webcam

This paragraph introduces the transition from static image comparison to real-time facial recognition using a webcam. Sergio outlines the steps to capture video streams from the webcam, including initializing the camera with 'cv2.VideoCapture' and capturing frames in a loop. He also discusses the use of 'cv2.waitKey' to update the video feed in real-time and the importance of releasing the camera after use to prevent resource leaks.
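
A bare-bones version of that capture loop, before any recognition is added (camera index 0 assumes a single webcam; key code 27 is the Esc key):

```python
import cv2

cap = cv2.VideoCapture(0)  # 0 = first webcam; use 1, 2, ... for additional cameras

while True:
    ret, frame = cap.read()
    if not ret:            # no frame returned, e.g. the camera was disconnected
        break

    cv2.imshow("Frame", frame)

    # waitKey(1) pauses 1 ms and moves on to the next frame instead of freezing;
    # pressing Esc (key code 27) breaks the loop.
    key = cv2.waitKey(1)
    if key == 27:
        break

cap.release()              # free the camera so other applications can use it
cv2.destroyAllWindows()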

20:04

👥 Loading and Encoding Known Faces for Identification

Sergio demonstrates how to load and encode known faces for identification in real-time video streams. He uses a function that imports and encodes every face image found in a folder, which simplifies adding multiple known individuals to the system. The paragraph explains that each image's file name is used as the label for that person, allowing the system to recognize and name faces in real time.
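
A sketch of that loading step; the class and method names (SimpleFacerec, load_encoding_images, detect_known_faces) are assumed from how the downloaded simple_facerec.py module is used in the video, so check the file you download for the exact interface:

```python
import cv2
# simple_facerec.py is a local helper file, not a pip package; it must sit in the
# same folder as main.py. Class and method names below are assumed from the video.
from simple_facerec import SimpleFacerec

sfr = SimpleFacerec()
sfr.load_encoding_images("images/")   # encodes every image found in the folder;
                                      # each file name becomes that person's label

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # Returns one location tuple and one name ("Unknown" if no match) per face.
    face_locations, face_names = sfr.detect_known_faces(frame)
    print(face_names)
cap.release()
```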

25:06

🎨 Drawing Rectangles and Labeling Faces in Real-Time

The final paragraph focuses on the visualization aspect of the real-time facial recognition system. Sergio explains how to draw rectangles around detected faces using the coordinates provided by the 'face_recognition' library and how to display the names of the individuals above these rectangles. He discusses adjusting the thickness and color of the rectangles for better visibility and correcting any coordinate errors to ensure that the names are accurately positioned relative to the detected faces.
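
The drawing step on its own, with one hard-coded detection standing in for the values the webcam loop would supply; the location tuple is assumed to follow the (top, right, bottom, left) order discussed in the video:

```python
import cv2
import numpy as np

# Illustrative stand-ins for one frame and one detection result; in the project
# these come from the webcam loop and the face-detection call.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
face_locations = [(120, 420, 320, 220)]   # (top, right, bottom, left)
face_names = ["Elon Musk"]

for face_loc, name in zip(face_locations, face_names):
    y1, x2, y2, x1 = face_loc  # top, right, bottom, left

    # Dark red box, 4 px thick, around the detected face.
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 200), 4)

    # Name drawn 10 px above the box so the text does not overlap the border.
    cv2.putText(frame, name, (x1, y1 - 10), cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0, 200), 2)

cv2.imshow("Frame", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```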

🚀 Conclusion and Future Improvements

In the concluding paragraph, Sergio wraps up the tutorial by highlighting the simplicity of the facial recognition project and suggesting potential improvements, such as enhancing accuracy, speeding up the process with a GPU, and expanding the dataset for more reliable recognition. He invites viewers to share their thoughts, experiences, and questions in the comments and encourages them to subscribe for more content on facial recognition and computer vision.

Keywords

💡Facial Recognition

Facial recognition is a technology that identifies or verifies the identity of a person using their facial features. In the video, it is the main theme, demonstrating how to build a project that not only detects faces within an image or video but also identifies the person by comparing the facial features with a database of known individuals. The script mentions 'facial recognition' as the process of surrounding each face with a box and naming the person, which is a key part of the tutorial.

💡Face Detection

Face detection is the initial step in facial recognition, where a system finds the presence of a face and outlines it with a box. It is the first stage of the recognition pipeline and is essential for the subsequent step of identifying the person. The script differentiates face detection from facial recognition by stating that the former only locates faces, while the latter also identifies them by name.

💡Image Database

An image database in the context of the video refers to a collection of images used to train or compare facial features for recognition. The script emphasizes the need for a database of images from specific people to perform accurate facial recognition, indicating that the system's accuracy is reliant on the quality and variety of images it has access to.

💡OpenCV

OpenCV, mentioned in the script, is a popular open-source computer vision and machine learning software library. It is used in the video for tasks such as loading images, capturing video from a webcam, and displaying frames. The script instructs viewers to install OpenCV using pip, indicating its importance in the facial recognition project.

💡Libraries

In the script, libraries refer to collections of code that can be imported into a program to provide additional functionality. The video mentions installing two libraries: 'opencv-python' and 'face_recognition', which are essential for the facial recognition project. These libraries simplify complex tasks and provide the necessary tools for processing and recognizing faces in images and videos.

💡Encoding

Encoding in the context of facial recognition is the process of converting an image of a face into a set of unique data points or a 'faceprint' that can be used for comparison. The script describes encoding as a necessary step before comparing faces, where the algorithm analyzes the facial features of known individuals to create a template for identification.

💡Real-time Processing

Real-time processing, as discussed in the script, is the ability of a system to process input data immediately as it is received, without any delay. The video demonstrates how to implement facial recognition in real-time using a webcam, which is crucial for applications like security systems or live event monitoring.

💡Face Comparison

Face comparison is the process of analyzing and comparing facial features between two images to determine if they are of the same person. The script describes using the 'face_recognition' library to compare encoded images and return a boolean value indicating whether the faces match or not, which is central to the identification aspect of facial recognition.

💡RGB Format

RGB stands for Red, Green, and Blue, the primary colors used in digital imaging. The script mentions converting the image from BGR, which is OpenCV's default channel order, to RGB. This matters because the face_recognition library expects RGB input, so the channels must be swapped for the face encodings to be computed correctly.

💡Webcam

A webcam is a digital camera that captures images or videos and streams them over a network, such as the internet. In the script, the webcam is used as the source for real-time video input, allowing the facial recognition system to process live footage and identify or detect faces in real-time.

💡Accuracy

Accuracy in the context of facial recognition refers to the system's ability to correctly identify or verify individuals. The script mentions that the facial recognition algorithm used in the project has a high accuracy rate, which is critical for reliable identification and is a key selling point for such technologies.

Highlights

Introduction to building visual recognition projects for facial recognition.

Explanation of the difference between face detection and facial recognition.

Importance of downloading specific files from the instructor's website for the project.

Instructions on installing necessary libraries: opencv-python and face_recognition.

Overview of the face_recognition library developed by Adam Geitgey.

Demonstration of loading and displaying an image using OpenCV.

Process of encoding an image for facial recognition using RGB format.

Explanation of the face encoding process using the face_recognition library.

Comparison of facial encodings to determine if two images are of the same person.

Testing the facial recognition algorithm with images of different people.

Reported accuracy of around 99% on some datasets for the facial recognition algorithm.

Transition to real-time facial recognition using a webcam stream.

Importance of using the simple_facerec.py module for real-time face detection.

Instructions on setting up a webcam for real-time face detection.

Process of loading known face encodings from a folder for comparison.

Real-time detection of known faces and drawing rectangles around detected faces.

Displaying names above the detected faces in real-time using OpenCV.

Potential improvements to the project, such as accuracy and speed, and the use of GPUs.

Invitation for feedback and questions from the audience for future tutorial content.

Transcripts

play00:01

[Music]

play00:11

oh hi welcome to this new video my name

play00:14

is sergio and i help company students

play00:16

and freelancers to ease and efficiently

play00:19

build visual recognition projects

play00:21

as you can imagine and you saw that from

play00:24

the preview we're going to see right now

play00:26

how to make facial recognition

play00:29

so let me explain a quick difference one

play00:32

is phase detection where you just

play00:33

surround any face with a box and then

play00:36

there is second what we're going to see

play00:37

today is facial recognition so it means

play00:40

surrounding each phase with a box but

play00:42

also telling exactly who that person is

play00:45

so putting a name on that face of course

play00:48

we start from some database of images

play00:50

that we have from specific people we

play00:52

will see that

play00:54

on an image and also on a video and in

play00:56

real time so let's start before we start

play00:58

you need to know that you have to

play01:00

download some files from my website i'm

play01:02

i'm going to put the link down below and

play01:05

you need to get these files so we have

play01:08

images of course you can get your own

play01:09

images i have some simple images main.pi

play01:13

and then simplefacerect.pi

play01:16

this is really important don't miss this

play01:18

step because you're going to get an

play01:20

error later

play01:22

second we need to install two libraries

play01:25

how to install the libraries we go

play01:28

on the terminal

play01:29

command prompt

play01:31

cmd

play01:32

and we see command prompt whether you

play01:34

have the mac or linux is the same you

play01:36

open the terminal and you need to type

play01:39

pip install

play01:41

opencv python is the first library

play01:44

and anyway all the things that i'm

play01:46

saying right now so all the comments

play01:48

will be also

play01:50

that all my blog posts if you just want

play01:52

to copy and paste them pip install

play01:54

opencv dash python is the first one you

play01:57

press enter i'm not going to do that

play01:58

because i have already the library then

play02:00

pip install face underscore recognition

play02:06

and then you press enter and we're going

play02:08

to use this library which is called

play02:10

phase recognition which is a great

play02:11

library that simplifies all the steps uh

play02:15

this is the github page i will not go

play02:17

into details about this one i will leave

play02:19

also the link for this because i want to

play02:21

make things

play02:22

very simple right now

play02:24

and easy to follow so this is a great

play02:27

library developed by adam geitgey and

play02:31

we'll use this one

play02:34

let's start right now

play02:36

so i'm going to start with an empty file

play02:42

we're now going to import cv2 which is

play02:44

the opencv library

play02:46

and we're going to load an image let me

play02:48

explain what's the idea now we load a

play02:51

simple image for example i'm going to

play02:52

load messy

play02:54

one dot webp we load this image and

play02:57

later we are going to compare this image

play03:00

with other images

play03:02

so to check if we have messy somewhere

play03:04

on these pictures

play03:06

so if we have messy

play03:08

the algorithm should tell us yes it's

play03:11

messy also somewhere here so let's do

play03:13

that we load first of all emg equals

play03:16

to load the image we use cv2.imread

play03:21

and here we need to put the path of the

play03:22

image i have that on the same folder

play03:24

messi1.webp

play03:26

and now let's display the image cv2 dot

play03:29

in show

play03:31

emg

play03:33

and then emg

play03:35

i forgot something in show

play03:38

now

play03:39

uh this will display the image but we

play03:41

need a weight key vent to keep the image

play03:43

on hold so cv2.wait

play03:46

key

play03:47

zero so it is going to wait until we

play03:49

press a key

play03:52

i'm going to run this right away to make

play03:53

sure that everything is correct and that

play03:55

we don't get error it works this is

play03:58

messy

play03:59

now what is the first step for phase

play04:01

recognition we need to encode

play04:04

this image so the algorithm will encode

play04:07

this image so that later it can compare

play04:09

this the phase of message with other

play04:12

faces

play04:14

there is one step before so we need two

play04:17

lines one is we need to convert the

play04:18

image format from bgr to rgb opencv by

play04:23

default uses the bgr format

play04:26

so a blue and green and red and we need

play04:29

red green and blue so we need just to

play04:30

swap that so

play04:32

rgb emg equals cv2 dot cvt color so

play04:37

convert color what do we want to convert

play04:39

we want to convert tmg then cv2.color

play04:44

underscore bgr2rgb

play04:47

we're going to convert the bjr format to

play04:49

rgb

play04:50

and now we're going to encode the image

play04:52

so we'll say

play04:54

emg encoding

play04:57

equals

play04:59

to this we need the library phase

play05:00

recognition we're going to import

play05:04

face recognition

play05:07

and then we'll say face recognition dot

play05:11

phase encodings and we are going to put

play05:15

here

play05:16

rgb emg

play05:19

emg

play05:22

and then we are going to say

play05:24

zero so

play05:26

uh probably because this loads multiple

play05:28

image you can load multiple images we're

play05:29

going to use just zero here as an index

play05:33

we have the first image encoded so let's

play05:36

uh restart these i'm going to just

play05:38

restart to make sure that there is no

play05:40

type on it i don't get any error i

play05:43

i run this quite often to

play05:46

to avoid to have a lot of code and then

play05:49

it's a mess if you get some error

play05:52

and of course it's a bit slow when you

play05:54

have the encoding because it needs to

play05:56

analyze the image the bigger the image

play05:57

is lower the encoding is and this image

play06:00

is quite big

play06:02

it works let's go to the next step we're

play06:05

going to do exactly the same process for

play06:08

another image

play06:10

i'm going to copy and paste and we just

play06:13

are going to change

play06:16

we will be changing only the variables

play06:18

name so we have

play06:19

let's load the second image we're going

play06:20

to load another person let's load elon

play06:23

musk

play06:24

so i'm going to load

play06:26

em g2 see if it's not in read i have

play06:29

elon musk on another folder images

play06:34

and then elon musk

play06:37

by the way everything that i am typing

play06:38

here will be uh on the

play06:41

on the link in the description there

play06:42

will be a blog all the code you can

play06:44

download everything that i have

play06:47

and also the test images that i am using

play06:49

so images elo mask gpg so rgb and g2 mg

play06:54

encoding two phase recognition rgb emg

play06:57

too okay that was quite simple

play07:00

and now let's display also emg2

play07:04

so now i'm going fast just with this

play07:07

example but later we will use this in

play07:09

real time it will be the code it will be

play07:10

much easier i have another file which

play07:13

have this really simplified so don't

play07:16

worry if it's hard for you to follow

play07:20

there is not much really to understand

play07:22

it's just a few lines

play07:24

now

play07:27

now we need to make the comparison so

play07:30

what is the idea we have emg encoding

play07:32

one

play07:33

and we have emg encoding two

play07:35

we want to know

play07:36

if emg one and in emg two there is the

play07:40

same person

play07:41

so let's do that

play07:46

result

play07:48

equals

play07:52

phase recognition

play07:54

dot

play07:55

compare faces

play07:58

and right here we're going to put first

play08:03

emg encoding

play08:06

and then emg

play08:09

encoding

play08:10

2. and let's print the result

play08:19

let's run this one this might take a few

play08:22

seconds to run so what are we doing

play08:23

we're comparing this image elon musk

play08:26

with this image of messi

play08:29

and the code should tell us true or

play08:31

false if true if is the same person or

play08:33

false

play08:34

if there are different

play08:36

people and the result is true

play08:39

which is a bit awkward uh okay i just

play08:43

realized that there there is some

play08:45

uh some typo here so i'm loading i'm

play08:48

loading emg one again so emg two of

play08:51

course they're not the same person so i

play08:53

was surprised to be honest so let's run

play08:55

this again

play08:58

by the way this algorithm algorithm is

play09:00

really precise

play09:01

this worked great it has like 99

play09:04

accuracy on some data set so if you are

play09:06

getting an error is most likely

play09:08

that there is some type of mode then

play09:10

it's not working well somehow

play09:12

so we have a result now it's false so

play09:14

we're comparing this one

play09:16

with

play09:17

this one and of course they're not the

play09:20

same person

play09:21

and we see result false now let's test

play09:24

also with

play09:26

some other image

play09:27

so we're going to compare

play09:29

messi with

play09:31

this one jeff bezos

play09:36

um

play09:57

okay we're going to compare messi wii

play09:59

jeff bezos

play10:02

jeff bezos.jpg

play10:06

so of course now we should get again

play10:09

false

play10:17

it takes a few seconds it's of course a

play10:19

bit slow

play10:20

uh you might even uh there is some it's

play10:23

more complex in simulation but you might

play10:25

get this working with a graphic card

play10:27

media graphic card and it will be much

play10:29

faster again we have result false i'm

play10:32

going to make the last testing so we're

play10:33

going to compare messi with another

play10:35

picture of messi so now we have messi

play10:38

jeff bezos are they the same person

play10:40

exactly they are not

play10:42

so now let's load messy and mess but of

play10:44

course a different picture of him it

play10:46

wouldn't make any sense with the same

play10:48

picture

play10:50

messy

play10:53

yvp

play10:55

and now we should get true

play11:11

and we get results true so we are

play11:12

comparing this one

play11:14

with

play11:16

this one

play11:18

so despite

play11:21

i

play11:22

is a different environment

play11:24

the despite also the light and the color

play11:28

it works great we get result true

play11:31

i can say that this one honestly was

play11:33

quite simple to achieve

play11:35

now it will come the fun part of the

play11:37

project where we're going to have a

play11:39

simple code to run this in real time and

play11:42

i will be using my webcam so let's do

play11:44

that i'm going to

play11:49

i'm going to

play11:51

store this somewhere so that i will put

play11:53

later on the blog

play11:56

so you will get also some files with

play11:58

this one while i'm going to

play12:01

start this from scratch but in real time

play12:03

from a video

play12:04

so we import cv2 import phase

play12:06

recognition and now we import

play12:11

of from

play12:14

simple

play12:16

face rect we are going to import simple

play12:19

face wreck

play12:21

keep in mind this is not a library this

play12:23

is a python file that you need to

play12:25

download from a website and put together

play12:27

on the same folder with the main file

play12:29

otherwise you get an error so let's run

play12:32

this one

play12:34

how do we work with a webcam first of

play12:36

all we're going to get the stream from

play12:38

the webcam and later we will do all the

play12:40

rest so first of all let's in

play12:43

let's load the camera cap equal cv 2.

play12:46

video

play12:48

capture

play12:50

and then we put the index of the camera

play12:53

so load camera

play12:58

uh zero so this is say load the first

play13:00

webcam if you have multiple webcams you

play13:02

need to put one to load the second two

play13:04

through the third and so on and i have

play13:06

three cameras so i need to load the

play13:08

latest one so i'm going to use index two

play13:10

most likely we need to use index zero if

play13:13

you

play13:13

are not using the webcam you have just

play13:15

one

play13:16

while

play13:18

true now we're going to get the stream

play13:20

in real time from real time we're going

play13:21

to get frame after frame

play13:25

red frame equals cap dot read

play13:32

so red is true or false true with the

play13:34

frame false we don't have the frame and

play13:35

frame is the frame

play13:37

and now let's show the frame cv two dot

play13:39

im show

play13:41

frame and then frame

play13:43

and now again as before a weight key

play13:45

event but this time slightly different

play13:48

key equals cv2 dot weight key

play13:51

one before we have we had weight key 0

play13:54

which was freezing the frame

play13:56

until we will press some key now we say

play13:59

1 so weights 1 millisecond and go to the

play14:01

next frame so that we have the video in

play14:02

real time or like the camera real time

play14:06

but if we want to quit we can do that if

play14:08

the key is 27 which corresponds to the esc

play14:11

key on the keyboard this breaks the loop

play14:14

finally we need to release the camera

play14:16

cap dot release

play14:19

and see if it so does destroy all

play14:22

windows to close all the windows

play14:24

let's run this one so i'm going to run

play14:26

main dot pi

play14:28

i'll stop every run

play14:31

if everything is working correctly we

play14:33

should see the live stream of the camera

play14:37

so we have the camera loading

play14:40

uh now it's not doing anything just

play14:42

displaying the camera

play14:44

let's now

play14:46

use the library face simple phase rect

play14:50

not the library

play14:52

the python file is simple the module

play14:53

simple phase rect

play14:56

to compare the faces

play14:58

how do we do this

play15:00

first of all we need to load the

play15:01

encoding faces

play15:03

how do we load that

play15:05

so also let me explain i'm putting too

play15:07

many things and not being clear

play15:10

we load the encoding faces it means

play15:12

these are the known faces so later we

play15:15

will look wherever we have for these

play15:17

people elon musk jeff bezos messi

play15:21

reynolds and me

play15:24

so each time we see one of these

play15:27

we will get their name

play15:29

uh when we see some other phase we will

play15:32

get unknown face so that's

play15:35

that's the goal

play15:36

uh the idea

play15:42

so let's encode all these faces together

play15:44

so for this i created a simple function

play15:48

which imports all the faces that are on

play15:50

a folder

play15:52

and we need to do this encode

play15:57

uh faces

play15:59

from a folder

play16:01

first of all let's initialize the simple

play16:04

face rec module

play16:06

f of simple phase drag let's call

play16:10

sf

play16:12

uh sfr equals

play16:15

a simple face rack so i'm going to

play16:18

initialize this

play16:21

then let's load

play16:23

uh i will put this before loading the

play16:25

camera

play16:28

like this

play16:29

now let's load let's encode all the

play16:31

faces all the known faces so we will

play16:34

take the path where they are so i will

play16:37

put them just on images

play16:40

and we do this s f

play16:42

r dot

play16:44

load encoding images

play16:46

and right here we need to put the path

play16:49

of the folder so we have them just on

play16:52

images

play16:54

so this is the path where all the images

play16:56

are

play16:58

to make sure and to show how this works

play17:01

i'm going to run this

play17:03

if everything is working correctly you

play17:05

will see an output that five

play17:08

images were found found because i have

play17:10

five images

play17:11

of course you can put how many images

play17:13

you want in that specific folder

play17:16

so you can put your own images

play17:19

how many you want whatever you want just

play17:21

put them here and they will be

play17:23

automatically added to the encoding

play17:28

now let's go to the next step once we

play17:30

have five encoding images found

play17:35

just after we get

play17:37

the frame

play17:39

we are going to

play17:41

detect if any of the names

play17:44

are there

play17:49

detect

play17:50

faces

play17:52

so we use

play17:55

sfr

play17:57

dot

play17:58

detect known faces

play18:02

and then let's uh from the frame

play18:05

from the frame

play18:07

what are we going to get in exchange

play18:09

we're going to get the phase rectangle

play18:11

for each phase so we're going to get uh

play18:14

face

play18:15

locations

play18:18

face locations

play18:20

and then we're going to get

play18:23

the names face names

play18:27

uh by the way if we're wondering how are

play18:29

the names loaded if we're not putting

play18:30

the name anywhere

play18:32

uh

play18:33

they will be taken from the image title

play18:36

so for example this is the image of elon

play18:38

musk and it's called elon musk dot jpg

play18:42

jeffbezos.jpg so the name that you put

play18:44

on this image is the name that you will

play18:46

see displayed later on the screen

play18:53

phase locations so let's now first of

play18:55

all display the face locations

play18:59

for

play19:01

uh

play19:02

face

play19:03

lock

play19:05

and for

play19:06

name

play19:09

in

play19:11

a zip so we're going to extract these

play19:13

together phase locations

play19:15

and face names

play19:18

let's first of all print phase location

play19:21

face lock

play19:24

and let's now let's now run this one so

play19:27

see what we get

play19:31

so now the idea is that we're going to

play19:32

get phase location so of course the

play19:34

coordinates to

play19:36

to display correctly the face

play19:39

uh in real time

play19:41

on the frame

play19:44

and right here you see that i have a lot

play19:46

of outputs

play19:48

and even if i move my face you can see

play19:50

that these numbers somehow are changing

play19:52

all of them okay now all of them

play19:56

oh here we have the location of the face

play19:58

so what i am going to do right now is

play20:01

i'm going to

play20:02

take

play20:04

uh these values i'm going to extract

play20:06

these values and draw a rectangle

play20:08

surrounding the face so now i don't have

play20:10

anything we should draw the rectangle

play20:12

using these coordinates

play20:14

uh what are these coordinates there are

play20:17

four values

play20:19

the first two is the top left point and

play20:21

then we have the right bottom point so

play20:23

top left like this

play20:26

top left

play20:27

right bottom and we have the rectangle

play20:29

of the face

play20:31

so

play20:32

we have

play20:33

top

play20:34

left

play20:37

top left

play20:40

bottom

play20:42

right

play20:43

equals

play20:45

a face lock

play20:48

zero

play20:50

so i'm going to extract them one by one

play20:52

this way

play20:54

phase one

play20:58

face two

play21:00

and face three

play21:03

so i simply assign to the first value is

play21:06

top is the y the second value is the x

play21:09

left

play21:10

then we have bottom and so on

play21:12

now let's draw a rectangle for this one

play21:14

cv2.rectangle

play21:18

we're going to draw the rectangle where

play21:19

we're going to draw the rectangle on the

play21:21

frame so frame

play21:24

uh

play21:24

then we have x and y so we have left

play21:28

and top

play21:29

top left point

play21:31

then we have 0.2

play21:34

uh probably it's better just to call

play21:36

this with the coordinates will be less

play21:38

confusing so instead of top this will be

play21:40

y1

play21:41

instead of left would be x1

play21:44

instead of bottom would be y2

play21:48

instead of right will be x2

play21:50

so we have left which would be x1 top

play21:54

will be y1 so this is just basic

play21:56

geometry with the coordinates then we

play21:58

have x2

play22:00

y2

play22:02

and now the color let's make this

play22:05

somewhat red so zero blue

play22:08

the color goes from zero to 255 we give

play22:11

zero blue zero for green and let's give

play22:13

200 of red so that it's red but not too

play22:16

bright

play22:18

thickness let's make this two pixels

play22:20

thick

play22:22

if everything is correctly is correct

play22:25

we're going to see a rectangle

play22:27

surrounding the face

play22:30

uh

play22:31

we have the rectangle so red rectangular

play22:33

surrounding the face

play22:35

let's now take the name

play22:37

so the rectangle is working also i will

play22:39

make maybe a little bit thicker let's

play22:41

say four pixels

play22:44

and let's now take also so we have the

play22:47

name associated to each phase location

play22:51

let's display that

play22:53

cv2.put

play22:55

text

play22:57

let's put the text on the

play23:00

we're on for on the frame

play23:04

or the position of the text let's say

play23:07

we're going to put the text on the top

play23:09

of the rectangle

play23:11

so we will do

play23:13

x

play23:14

1

play23:16

y

play23:17

1

play23:20

minus minus 10 px so that it's not

play23:24

overlapping with the line of the

play23:25

rectangle so that's why i'm going to say

play23:27

minus 10 pixels

play23:30

uh i forgot the text so first we need

play23:32

the text frame then

play23:34

um

play23:36

name

play23:38

here we show the name

play23:42

so the text font face now the font cv2

play23:46

dot font

play23:48

underscore hershey it doesn't really

play23:50

matter with the font that we choose

play23:53

and the size of the text let's say one

play23:57

color of the text let's make this

play24:00

completely black

play24:02

zero

play24:03

zero zero

play24:05

and thickness of the text too

play24:07

and now let's run this one

play24:11

oh and here we have this in real time

play24:13

you see

play24:15

there is my name so it's already a good

play24:16

start only i believe i mixed some

play24:19

somewhat the coordinates

play24:21

so my name was supposed to be right here

play24:24

on the top

play24:25

left of the rectangle so i i messed with

play24:27

the coordinates so i'm going to fix that

play24:30

and also let's i'm going to use

play24:33

to make the the name

play24:34

better looking right here so let's

play24:36

quickly do that

play24:42

so i mix the coordinates so we have top

play24:44

right so this is x two

play24:47

and this is x

play24:50

one

play24:53

uh probably left top right and then

play24:57

bottom and left

play24:58

i don't know why they choose to use this

play25:00

coordinates format

play25:02

for the face uh face direct library this

play25:05

it's not common but anyway this is how

play25:08

you do it

play25:12

i was saying

play25:14

okay this is uh let's make the test text

play25:18

look better um

play25:21

okay i don't want to take much time with

play25:23

this video so

play25:24

i want that it's it will be more clear

play25:26

so i will say also for this one red

play25:28

around 200

play25:35

so ideally was thinking about putting a

play25:36

red box and white color of the text

play25:40

but i might do add that code later i

play25:42

want to make this as simple as possible

play25:44

to get the important information so how

play25:47

to make the face wrecked

play25:49

uh how to get the face then all the rest

play25:52

uh changing the

play25:54

the colors uh the the shapes uh

play25:58

it's all about simple operations with

play26:00

opencv which we won't make any sense to

play26:03

to do right now

play26:09

oh it's working right now so you see now

play26:11

there is my name on it

play26:13

but i have to prove you that

play26:16

it's working well so i have the phone

play26:19

and i'm going to show the pictures

play26:24

okay even

play26:27

let me uh increase the lightning so

play26:31

we will not have any problem okay you

play26:33

see ryan reynolds

play26:36

again

play26:37

sometimes if it doesn't recognize the

play26:39

person says unknown but it's

play26:42

it works quite well uh so far

play26:45

messy

play26:47

can we get messy

play26:51

messiah it's hard

play26:53

because there is the reflection of the

play26:55

lightning so elon musk

play26:57

we get that

play27:00

so the hardest right here is face

play27:02

detection because of the face is really

play27:04

small and the lightning but normally

play27:06

will work much better

play27:10

and again jeff bezos

play27:18

so this is all for this tutorial i hope

play27:21

you enjoyed this tutorial of course

play27:22

there there are improvements that you

play27:24

could make for example

play27:25

you could improve the accuracy you could

play27:28

improve the speed

play27:29

installing this with the gpu you could

play27:31

improve the uh detection in adding

play27:34

multiple data sets necessarily of adding

play27:36

just one single person

play27:39

for encoding you can add multiple person

play27:41

for example for messy multiple pictures

play27:43

of messi

play27:44

and so on

play27:46

let me know what you think about this

play27:48

project in the comment let me know if

play27:50

you try this

play27:51

and what projects are you working on and

play27:53

questions because i will take the

play27:55

questions also to

play27:57

make new videos and to improve the

play28:00

projects

play28:01

regarding face recognition

play28:03

i suggest that you

play28:05

subscribe to get notified because i will

play28:08

be releasing a lot of new content

play28:10

regarding

play28:12

phase recognition computer vision

play28:14

and so on so this is all for now see you

play28:17

in the next video


Related Tags
Facial Recognition, Python Coding, OpenCV, Machine Learning, AI Projects, Real-Time Detection, Computer Vision, Face Encoding, Video Processing, Tech Tutorial