Directional Motion Detection and Tracking using Webcam in TouchDesigner #touchdesigner #tutorial
Summary
TLDR: In this video, Dan demonstrates how to use optical flow with a webcam to detect movement direction, creating CHOP channels for triggering events based on left, right, up, and down movements. This technique is useful for interactive installations or for controlling visuals without dedicated sensors. Dan walks through the process, from setting up the optical flow to refining the output into a clean direction signal, and suggests adjusting parameters based on lighting conditions. The tutorial also promotes Dan's Patreon for additional resources and components.
Takeaways
- 🎥 Dan demonstrates how to use a webcam with optical flow to track movement direction for triggering events in TouchDesigner.
- 👐 The process does not require special sensors, making it accessible for those without them.
- 🔍 Optical flow is used to determine the direction of movement for each pixel in the webcam's incoming texture.
- 📏 The tutorial isolates left-right movement by keeping only the red channel with a Reorder TOP.
- ↔️ Two Threshold TOPs separate the movement into positive (right) and negative (left) directions.
- 🔢 Analyze TOPs sum the pixels moving in each direction, and the two totals are subtracted.
- 📉 To reduce noise, a Filter CHOP smooths the data and a Trail CHOP visualizes it, making the movement detection more reliable.
- 📊 A Logic CHOP defines bounds that determine when the movement is significant enough to trigger an action.
- 🔄 The final direction comes from comparing the counts of positive and negative pixels, with a sign function used to collapse the output to -1 or 1.
- 🛠️ The setup can be adjusted with parameters such as the optical flow threshold and bounds to optimize performance under different lighting conditions.
- 🔄 The same method can be applied to the green channel to detect up and down movements, with the final output being a combination of both directional movements.
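Since TouchDesigner is node-based, the network described above has no single code listing, but the red-channel branch can be sketched as a NumPy analogue (function and parameter names are illustrative, not from the video; the bound of 500 is the example value mentioned later):

```python
import numpy as np

def horizontal_direction(flow_x, bound=500.0):
    """Reduce a per-pixel horizontal flow field to -1, 0, or 1.

    flow_x: 2D array of per-pixel horizontal motion (the 'red channel').
    bound:  noise floor below which movement is ignored.
    """
    neg = np.sum(np.abs(flow_x[flow_x <= 0]))  # leftward energy (Threshold + Analyze)
    pos = np.sum(flow_x[flow_x >= 0])          # rightward energy (Threshold + Analyze)
    net = pos - neg                            # the Math CHOP subtraction
    if abs(net) < bound:                       # the Logic CHOP bounds
        return 0.0
    return float(np.sign(net))                 # the Function CHOP sign
```

Calling this once per frame on the optical flow's red channel would yield the same kind of clean left/right signal the video builds with TOPs and CHOPs.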
Q & A
What is the main purpose of the component being built in the video?
-The main purpose of the component is to detect movement direction using optical flow on a webcam input and generate CHOP channels that can trigger events based on the movement direction.
What is Optical Flow and how does it help in this context?
-Optical Flow is a method that determines the direction of movement for each pixel of an incoming texture. It helps by providing the direction of movement for the objects within the webcam's view, which can be used to trigger events or control visuals.
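As a rough intuition for what optical flow computes, here is a toy brute-force shift estimator in NumPy (purely illustrative; the palette component uses a proper GPU optical flow algorithm, not this):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate the dominant horizontal shift between two frames.

    A toy stand-in for optical flow: tries every horizontal offset
    and keeps the one that minimizes the pixel difference.
    """
    best, best_err = 0, float("inf")
    for dx in range(-max_shift, max_shift + 1):
        shifted = np.roll(prev, dx, axis=1)       # candidate motion hypothesis
        err = np.mean((shifted - curr) ** 2)      # how well it explains the new frame
        if err < best_err:
            best, best_err = dx, err
    return best  # positive = content moved right, negative = moved left
```

Real optical flow does this per pixel (and in both axes), which is exactly why the component outputs a signed value in the red and green channels of every pixel.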
How can the optical flow component be accessed in the video?
-The optical flow component can be accessed by opening the palette with Ctrl+B and selecting 'Optical Flow' under the Tools section.
What does the Optical Flow component provide in terms of movement detection?
-The Optical Flow component provides the direction of movement for each pixel, encoded in red and green channels, where red indicates left-right movement and green indicates up-down movement.
How does the video script describe the process of isolating movement direction?
-The script isolates the movement direction by using a Reorder TOP to keep only the red channel for left-right movement, applying thresholds to separate negative and positive pixel values by direction, and then summing these pixels to determine the overall movement direction.
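The Reorder-plus-Threshold step can be mimicked in NumPy (an illustrative sketch, not the actual TOP implementation):

```python
import numpy as np

def split_directions(red):
    """Mimic the two Threshold TOPs: separate a signed flow channel
    into a leftward (<= 0) branch and a rightward (>= 0) branch."""
    left = np.where(red <= 0, red, 0.0)   # keep only zero/negative values
    right = np.where(red >= 0, red, 0.0)  # keep only zero/positive values
    return left, right
```

Summing each branch afterwards plays the role of the two Analyze TOPs in the video.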
What is the purpose of the threshold node in the script?
-The Threshold TOPs filter pixels by their movement direction, separating the pixels that indicate movement to the left (negative values) from those indicating movement to the right (positive values).
How are the counts of positive and negative pixels used to determine the overall movement direction?
-The counts of positive and negative pixels are converted to CHOP data and then subtracted from each other to get a value that indicates the overall movement direction, with positive values suggesting rightward movement and negative values suggesting leftward movement.
What is the role of the trail node in the script?
-The Trail CHOP visualizes the signal over time, making it easier to see how the direction value responds to movement; a Filter CHOP placed before it smooths the data and reduces noise.
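A Filter CHOP's smoothing role can be approximated with a simple moving average (an illustrative stand-in, not the CHOP's actual filter kernel):

```python
from collections import deque

class TrailFilter:
    """A small moving-average smoother, standing in for the Filter CHOP
    used to calm the jittery per-frame pixel counts (illustrative)."""

    def __init__(self, window=10):
        self.buf = deque(maxlen=window)  # keeps only the last `window` samples

    def __call__(self, value):
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)  # average over the window
```

Feeding each frame's net count through an instance of this class yields a signal much closer to the smooth trace shown in the Trail CHOP.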
How can the final direction signal be cleaned up for a more reliable output?
-The final direction signal can be cleaned up by using a Logic CHOP to define bounds that filter out noise, and by applying a Blur TOP to smooth the data before processing.
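The Logic CHOP's bounds test reduces to a single comparison; a minimal sketch (the bound of 500 is the example value from the video):

```python
def outside_bounds(value, low=-500.0, high=500.0):
    """Mimic the Logic CHOP's 'on when value is outside bounds' mode:
    1 when the smoothed count escapes the noise band, else 0."""
    return 1 if (value < low or value > high) else 0
```

Multiplying this gate by the sign of the count is what turns the raw, always-jittering value into a signal that is zero except during real movement.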
What is the significance of the sign function in determining the final direction?
-The sign function converts the direction signal into a simple -1 or 1 value, where -1 represents leftward movement and 1 represents rightward movement, simplifying the output for use in controlling visuals or triggering events.
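The sign step itself is trivial; a Python sketch of what the Function CHOP's sign operation does:

```python
def direction_sign(value):
    """Collapse the net pixel count to -1 (left), 0 (none), or 1 (right),
    as the Function CHOP's sign() does in the video."""
    return (value > 0) - (value < 0)  # Python idiom for sign without branches
```

Because the magnitude of the count depends on lighting and resolution, throwing it away and keeping only the sign is what makes the output portable between setups.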
How can the optical flow threshold be adjusted to improve the component's performance?
-The optical flow threshold can be adjusted to exclude minor movements or to focus on more significant movement, helping to reduce noise and improve the reliability of the direction signal.
What is the suggested method for controlling the circle's movement in the video?
-The circle's movement is controlled by using a Speed CHOP with a limited range between -0.5 and 0.5, and by offsetting the circle left or right based on the detected direction signal.
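The Speed CHOP behaviour, integrating the direction signal into a clamped offset, might be sketched like this (the rate and timestep are illustrative; the video only specifies the -0.5 to 0.5 limit range):

```python
def step_position(pos, direction, rate=0.5, dt=1 / 60, lo=-0.5, hi=0.5):
    """Integrate a -1/0/1 direction signal into a clamped position offset,
    analogous to a Speed CHOP with min/max limits (illustrative values)."""
    pos += direction * rate * dt        # accumulate velocity over one frame
    return max(lo, min(hi, pos))        # clamp like the Speed CHOP's limits
```

Called once per frame with the merged direction channel, this drives the circle's horizontal translate and stops it at the edges instead of letting it drift off screen.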
How can the component's parameters be customized for different lighting conditions?
-The parameters that may need to be adjusted for different lighting conditions include the optical flow threshold, the bounds for noise filtering, and potentially the lambda value on the optical flow component, although its specific function is not detailed in the video.
Outlines
🎥 Introduction to Optical Flow for Movement Detection
Dan introduces a project that uses optical flow to detect movement direction from a webcam feed without any special sensors. The goal is to create a component that outputs CHOP channels based on movement direction, which can be used for triggering events in interactive installations or controlling visuals. He mentions that a pre-configured component with parameters mapped out is available on his Patreon, where he is more active than on YouTube. The process begins with an NDI In TOP for the webcam image and an optical flow component that gives the direction of movement for each pixel.
🔍 Isolating and Analyzing Movement Directions
The video explains the process of isolating the red channel from the optical flow data to focus on left and right movements. Thresholds are applied to separate negative and positive pixel values, representing the two movement directions. Analyze TOPs sum these pixels, and the results are converted to CHOPs. The two totals are subtracted to get a net direction value, which is then filtered and displayed in a Trail CHOP for inspection. The aim is to reduce the noisy per-pixel data to a clean left-right direction signal.
📉 Refining the Direction Signal and Controlling Visuals
The script continues with refining the direction signal by using a Logic CHOP to define bounds for noise and a Blur TOP for data smoothing. The sign function converts the direction value into a clean -1 or 1 output, representing left and right movement respectively. The process is repeated for the green channel to detect up and down movements, and the results are merged to create a comprehensive direction control. The optical flow threshold can be adjusted for different lighting conditions to improve reliability, and the final component can be used to control visuals, demonstrated by moving a circle in response to hand movements.
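Note that subtracting the Analyze sum of the negative pixels from the sum of the positive ones is mathematically the same as summing the signed channel directly, so the whole two-axis reduction can be sketched compactly in NumPy (an analogue of the merged network; the bound is the example value from the video):

```python
import numpy as np

def flow_to_directions(flow, bound=500.0):
    """Reduce a (H, W, 2) flow field (x motion in 'red', y motion in
    'green') to a (left_right, down_up) pair of -1/0/1 signals.

    A NumPy analogue of the two merged CHOP branches; names and the
    bound value are illustrative.
    """
    out = []
    for axis in range(2):                      # 0 = red/x, 1 = green/y
        net = flow[..., axis].sum()            # equals pos_sum - |neg_sum|
        out.append(0.0 if abs(net) < bound else float(np.sign(net)))
    return tuple(out)
```

In TouchDesigner the per-branch form is still useful because the intermediate TOPs can be viewed while tuning, but conceptually both axes collapse to this one reduction.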
👍 Conclusion and Call to Action
In conclusion, the video demonstrates how to use optical flow to detect and control movement directions from a webcam feed. The script encourages viewers to subscribe for more content and to check out Patreon for additional resources and pre-configured parameters for the optical flow component. The video wraps up with a reminder of the practical applications of this technology in installations and the importance of adjusting parameters based on environmental conditions.
Keywords
💡Optical Flow
💡NDI
💡Threshold
💡Directional Isolation
💡Analyze
💡CHOP
💡Logic CHOP
💡Sign Function
💡Denoising
💡Telekinesis
💡Patreon
Highlights
Introduction to a method for detecting movement direction using optical flow without special sensors.
Utility in triggering events based on movement direction for interactive installations.
Explanation of using basic left, right, up, and down directions for control.
Invitation to Patreon for the component with parameters mapped out and additional resources.
Setup begins with a clean slate and an NDI In TOP for the webcam image.
Optical flow is introduced to determine the movement direction of each pixel.
Demonstration of viewing pixel values to understand movement direction.
Isolating directional movement by focusing on the red channel for left and right movement.
Use of threshold to separate movement into negative and positive directions.
Counting pixels to determine the predominance of movement direction.
Conversion of pixel counts to a single channel for clarity.
Subtraction of pixel counts to get a general direction of movement.
Introduction of a Filter CHOP and Trail CHOP for smoothing and visualizing the movement data.
Recap of the process for obtaining general movement direction from pixel data.
Refinement of the signal by setting bounds to determine when movement is significant.
Inversion of the signal for a more intuitive understanding of left and right movement.
Use of the sign function to convert movement data into a clean -1 or 1 value.
Adjustment of the optical flow threshold to reduce noise in the signal.
Application of the method to control a visual element, like a circle, in real-time.
Advice on adjusting parameters based on lighting conditions for optimal results.
Mention of Patreon for pre-configured parameters and additional resources.
Conclusion and invitation to subscribe for more content, with a reminder of Patreon offerings.
Transcripts
hey everyone this is Dan today we're
gonna move anything in TouchDesigner
using our webcam we're not going to
use anything special other than optical
flow but in the end we're gonna get some
CHOP channels out of this component that
we're gonna build
um this is very useful if you don't have
any sensors at hand and you want to
trigger events just based on the
movement direction for example if you
wanted to do something with your visuals
in your interactive installation only
when people are walking to the left or
to the right you could do stuff or as
you can see you can just control
anything with these basic left right up
and down
directions
if you would like to skip the video and
just start using this component
with all the relevant parameters mapped
out for you
then you can find it on my patreon
along with a lot of other useful stuff I
tend to be a bit more active there than
here on YouTube so consider checking it
out
alrighty let's start from a clean slate
I'm gonna put down an NDI In TOP because
that's where I have my webcam image
coming from
because OBS is also using it where I am
and we need an optical flow from the
palette the shortcut is Ctrl+B
and there under the tools you will find
Optical flow
and the optical flow
basically gives us
the direction of movement for each pixel
of the incoming texture
so if I try to
pause it here
um I'm moving my hands I guess from
right to left
and let's look at the pixel values by
activating this viewer and hitting d
so if you look at some of these pixels
you can see the red channel is in
the negative
meaning I was moving my hand from right
to left I guess or the other way around
we will see soon
and we also have some green values
because I guess
I'm moving my hands a bit
up and down also when I'm moving it to
the left and right but basically yeah
this optical flow is giving us these red
and green channels in the red channel we
have the left and right movement encoded
and in the green channel we have the up
and down encoded for each pixel and we're
gonna use all this data to get a general
direction
uh from the incoming image
let's start with only the left and right
movement so that means we need to
isolate the direction
I'm gonna do that with a Reorder TOP
I'm gonna output zero on the green and
zero on the blue and get input one's red
so now we have all the uh red Channel
stuff here which Maybe
we can actually see now
but it's there it's only red left
and right information
so we have positive and negative pixels
also depending on the direction so this
is where we're gonna mainly uh separate
the
the data so I'm going to put down a
threshold
a Threshold TOP
and in the Threshold TOP we're gonna
select less or equal
and less or equal than zero so we get
Only One Direction not the other
and we're gonna do the the same but
instead of less or equal we're gonna go
to greater or equal
so in these two structures we have now
if I'm moving my hand
to the left we have more pixels here if
I'm moving it to the right we have more
pixels here because these are the I
guess uh negative values and these are
going to be
our positive values
uh next up we're gonna count these
pixels so I'm gonna put down an Analyze
TOP
and operation uh
not count but let's sum them up
um
and it's going to give us this blinking
something
uh it's
yeah it's not pretty but it contains
the information that we need
I'm gonna put down here now we can do
that without n
like so
and convert both of these
to a CHOP
like that
let's only
take one channel out of these
great
so now if we in principle
subtract these two from each other
we will get a number which is either
positive or negative depending on the
direction
so with a Math CHOP combine CHOPs subtract
let's maybe filter this a bit because
it's jumping quite a lot
and let's put down a trail also
so that we can see what's going on
so I'm moving my hand to the left
jumped up moving my hand to the right it
jumps down
so we're
heading in the right direction
seems to be quite reliable
like so
so to quickly recap what we did here
we have the optical flow giving us for
each pixel whether that pixel is moving
to the left or to the right or not the
pixel but the the thing contained by the
pixels are moving to the left or to the
right but we have this data for each
pixel and what we want in the end is
just simple CHOP data of the general
direction
of things moving on the camera so we
isolated everything with the Reorder TOP to
the red channel meaning only the
left and the right because it
also contains up and down in the green
and we're gonna also do that in the
other Branch when we are done with this
so we have only the red channel here and
with the thresholds we can
isolate only the negative
pixels and only the positive pixels
and with the analyze we count how many
positive pixels there are and how many
negative pixels there are
we convert everything to CHOPs now
and if you subtract these two numbers
um we can get if there are more negative
or more positive numbers
and this number here is gonna give us
how many more
uh positive or negative numbers there
are
in our Optical flow
so this is cool but what I want in the
end is just
one value if things are moving to the
left or to the right I don't want this
kind of noisy data I want something
clean
so I'm gonna rename these channels or
this channel
to left right
and we have to we have to draw a line uh
where we say that okay this is now
just noise or
yeah we have to somehow
contain this data
a bit better so I'm going to put down a
Logic CHOP
and set convert input to
on when value is outside bounds
and these bounds I'm gonna
I'm gonna Define
as minus uh let's see
so now even though I'm not really moving
it's still jumping quite a lot
and we can put a blur before everything
kind of denoise
our data I think yeah that already looks
much less noisy
and let's look at some values
and move my hand to the left it jumps up
to 2000 ish
other direction also
so let's say that when this value
reaches
uh minus 500 or 500
that's when it should be on so this is
not inverted so I'm just gonna
uh Channel pre-op invert
and now
we're getting a bit more reliable signal
like that
and to get just a direction I can put
down a Function CHOP
and use the sign function
so this is gonna give us one if it's
positive minus one if it's negative
and if we now
multiply these two together
combine CHOPs multiply
we're gonna get
minus one and one minus one one
and I might want this to
be the other way around so I can
just here
multiply this by minus one
to me it makes more sense for left to be
-1 and right to be positive
I don't know why
we can do some more denoising maybe
but
yeah this could be all right we can also
change
the optical flow threshold
that can also help sometimes
to make this a bit less noisy
but I feel like
this is quite okay
yeah
cool
I like this
so we can do the same kind of deal
for the green Channel and I'm just gonna
lazily copy and paste all this
we don't need this Trail anymore
and instead of getting the red here
we're gonna get red zero and the
green from input one
and in our
yeah we can keep it like this it doesn't
matter if it's red on top too
now in principle
we have the up and down also
I'm gonna rename this to
down up
I want down to be minus one
up to be one
like so
if you look at this
and we can just merge these two together
put down a Merge CHOP now
select all these and collapse it into a
component
and let's use this data to control
something
let's do this circle
let's move the circle
so we have the circle I'm going to
increase its resolution
a little bit
decrease the radius
and
let's use a Speed CHOP
that's limited between minus 0.5 and 0.5
let's look at what happens if we
offset this
going to the right
left right it's a bit
yeah it's kind of like
telekinesis it's not
fully reliable but
I've used this to my advantage and
in an installation recently
where we didn't have any
Kinect or any sensor just a webcam
so there are values that you have to change
based on the lighting conditions because
this is just taking the camera image
so it's just light information
so depending on the lighting conditions
you have to change
the bounds potentially
or
the threshold of the optical flow I
don't recommend changing any of the
other parameters here maybe the Lambda I
don't really know what it does but yeah
I just leave it this way
so the threshold if you want to
threshold
you know some movement you want to
exclude when it's just a little bit of a
movement you can increase the threshold
but yeah based on the lighting
conditions you may wanna
change the bounds
and also you can play with the filter
to give you some better results Maybe
uh in the patreon file you will find all
the relevant parameters that you might
want to mess with already mapped for you
I'm not gonna spend time on that in this
video
if you like the content please consider
subscribing and again please check the
patreon there's lots of uh cool stuff
there
much more than here on YouTube so yeah
see you around and thanks for watching