Machine Vision Basics 05 - Image Processing
Summary
TL;DR: This script delves into the fundamentals of machine vision, focusing on feature detection and edge detection. It explains the importance of contrast for accurate object differentiation and the rule-of-thumb minimum of 20% contrast for effective feature detection. The video illustrates how machine vision cameras inspect objects on a conveyor belt and how algorithms analyze pixel contrast to identify features and edges. It also touches on sub-pixel accuracy and the practical considerations for setting up machine vision systems for reliable measurements.
Takeaways
- 🔍 Feature Detection: The script discusses feature detection in machine vision, which is based on contrast and is used for finding features on products for inspections such as presence/absence, size, flaw and stain detection, and color uniformity.
- 📸 Camera Setup: It describes a camera setup with backlighting, where a red square object is used to illustrate the importance of contrast in machine vision systems for accurate differentiation.
- 🔆 Contrast Importance: The necessity of good contrast for better image quality and more accurate results is highlighted, with a rule of thumb being a minimum of 20% contrast for good differentiation.
- ⚫️ Black and White Contrast: The script points out that black and white offer the most contrast, and as shades of grey are introduced, the contrast decreases.
- 📊 Contrast in Pixels: It explains how contrast-sensing algorithms work at the pixel level, and how a feature moving by a fraction of a pixel spreads its original contrast over multiple pixels.
- 🔬 Threshold Setting: The process of setting thresholds in vision software to detect features by finding pixels within certain gray level ranges is described.
- 📐 Edge Detection: The script covers edge detection, which is used for measuring applications and is not based on gray levels, with examples of how it works on a robot inspecting wheels.
- 📈 Gray Level Profile: It explains how the gray level profile is used to identify points of sharp contrast change in edge detection and how it helps in setting thresholds for pass/fail decisions.
- 🔎 Sub-Pixel Accuracy: The discussion includes achieving sub-pixel accuracy in machine vision through telecentric optics and collimated backlighting, with ±0.05 pixel possible under ideal conditions and ±0.5 pixel as a conservative real-world value.
- 🛠️ Testing for Repeatability: The script emphasizes that to ensure repeatability and feasibility of a machine vision application, parts must be tested with cameras, lighting, and software to obtain consistent results.
- 📈 Edge Window Algorithm: It concludes with an explanation of the edge-window algorithm, which collects multiple sub-pixel samples along a detected edge for precise calculation of the sub-pixel edge location.
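The single-pixel movement effect mentioned in the takeaways above (a high-contrast feature shifting half a pixel and spreading its contrast over four pixels) can be sketched as a toy simulation. This is an illustrative model, not code from the video; the function name and the area-weighting assumption are mine.

```python
# Illustrative sketch (not from the video): how a sub-pixel diagonal shift
# spreads a single dark pixel's contrast over four neighboring pixels.

def shifted_pixel_values(feature_gray, background_gray, dx, dy):
    """Area-weighted gray levels of the four pixels a unit-square
    feature overlaps after shifting by (dx, dy), with 0 <= dx, dy < 1."""
    coverages = [
        (1 - dx) * (1 - dy),  # original pixel
        dx * (1 - dy),        # right neighbor
        (1 - dx) * dy,        # bottom neighbor
        dx * dy,              # diagonal neighbor
    ]
    # Each pixel's gray level mixes the feature and background in
    # proportion to how much of the pixel the feature covers.
    return [feature_gray * c + background_gray * (1 - c) for c in coverages]

# A black feature (0) on a bright background (100), shifted half a pixel
# diagonally: each of the four pixels now reads 75, matching the video's
# example of the contrast being diluted across pixels.
print(shifted_pixel_values(0, 100, 0.5, 0.5))  # -> [75.0, 75.0, 75.0, 75.0]
```

This also motivates the 2×2 rule of thumb: a feature spanning two pixels in each direction always fully covers at least one pixel, so at least one pixel keeps the full contrast no matter where the feature lands.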
Q & A
What is feature detection in machine vision?
-Feature detection in machine vision is a method based on contrast detection used for finding features on a product. It can be applied to tasks such as presence/absence checks, size inspection, flaw and stain detection, and color uniformity checks.
How is the camera setup typically used in machine vision for inspecting objects on a conveyor belt?
-The camera is often set up on one side, looking at objects as they move down a conveyor belt, with a backlight to enhance contrast and make features on the objects more distinguishable.
What is the importance of contrast in machine vision applications?
-Contrast is crucial because it allows the vision system to differentiate between features on an object. Better contrast results in a clearer image and more accurate inspection outcomes.
What is the minimum contrast typically needed for good differentiation between features in a machine vision system?
-A minimum of 20% contrast is generally needed for good differentiation between features in a machine vision system.
How does the contrast affect the grayscale values in an image?
-High contrast results in more distinct grayscale values, with complete black being 0 and complete white being 255. As contrast decreases, the grayscale values become more similar, indicating less differentiation between features.
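The 20% rule of thumb can be made concrete with a small sketch. This is an illustrative helper, assuming contrast is expressed as the gray-level difference as a percentage of the full 8-bit range (0-255); the function names are mine, not from the video.

```python
# Hedged sketch: expressing contrast between a feature and its background
# as a percentage of the full 8-bit gray-level range (0 = black, 255 = white).

def contrast_percent(feature_gray, background_gray, full_scale=255):
    """Absolute gray-level difference as a percentage of full scale."""
    return abs(feature_gray - background_gray) / full_scale * 100

def enough_contrast(feature_gray, background_gray, minimum=20.0):
    """Apply the video's rule of thumb: at least 20% contrast is needed
    for good differentiation between features."""
    return contrast_percent(feature_gray, background_gray) >= minimum

print(contrast_percent(0, 255))   # black on white -> 100.0 (maximum contrast)
print(enough_contrast(120, 140))  # two similar grays -> False (~7.8% contrast)
```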
What is edge detection in the context of machine vision?
-Edge detection is a method used for finding edges in images, which is mainly used for measuring applications and feature detection that is not gray-level based, such as dimension measurement, positioning, and orientation.
How does the edge detection algorithm work in machine vision?
-The edge detection algorithm identifies points where the image contrast changes sharply, comparing the grayscale value of each pixel along the region of interest to its neighbors to determine the rate of change and the steepness of the gradient.
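The neighbor-comparison idea described above can be sketched as a one-dimensional gradient scan. Real edge tools are considerably more elaborate; this minimal version only shows the core step of differencing successive gray levels along a line of pixels and picking the steepest change.

```python
# Hedged sketch of edge detection along a 1-D gray-level profile:
# difference each pixel against its neighbor and take the steepest change.

def edge_index(profile):
    """Return (index, gradient) of the steepest gray-level change.
    A positive gradient means dark-to-bright; negative, bright-to-dark."""
    gradients = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    steepest = max(range(len(gradients)), key=lambda i: abs(gradients[i]))
    return steepest, gradients[steepest]

# A bright region falling sharply into a dark region: the edge is found
# at the largest drop between neighboring pixels.
profile = [200, 198, 195, 60, 10, 8, 9]
print(edge_index(profile))  # -> (2, -135): a bright-to-dark edge
```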
What is the purpose of using sub-pixel algorithms in machine vision?
-Sub-pixel algorithms are used to enhance the accuracy of edge detection by examining the gray levels of adjacent pixels around the measurement endpoint and interpolating the location of the actual edge to a fraction of a pixel.
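The interpolation step can be illustrated with a minimal sketch, assuming the simplified mixing model the video describes: the edge pixel's gray level is a linear mix of its two neighbors' gray levels, so the mix ratio locates the edge within the pixel. The function name is illustrative.

```python
# Hedged sketch of sub-pixel interpolation: treat the edge pixel's gray
# level as a linear mix of its left and right neighbors, and solve for
# the mixing fraction to locate the edge inside the pixel.

def subpixel_offset(left_gray, edge_gray, right_gray):
    """Fraction (0..1) of the edge pixel belonging to the left-side region."""
    return (edge_gray - right_gray) / (left_gray - right_gray)

# Left region bright (200), right region dark (40); the edge pixel reads
# 120, an even mix, so the edge sits at the pixel's midpoint.
print(subpixel_offset(200, 120, 40))  # -> 0.5
```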
How can the accuracy of a machine vision system be influenced by the choice of optics and lighting?
-The use of telecentric optics and a collimated backlight makes higher-accuracy measuring possible. Using other optics or lighting techniques will influence the system's accuracy.
What is the best way to determine the repeatability and feasibility of a machine vision application?
-The best way to determine repeatability and feasibility is to set up the application with the actual parts, optics, lighting, and software, then record and analyze the results over several hundred triggers on the same static part to see whether consistent results are achieved that meet the application's needs.
What is the role of the region of interest (ROI) in machine vision inspection?
-The region of interest (ROI) is the area within the field of view that the vision software evaluates to determine the amount of dark or bright pixels against a threshold, which can then be used to pass or fail an inspection based on predefined limits.
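The ROI pass/fail logic described above can be sketched in a few lines. This is an illustrative toy, assuming the ROI is given as a flat list of gray levels and that "dark" means falling inside a lower threshold range; the parameter names and limits are mine.

```python
# Hedged sketch of the ROI check: count pixels whose gray level falls in
# the "dark" threshold range and compare the count against a limit.

def inspect_roi(roi_pixels, lower=0, upper=80, max_dark=5):
    """Pass/fail based on how many ROI pixels fall in [lower, upper]."""
    dark_count = sum(lower <= p <= upper for p in roi_pixels)
    verdict = "FAIL" if dark_count > max_dark else "PASS"
    return verdict, dark_count

# Six pixels fall in the dark range, exceeding the limit of five.
roi = [250, 245, 10, 12, 240, 8, 255, 9, 5, 11, 248]
print(inspect_roi(roi))  # -> ('FAIL', 6)
```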
Outlines
🔍 Fundamentals of Machine Vision: Feature Detection and Contrast
The first paragraph introduces the basics of machine vision applications, focusing on feature detection methods. It emphasizes the importance of contrast in identifying features such as edges, flaws, and color uniformity in products. The setup of a machine vision system using a camera and backlight is described, illustrating how contrast helps in distinguishing features with a minimum of 20% difference for effective differentiation. The paragraph also explains how the camera detects features by looking for pixels with significant contrast to their background and how this affects the accuracy of the vision system. A visual representation of contrast levels and their impact on pixel values is provided, along with an example of how feature detection can be affected by the movement of a feature across pixels.
📏 Thresholding and Edge Detection in Machine Vision
The second paragraph delves into the technical aspects of thresholding and edge detection in machine vision. It explains how vision software uses threshold ranges to differentiate between dark and bright pixels, with the region of interest being evaluated for the number of dark and bright pixels. The concept of edge detection is explored, describing how the algorithm identifies points of sharp contrast change and measures the gray level of each pixel along the region of interest. The paragraph also discusses the accuracy of edge detection, mentioning that sub-pixel accuracy can be achieved under certain conditions, such as the use of telecentric optics and collimated backlight. The importance of testing and recording results for repeatability in machine vision applications is highlighted, with a brief mention of sub-pixel algorithms and their role in precise measurement.
🔬 Advanced Edge Detection Techniques and Sub-Pixel Accuracy
The third paragraph provides a deeper look into advanced edge detection techniques, specifically the sub-pixel edge detection algorithm. It describes how multiple sub-pixel samples are collected along the detected edge and how these samples are used to calculate the precise location of the edge. The paragraph explains the process of generating sub-pixel samples by examining individual lines of pixels across a detected edge and how the algorithm uses gray level information to interpolate the actual edge location. The importance of understanding the edge pixel's gray level in relation to adjacent pixels is emphasized, and a simplified example is provided to illustrate how the algorithm determines the sub-pixel location of an edge. The paragraph concludes with a general overview of the factors that influence the accuracy of edge detection, including the setup of the vision system and the software used.
Keywords
💡Feature Detection
💡Contrast Detection
💡Edge Detection
💡Contrast
💡Threshold
💡Region of Interest (ROI)
💡Gray Level
💡Sub-Pixel
💡Backlight
💡Machine Vision Software
💡Repeatability
Highlights
Feature detection in machine vision is based on contrast detection for finding features on a product, which is used in applications such as presence/absence, size inspection, flaw detection, and color uniformity inspection.
Camera setup involves backlighting to enhance contrast, which is crucial for accurate feature detection in machine vision applications.
A minimum contrast of 20% is typically needed for good differentiation between features in machine vision systems.
Contrast is measured by the difference between dark and bright areas, with higher contrast leading to better vision system performance.
Edge detection is used for measuring applications and is not based on grey level, which is essential for dimension measurement, positioning, and orientation.
Robotic integration with machine vision cameras allows for dynamic inspection of objects, such as wheels, by moving the camera around to inspect different areas.
The importance of contrast is emphasized for accurate image capture and inspection of objects in machine vision systems.
Contrast-sensing algorithms can detect even a single high-contrast pixel, but when a feature moves slightly, its original contrast is spread over multiple pixels, each retaining only part of it.
A two by two pixel feature is guaranteed to cover at least one full pixel, ensuring repeatable detection in machine vision applications.
Thresholds in machine vision applications are set based on feedback from the software, determining the acceptable range of grey levels for features.
Edge detection algorithms identify points of sharp image contrast changes and measure the grey level of each pixel along the region of interest.
Sub-pixel algorithms provide higher accuracy in edge detection by examining the grey levels of adjacent pixels and interpolating the location of the actual edge.
The use of telecentric optics and a collimated backlight enhances the accuracy of higher-precision measuring in machine vision systems.
Accuracy in machine vision is influenced by the choice of optics and lighting techniques, which can affect the system's performance.
True repeatability in machine vision applications is determined by setting up the application, testing parts, and recording consistent results over multiple trials.
Sub-pixel edge detection involves collecting multiple samples along the detected edge and using these to calculate the precise location of the edge.
Edge detection algorithms compare the value of each pixel along the edge to its neighbor to determine the gradient, indicating the steepness of the edge.
The process of finding edges in machine vision involves examining individual lines of pixels across a detected edge to generate sub-pixel samples.
Transcripts
These slides get into application-based material: the basic methods of machine vision. One is feature detection. This method is based on contrast detection and is used for finding features on a product, for example presence/absence, size inspection, flaw and stain inspection, color uniformity, and so on. On the right you can see how the camera is set up on the left-hand side, looking at objects going down a conveyor, and there is actually a backlight behind them; the red square object is meant to be shown against that backlight. The other method is edge detection. This is mainly used for finding edges in measuring applications, and for feature detection that is not gray-level based, such as dimension measurement, positioning, and orientation. In that picture there is a machine vision camera mounted on the end of a robot arm, and the robot is moving the camera around to inspect different areas of wheels.

When we detect the pixels of a feature, those pixels must have contrast with their background. Getting contrast is one of the most important things when you are looking at an object: with better contrast you get a better image and more accurate results when inspecting your objects. The bigger the difference between dark and bright, the better the contrast and the better the vision system is able to differentiate them. A rule of thumb is that you want a minimum of 20% contrast; that is typically needed for good differentiation between features. The little chart below shows that black and white give you the most contrast, and as you move down the line and the grays start to look similar, you have less and less contrast.

I showed this slide earlier, but it reiterates a few things we are about to cover. On the bottom left is a sample measurement program we were running. The middle image shows what it looks like when you zoom in, and the big image on the right is zoomed in as far as you can go. In that image you can see gray levels of 0, 15, 40, and 65. As we went over earlier, complete black is 0 and 255 is supposed to be solid white; on my screen the white looks a bit gray, which I think is due to the background of the template, but it is supposed to be white. The image on the right shows how the numbers go up as you move out from completely black into the grays, and you can look at these values in the machine vision software.

A contrast-sensing algorithm is able to detect a single pixel with good contrast. But if that pixel moves half a pixel diagonally, the same feature is now spread over four pixels, each holding a little part of the original contrast. In the left-hand image you have a completely black pixel, a 0, with 100s all around it; after the diagonal move, the black is spread over four pixels, and each of those pixels goes up from completely black toward gray, up to a 75. So what is the smallest feature that can be repeatably detected? We say that a two-by-two-pixel feature is always guaranteed to completely cover at least one full pixel.

This is not something we usually get deep into, but when we are setting the thresholds for an application on a good reference image, we can look at these numbers and make a starting estimate of what the threshold should be, depending on the feedback we get from the machine vision software. The most common way for vision software to detect a feature, whether a stain, a defect, or whatever we are looking for, is to find pixels within certain gray-level ranges: your threshold. A threshold range is set between a lower and an upper gray-level threshold. Bright pixels are defined as pixels from the upper threshold up to 100% gray level, and dark pixels, as we went over earlier, run down to zero. In the image you can tell that on the left the dark range is highlighted in yellow, and on the right the bright range is in yellow; that is just showing the difference between dark and bright. In the image below, when we check for dark pixels, the vision software finds them, and we learn their gray level and position in the image. The blue square inside the field of view is our region of interest in the software. We can evaluate the number of dark pixels against the bright pixels inside the region of interest and put a limit on it; if the number of dark pixels exceeds that limit for whatever we are inspecting, the inspection fails. That is basically how we go about getting a good pass or a fail.

On to edge detection. This algorithm identifies the points where the image contrast changes sharply; it measures the gray level of each pixel along the region of interest. In this edge-detection example the profile starts out gray on the left-hand side; when we get into the black part, the gray-level line drops down to zero; back into the gray, the line profile comes back up; into black again, it goes back down to zero; and back into gray it comes up again, to a little over 60 by the look of it. This is how we go about setting thresholds on objects to decide what we want to pass or fail, and it is based on this edge detection.

Here is the gray-line profile enhanced for a better view. It indicates the rate of change: the algorithm compares the value of each pixel along the edge to its neighbor. A greater value indicates a positive gradient, dark to bright; a lesser value indicates a negative gradient, bright to dark. The difference between those two lines indicates the steepness of the gradient, and as you could tell from the previous slide, the gradient line profile changes depending on whatever object you are looking at.

As for accuracy, the edge-detection algorithm usually works to within plus or minus 1 pixel. With a sub-pixel algorithm, accuracies of plus or minus one twentieth of a pixel, 0.05 pixel, can be obtained, but plus or minus 0.5 pixel is a more conservative value in a real application with a real camera, optics, and lighting situation. The use of telecentric optics and a collimated backlight makes higher-accuracy measuring possible; using other optics or lighting techniques will influence the accuracy of the system. Honestly, sub-pixeling should really just be considered insurance. The only true way to know how repeatable a measurement application will be is to actually set up the application with the optics, lighting, handling, and software, record the results over several hundred triggers on the same static part, and then compare the values you got and go from there. A lot of times people ask about machine vision applications, and you can run the numbers to decide whether the application looks doable or not; but like I just said, to prove repeatability and to know the application is actually feasible, we have to get some parts, test them out, set up the cameras and lighting, go through the software, and run the test several times to see whether we can get consistent results that meet your application needs.

Now for sub-pixel. Here is a quick image of a sub-pixel algorithm: it examines the gray levels of the adjacent pixels around the measurement endpoint and uses that information to interpolate the location of the actual edge to a fraction of a pixel. Here is maybe a better view of it. The edge-window algorithm collects multiple sub-pixel samples along the detected edge, on both sides of the initial detection point. The area along the edge that the window covers is determined by the sub-pixel settings in the tool, and those settings allow for a precise calculation of the sub-pixel edge. The algorithm generates each sub-pixel sample by examining an individual line of pixels across the detected edge. When we are doing edge detection, this is getting into the detail of what is going on in the software: where these numbers come from, where you get your detected edges, your edge windows, and your regions of interest.

This is also called best-fit edge finding; it is just another example of edge detection and what goes into it. To find the sub-pixel location of an edge within a single pixel, you can compare the gray level of the edge pixel to the gray levels of the pixels on either side of it. In the simplified example shown below, the edge pixel's gray level is a mix of the left-hand pixel's gray level and the right-hand pixel's gray level, and the proportions of the mix depend on the location of the edge within the edge pixel. So pretty much, the gray levels at your edges are determined by where your lines are, where your edges are, what your regions of interest are, and what your pixel counts are. And here is the algorithm for how the software goes about looking for edges from the gray levels. This is nothing we really need to get into the numbers of, but these are some of the numbers and some of the feedback we get from the machine vision software: where they come from and how we get them.
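The edge-window procedure described in the transcript (take a sub-pixel sample on each scan line crossing the edge, then combine them) can be sketched as follows. This is a minimal illustration assuming a simple linear mixing model and a plain average of the samples; the function names and parameters are illustrative, not the actual tool's API.

```python
# Hedged sketch of the edge-window idea: estimate a sub-pixel edge position
# on each scan line crossing the edge, then average the samples.

def subpixel_offset(left_gray, edge_gray, right_gray):
    """Fraction (0..1) of the edge pixel belonging to the left-side region."""
    return (edge_gray - right_gray) / (left_gray - right_gray)

def edge_window_position(scan_lines, edge_col):
    """Average sub-pixel edge position over several scan lines, where
    edge_col is the column of the detected edge pixel on each line."""
    samples = []
    for line in scan_lines:
        left, edge, right = line[edge_col - 1], line[edge_col], line[edge_col + 1]
        samples.append(edge_col + subpixel_offset(left, edge, right))
    return sum(samples) / len(samples)

lines = [
    [200, 160, 40],  # edge pixel mostly bright: edge sits late in the pixel
    [200, 120, 40],  # even mix: edge at the pixel midpoint
    [200,  80, 40],  # edge pixel mostly dark: edge sits early in the pixel
]
print(edge_window_position(lines, 1))  # -> 1.5
```

Averaging several samples is what lets the combined estimate be more precise than any single line's estimate, which is the point of collecting samples on both sides of the initial detection point.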