Machine Vision Basics 05 - Image Processing

ESECOTV
8 Jun 2020 · 12:12

Summary

TL;DR: This script delves into the fundamentals of machine vision, focusing on feature detection and edge detection. It explains the importance of contrast for accurate object differentiation and the minimum 20% contrast requirement for effective feature detection. The video illustrates how machine vision cameras inspect objects on a conveyor belt and how algorithms analyze pixel contrast to identify features and edges. It also touches on sub-pixel accuracy and the practical considerations for setting up machine vision systems for reliable measurements.

Takeaways

  • 🔍 Feature Detection: The script discusses feature detection in machine vision, which is based on contrast and is used for finding features on products for various inspections like presence/absence, size, flaw/stain, and color uniformity.
  • 📸 Camera Setup: It describes a camera setup with backlighting, where a red square object is used to illustrate the importance of contrast in machine vision systems for accurate differentiation.
  • 🔆 Contrast Importance: The necessity of good contrast for better image quality and more accurate results is highlighted, with a rule of thumb being a minimum of 20% contrast for good differentiation.
  • ⚫️ Black and White Contrast: The script points out that black and white offer the most contrast, and as shades of grey are introduced, the contrast decreases.
  • 📊 Contrast in Pixels: It explains how contrast sensing algorithms work at the pixel level, and how a half-pixel movement of a feature can spread its original contrast over multiple pixels.
  • 🔬 Threshold Setting: The process of setting thresholds in vision software to detect features by finding pixels within certain gray level ranges is described.
  • 📐 Edge Detection: The script covers edge detection, which is used for measuring applications and is not based on gray levels, with examples of how it works on a robot inspecting wheels.
  • 📈 Gray Level Profile: It explains how the gray level profile is used to identify points of sharp contrast change in edge detection and how it helps in setting thresholds for pass/fail decisions.
  • 🔎 Sub-Pixel Accuracy: The discussion includes the possibility of achieving sub-pixel accuracy in machine vision through specific optics and backlighting, and the conservative value of ±0.5 pixel.
  • 🛠️ Testing for Repeatability: The script emphasizes that to ensure repeatability and feasibility of a machine vision application, parts must be tested with cameras, lighting, and software to obtain consistent results.
  • 📈 Edge Window Algorithm: It concludes with an explanation of the edge window algorithm that collects multiple sub-pixel samples for precise calculation of sub-pixel edges.

Q & A

  • What is feature detection in machine vision?

    -Feature detection in machine vision is a method based on contrast detection used for finding features on a product, which can be applied to tasks such as presence/absence checks, size inspection, flaw/stain detection, and color uniformity checks.

  • How is the camera setup typically used in machine vision for inspecting objects on a conveyor belt?

    -The camera is often set up on one side, looking at objects as they move down a conveyor belt, with a backlight to enhance contrast and make features on the objects more distinguishable.

  • What is the importance of contrast in machine vision applications?

    -Contrast is crucial because it allows the vision system to differentiate between features on an object. Better contrast results in a clearer image and more accurate inspection outcomes.

  • What is the minimum contrast typically needed for good differentiation between features in a machine vision system?

    -A minimum of 20% contrast is generally needed for good differentiation between features in a machine vision system.

  • How does the contrast affect the grayscale values in an image?

    -High contrast results in more distinct grayscale values, with complete black being 0 and complete white being 255. As contrast decreases, the grayscale values become more similar, indicating less differentiation between features.
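The 20% rule of thumb can be made concrete with a small sketch. This assumes contrast is quantified as the gray-level difference over the full 0–255 range; the video does not give an exact formula, so treat this as an illustration only:

```python
def contrast_pct(dark: int, bright: int) -> float:
    """Contrast between two gray levels as a percentage of the full
    8-bit range (0 = black, 255 = white)."""
    return (bright - dark) / 255 * 100

print(contrast_pct(0, 255))    # 100.0 -- black vs. white, maximum contrast
print(contrast_pct(110, 140))  # ~11.8 -- below the ~20% rule of thumb
```

By this measure, two mid-grays 30 levels apart fall well short of the 20% needed for reliable differentiation.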

  • What is edge detection in the context of machine vision?

    -Edge detection is a method used for finding edges in images, which is mainly used for measuring applications and feature detection that is not gray-level based, such as dimension measurement, positioning, and orientation.

  • How does the edge detection algorithm work in machine vision?

    -The edge detection algorithm identifies points where the image contrast changes sharply, comparing the grayscale value of each pixel along the region of interest to its neighbors to determine the rate of change and the steepness of the gradient.
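The neighbor comparison described above can be sketched in a few lines (a simplified illustration of the idea, not the actual vision software's algorithm; the sample profile values are made up to mimic a gray→black→gray scan line):

```python
def gradient_profile(grays):
    """Difference of each pixel's gray level to its neighbor along the
    region of interest; positive = dark-to-bright, negative = bright-to-dark."""
    return [grays[i + 1] - grays[i] for i in range(len(grays) - 1)]

# Gray -> black -> gray, like the profile described in the video.
profile = [60, 60, 58, 10, 0, 0, 55, 62]
grad = gradient_profile(profile)  # [0, -2, -48, -10, 0, 55, 7]

# The steepest negative step marks the falling (bright-to-dark) edge,
# the steepest positive step the rising (dark-to-bright) edge.
falling = min(range(len(grad)), key=lambda i: grad[i])  # index 2
rising = max(range(len(grad)), key=lambda i: grad[i])   # index 5
```

The magnitude of each difference reflects the steepness of the gradient at that point, which is what the threshold for an edge is set against.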

  • What is the purpose of using sub-pixel algorithms in machine vision?

    -Sub-pixel algorithms are used to enhance the accuracy of edge detection by examining the gray levels of adjacent pixels around the measurement endpoint and interpolating the location of the actual edge to a fraction of a pixel.
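The interpolation step can be illustrated with the simplified mixing model the video describes, where the edge pixel's gray level is a linear blend of its two neighbors. The function name and sign convention here are illustrative assumptions:

```python
def subpixel_edge(left_gray, edge_gray, right_gray):
    """Fraction of the edge pixel covered by the left-hand material,
    assuming the edge pixel's gray level is a linear mix:
        edge_gray = f * left_gray + (1 - f) * right_gray
    """
    return (right_gray - edge_gray) / (right_gray - left_gray)

# Edge pixel reads 75 between a black (0) and a bright (100) neighbor:
# the dark side covers 1/4 of the pixel, so the edge sits 0.25 px in.
print(subpixel_edge(0, 75, 100))  # 0.25
```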

  • How can the accuracy of a machine vision system be influenced by the choice of optics and lighting?

    -The use of telecentric optics and a collimated backlight can make higher accuracy measuring possible. However, using other optics or lighting techniques can affect the system's accuracy.

  • What is the best way to determine the repeatability and feasibility of a machine vision application?

    -The best way to determine repeatability and feasibility is to set up the application with the actual parts, optics, lighting, and software, then record and analyze the results over several hundred triggers on the same static part to see if consistent results are achieved that meet the application needs.

  • What is the role of the region of interest (ROI) in machine vision inspection?

    -The region of interest (ROI) is the area within the field of view that the vision software evaluates to determine the amount of dark or bright pixels against a threshold, which can then be used to pass or fail an inspection based on predefined limits.
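A minimal sketch of the ROI evaluation described above. The threshold and limit values here are made up for illustration; in practice they come from the feedback the vision software gives on a known-good image:

```python
def inspect_roi(roi, dark_threshold=50, max_dark=3):
    """Count pixels at or below the dark threshold inside the region of
    interest; fail the inspection when the count exceeds the limit."""
    dark = sum(1 for row in roi for px in row if px <= dark_threshold)
    return ("PASS" if dark <= max_dark else "FAIL"), dark

roi = [  # bright background with a cluster of dark (defect) pixels
    [200, 198, 40, 205],
    [201, 35, 30, 199],
    [197, 202, 25, 203],
]
result, count = inspect_roi(roi)  # 4 dark pixels > limit of 3 -> "FAIL"
```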

Outlines

00:00

🔍 Fundamentals of Machine Vision: Feature Detection and Contrast

The first paragraph introduces the basics of machine vision applications, focusing on feature detection methods. It emphasizes the importance of contrast in identifying features such as edges, flaws, and color uniformity in products. The setup of a machine vision system using a camera and backlight is described, illustrating how contrast helps in distinguishing features with a minimum of 20% difference for effective differentiation. The paragraph also explains how the camera detects features by looking for pixels with significant contrast to their background and how this affects the accuracy of the vision system. A visual representation of contrast levels and their impact on pixel values is provided, along with an example of how feature detection can be affected by the movement of a feature across pixels.
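The half-pixel-shift example from this section can be reproduced numerically. The values 0 and 100 are the ones used in the video's illustration:

```python
def shift_half_pixel(feature=0, background=100):
    """A single-pixel feature shifted half a pixel diagonally covers one
    quarter of each of four pixels, so each pixel reads a 1/4-3/4 blend
    of the feature's gray level and the background's."""
    return 0.25 * feature + 0.75 * background

# Black pixel (0) on a background of 100s, as in the video's example:
print(shift_half_pixel())  # 75.0 in each of the four pixels
```

The contrast per pixel drops from 100 levels to 25, which is why a two-by-two-pixel feature is the smallest that is guaranteed to keep one fully covered pixel.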

05:00

📏 Thresholding and Edge Detection in Machine Vision

The second paragraph delves into the technical aspects of thresholding and edge detection in machine vision. It explains how vision software uses threshold ranges to differentiate between dark and bright pixels, with the region of interest being evaluated for the number of dark and bright pixels. The concept of edge detection is explored, describing how the algorithm identifies points of sharp contrast change and measures the gray level of each pixel along the region of interest. The paragraph also discusses the accuracy of edge detection, mentioning that sub-pixel accuracy can be achieved under certain conditions, such as the use of telecentric optics and collimated backlight. The importance of testing and recording results for repeatability in machine vision applications is highlighted, with a brief mention of sub-pixel algorithms and their role in precise measurement.

10:01

🔬 Advanced Edge Detection Techniques and Sub-Pixel Accuracy

The third paragraph provides a deeper look into advanced edge detection techniques, specifically the sub-pixel edge detection algorithm. It describes how multiple sub-pixel samples are collected along the detected edge and how these samples are used to calculate the precise location of the edge. The paragraph explains the process of generating sub-pixel samples by examining individual lines of pixels across a detected edge and how the algorithm uses gray level information to interpolate the actual edge location. The importance of understanding the edge pixel's gray level in relation to adjacent pixels is emphasized, and a simplified example is provided to illustrate how the algorithm determines the sub-pixel location of an edge. The paragraph concludes with a general overview of the factors that influence the accuracy of edge detection, including the setup of the vision system and the software used.
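The sample-combining step can be sketched as a simple mean of the per-line sub-pixel positions. This is an assumption for illustration: the video does not specify how the edge window combines its samples, and real tools may fit a line or curve instead:

```python
def fit_edge(samples):
    """Combine per-line sub-pixel samples from the edge window into a
    single edge position (a plain mean here)."""
    return sum(samples) / len(samples)

# Hypothetical per-line sub-pixel positions along the detected edge:
samples = [12.27, 12.31, 12.24, 12.30, 12.28]
edge_pos = fit_edge(samples)  # ~12.28
```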

Keywords

💡Feature Detection

Feature Detection is a method in machine vision that involves identifying distinct characteristics or patterns in an image. It is crucial for tasks such as presence/absence checks, size inspection, flaw detection, and color uniformity. In the video, feature detection is used to find features on a product, highlighting its importance in quality control and inspection processes.

💡Contrast Detection

Contrast Detection refers to the process of identifying areas in an image where there is a significant difference in brightness. This is essential for feature detection as it helps in distinguishing objects from their background. The video emphasizes the importance of a minimum 20% contrast for good differentiation, illustrating how contrast affects the accuracy of machine vision systems.

💡Edge Detection

Edge Detection is a technique used in machine vision to identify the boundaries of objects within an image. It is primarily used for measuring applications and is not based on grey level. The video script mentions edge detection in the context of a robot moving a camera to inspect different areas of wheels, demonstrating its application in positioning and orientation.

💡Contrast

Contrast in the context of machine vision is the difference in brightness between an object and its background. It is a critical factor in image clarity and the accuracy of machine vision systems. The video script explains that better contrast leads to better image quality and more accurate results, with examples showing how contrast affects pixel detection.

💡Threshold

Threshold in machine vision is a value set to differentiate between different levels of pixel intensity. It is used to define what constitutes a 'dark' or 'bright' pixel in an image. The video discusses setting thresholds to determine the acceptable range of pixel intensity, which is crucial for identifying defects or features in an object.

💡Region of Interest (ROI)

Region of Interest (ROI) is a specific area within an image that is selected for analysis. In the video, the ROI is highlighted as a blue square in the field of view, where the machine vision software evaluates the amount of dark and bright pixels against a set limit to determine if an inspection passes or fails.

💡Gray Level

Gray Level refers to the intensity of a pixel in an image, measured on a scale from 0 (black) to 255 (white). It is used in machine vision to analyze the contrast and edges in an image. The video script provides examples of how gray level thresholds are used to detect features and determine the success of an inspection.

💡Sub-Pixel

Sub-Pixel refers to the accuracy of measurement beyond the resolution limit of a single pixel. The video script discusses sub-pixel algorithms that can achieve accuracies of plus or minus 0.05 pixel under ideal conditions, with plus or minus 0.5 pixel as a conservative real-world value, which is significant in high-precision applications. It explains how these algorithms examine the gray levels of adjacent pixels to interpolate the location of the actual edge.

💡Backlight

Backlight in the context of machine vision is a lighting technique that illuminates an object from behind, making it easier to detect edges and features against a dark background. The video script mentions a red square object being illuminated by backlight, demonstrating its use in enhancing contrast for feature detection.

💡Machine Vision Software

Machine Vision Software is the set of programs used to process and analyze images captured by machine vision cameras. It plays a crucial role in interpreting the data and making decisions based on the visual information. The video script describes how this software uses algorithms to detect features, set thresholds, and evaluate the results of inspections.

💡Repeatability

Repeatability in machine vision refers to the consistency of results when the same object is measured multiple times under the same conditions. The video script emphasizes the importance of testing and recording results over several hundred triggers to ensure that the application is reliable and meets the required standards.

Highlights

Feature detection in machine vision is based on contrast detection for finding features on a product, which is used in applications such as presence/absence, size inspection, flaw detection, and color uniformity inspection.

Camera setup involves backlighting to enhance contrast, which is crucial for accurate feature detection in machine vision applications.

A minimum contrast of 20% is typically needed for good differentiation between features in machine vision systems.

Contrast is measured by the difference between dark and bright areas, with higher contrast leading to better vision system performance.

Edge detection is used for measuring applications and is not based on grey level, which is essential for dimension measurement, positioning, and orientation.

Robotic integration with machine vision cameras allows for dynamic inspection of objects, such as wheels, by moving the camera around to inspect different areas.

The importance of contrast is emphasized for accurate image capture and inspection of objects in machine vision systems.

Machine vision software uses contrast sensing algorithms to detect features, even when they move slightly, by spreading the original contrast over multiple pixels.

A two by two pixel feature is guaranteed to cover at least one full pixel, ensuring repeatable detection in machine vision applications.

Thresholds in machine vision applications are set based on feedback from the software, determining the acceptable range of grey levels for features.

Edge detection algorithms identify points of sharp image contrast changes and measure the grey level of each pixel along the region of interest.

Sub-pixel algorithms provide higher accuracy in edge detection by examining the grey levels of adjacent pixels and interpolating the location of the actual edge.

The use of telecentric optics and a collimated backlight enhances the accuracy of higher precision measuring in machine vision systems.

Accuracy in machine vision is influenced by the choice of optics and lighting techniques, which can affect the system's performance.

True repeatability in machine vision applications is determined by setting up the application, testing parts, and recording consistent results over multiple trials.

Sub-pixel edge detection involves collecting multiple samples along the detected edge and using these to calculate the precise location of the edge.

Edge detection algorithms compare the value of each pixel along the edge to its neighbor to determine the gradient, indicating the steepness of the edge.

The process of finding edges in machine vision involves examining individual lines of pixels across a detected edge to generate sub-pixel samples.

Transcripts

00:03

These are kind of getting into application-based topics: the different types of basic methods of machine vision. One is feature detection. This method is based on contrast detection and is used for finding features on a product — for example, presence/absence, size inspection, flaw/stain inspection, color uniformity, etc. You can tell on the right how the camera is set up on the left-hand side, looking at objects going down a conveyor, and you can see that there is actually a backlight on this as well; the red square object is supposed to be a backlight. There is also edge detection: this is mainly used for finding edges in measuring applications and for feature detection that is usually not gray-level based, so dimension measurement, positioning, and orientation. You can tell that there's a machine vision camera on the end of a robot, and the robot is moving the camera around to inspect what looks like different areas of wheels.

01:22

So when we detect pixels of a feature, these pixels must have contrast to their background. When you're looking at an object, getting contrast is one of the most important things, because if you have better contrast you'll have a better image and you'll get more accurate results when looking at and inspecting your objects. The greater the difference between the dark and the bright, the better the contrast and the better the vision system is able to differentiate them. So a rule of thumb is that you want to have a minimum of 20% contrast; that is typically what's needed for good differentiation between the features, providing good contrast to your object. Below this is just a little chart: black and white give you the most contrast, and as you go down the line, as the grays look more similar, you're going to have less and less contrast.

02:24

I showed this slide earlier, but this just reiterates a few things we're getting ready to go over. On the bottom left, the image is a sample measurement program we were doing. If you zoom in on the middle image, that is what the image would look like, and the big image to the right is if you zoom in as far as you can go. You can tell on this image you have values of 0, 15, 40, and 65. Like we went over earlier, complete black is 0 and 255 is supposed to be solid white (on my screen it's looking gray, which I think is due to the background of the template, but it's supposed to be white). The image to the right shows how, as you go from completely black out into gray, your numbers go up, and you can take a look at this in the machine vision software.

03:34

So a contrast sensing algorithm is able to detect one single pixel with good contrast. But let's say this pixel moves half a pixel diagonally: you get the same feature now spread over four pixels, each having a little part of the original contrast. You can see on the left-hand image you have completely black — it would be a 0 with all 100s around it — but if that moved diagonally, you would have that black spread over four pixels, and your numbers go from 0, completely black, to something more gray, closer to white: they would go up to 75. So what's the smallest size of feature that can be repeatedly detected? We say that a two-by-two-pixel feature is always guaranteed to be completely covering at least one full pixel.

04:30

This is not something that we really get a lot into, but when we're setting our thresholds on our application, starting from a perfectly good image, we can look at these numbers and make a start on what our threshold should be, depending on what kind of feedback we get from the machine vision software. The most common way for vision software to detect a feature — finding a stain, a defect, whatever we're looking for — is finding pixels within certain gray-level ranges, i.e., your threshold. A threshold range is set between a lower and an upper threshold of the gray level: bright pixels are defined as pixels from a gray-level threshold up to 100%, and dark pixels, like we went over earlier, down to zero. In the image you can tell that on the left, the lower threshold (the dark) is in yellow, and on the right, the bright is in yellow; that's just showing you the difference between dark and bright. In the image below, when we're checking for the pixels that are dark, your vision software is going to be able to find them, and then we can get to know their gray level and position in the image. You can see that there's a blue square inside the field of view, and that's our region of interest in the software, so we can evaluate the amount of dark pixels against the bright pixels in the region of interest and then put a limit on it. If the amount of dark pixels exceeds this limit on the inspection — on whatever we're looking for — this will cause the inspection, or your object, to fail. This is how we go about getting a good pass or a bad fail.

06:22

On to edge detection: this algorithm identifies the points where the image contrast changes sharply. The algorithm measures the gray level of each pixel along the region of interest. You can see on this edge detection example that on the left-hand side it starts out gray; when we get into the black part, the gray-level line drops down to zero; when we get back into the gray, the gray-level line profile comes back up; back into black it goes back down to zero; and then back into gray it goes back up — it looks like it goes back up to a little over 60. This is how we go about setting our thresholds on objects, to set the parts we want to pass or fail, and it is based on this edge detection right here.

07:19

Here is the gray-line profile enhanced for a better view. This indicates the rate of change. The algorithm compares the value of each pixel along the edge to its neighbor: a greater value indicates a positive gradient (dark to bright), a lesser value indicates a negative gradient (bright to dark), and the differences between these two lines indicate the steepness of the gradient. As you could tell in the previous slide, the gradient line profile changed, and it will change depending on whatever object you're looking at. So, for our edge detection algorithm, it usually works with plus or minus 1 pixel; with a sub-pixel algorithm, accuracies of plus or minus one twentieth, or 0.05 pixel, can be obtained, but plus or minus 0.5 can be taken as a more conservative value in a real application with the camera, the optics, and the lighting situation.

08:19

So the use of telecentric optics and a collimated backlight makes higher-accuracy measuring possible; using other optics or lighting techniques will influence the accuracy of the system. When you're doing sub-pixeling, honestly, it should really just be considered as insurance. The only true way to know how repeatable a measurement application will be is to actually set up the application — the optics, lighting, handling, and software — and record the results over several hundred triggers on the same static part; then you can compare the results of what you got and go from there. A lot of times people ask about machine vision applications, and you can run the numbers to decide if the application is doable or not, but like I just said, to establish repeatability and to know if the application is actually feasible, we have to get some parts, test them out, set up the cameras and lighting, go through the software, and actually run the test several times to see if we can get consistent results that meet your application needs.

09:36

So, sub-pixel: here's a quick image of a sub-pixel algorithm. It examines the gray levels of the adjacent pixels around the measurement endpoint and uses that information to interpolate the location of the actual edge to a fraction of a pixel. Here is maybe a little better view of it: the edge window algorithm collects multiple sub-pixel samples along the detected edge on both sides of the initial detection point. The areas along the edge of this window are determined by the sub-pixel settings in the tool, and these settings allow for precise calculation of the sub-pixel edge. The algorithm generates each sub-pixel sample by examining an individual line of pixels across a detected edge.

10:34

So when we're doing edge detection, this is getting into detail about what's going on in the software — where you're getting these numbers from, where you're getting your detected edges, your edge windows, and your regions of interest. This is just another example of edge detection and what goes into it: the sub-pixel location of an edge within a single pixel. You can compare the gray level of that edge pixel to the gray levels of the pixels on either side of it. In the simplified example shown below, the edge pixel's gray level is a mix of the left-hand pixel's gray level and the right-hand pixel's gray level, and the proportions of the mix depend on the location of the edge within the edge pixel. So, pretty much, your gray levels at your edges are going to determine where your lines are, where your edges are, where your regions of interest are, and what your pixel counts are. And here is an algorithm for how you go about looking for your edges based on your gray level. This is really nothing we need to know the exact numbers of, but these are some of the numbers and some of the feedback that we get in the machine vision software — this is where they come from and how we get those numbers.


Related Tags
Machine Vision, Quality Control, Feature Detection, Edge Detection, Contrast Sensitivity, Image Processing, Manufacturing Automation, Inspection Algorithms, Pixel Analysis, Vision Software, Sub-Pixel Accuracy