Understanding Sensor Fusion and Tracking, Part 4: Tracking a Single Object With an IMM Filter

MATLAB
28 Oct 2019 · 16:08

Summary

TL;DR: This video introduces the concept of tracking remote objects using estimation filters, transitioning from positioning and localization to single object tracking. It explains the challenges of tracking with limited information and introduces the Interacting Multiple Model (IMM) filter. The video compares single model filters to IMMs, demonstrating how IMMs improve state estimation for maneuvering objects by blending predictions from multiple models. It also discusses the prediction and correction process, the importance of sensor fusion, and the computational trade-offs in using multiple models for tracking. The next video will address the complexities of tracking multiple objects.

Takeaways

  • 🔍 The video focuses on the shift from self-positioning to tracking a remote object, emphasizing the importance of state estimation in both scenarios.
  • 📈 The video introduces the Interacting Multiple Model (IMM) filter as a solution for tracking uncertain objects, highlighting its effectiveness over single model filters.
  • 🛠 The IMM filter is explained as an upgrade to a single model estimation filter, allowing it to cope with limited information and greater uncertainty when tracking.
  • 🎯 The script discusses the challenges of tracking, such as dealing with less information and the potential for false positive results in data association.
  • 🤖 The importance of fusing data from multiple sensors to get a comprehensive measurement of the tracked object is emphasized.
  • 🔄 The 'predict and correct' process of estimation filters is outlined, explaining how it applies to both self-state estimation and remote object tracking.
  • 🚀 The video uses an airplane example to illustrate the prediction problem in tracking, showing the difficulty in predicting the future state of an uncontrolled object.
  • 🔄 The concept of process noise is introduced, explaining its role in accounting for uncertainty in predictions and its impact on filter performance.
  • 🔄 The difference between cooperative and uncooperative tracking is discussed, with the latter requiring the filter to treat control inputs as unknown disturbances.
  • 🤝 The IMM approach is described as running multiple models simultaneously, each representing a different expected motion of the tracked object.
  • 🔄 The video explains how the models in an IMM filter interact after each measurement, mixing their state estimates and covariances to improve the overall estimate.
  • 🚀 The computational cost and the need for a smart approach to selecting models for the IMM filter are discussed, warning against using too many models which can degrade performance.

Q & A

  • What is the main focus of the video?

    -The video focuses on switching from estimating the state of our own system to estimating the state of a remote object, specifically discussing the concept of single object tracking and the use of an Interacting Multiple Model (IMM) filter for state estimation in uncertain scenarios.

  • What is the difference between positioning and localization and single object tracking?

    -Positioning and localization are about determining the state of one's own system, while single object tracking is about determining the state of a remote object, such as its position or velocity, by fusing sensor data and models.

  • Why is tracking a remote object more challenging than estimating the state of our own system?

    -Tracking a remote object is more challenging because it often requires working with less information and dealing with uncertainties. The lack of direct control over the remote object and the need to rely on external sensors add complexity to the tracking process.

  • What is an Interacting Multiple Model (IMM) filter and how does it help in tracking?

    -An Interacting Multiple Model (IMM) filter is an advanced estimation filter that combines multiple models to predict and estimate the state of a system. It is particularly useful for tracking uncertain objects by blending the results from different models based on their likelihood of representing the true motion.

  • What is the significance of the example tracking maneuvering targets in the video?

    -The Tracking Maneuvering Targets example is used to demonstrate the effectiveness of the IMM filter. It simulates tracking an object that goes through three distinct maneuvers, showing how the IMM filter can adapt and provide better estimation than a single model filter.

  • How does the IMM filter differ from a single model filter in terms of tracking performance?

    -The IMM filter provides better tracking performance by using multiple models to account for different possible motions of the tracked object. It can quickly adapt to changes in the object's motion, resulting in lower tracking error compared to a single model filter that may not match the actual motion.

  • What are the three sources of motion that a filter must account for when predicting an object's future state?

    -The prediction accounts for three sources of motion: 1) the dynamics and kinematics of the system that carry the current state forward, 2) commanded and known inputs into the system that change the state, and 3) unknown or random inputs from the environment, which are absorbed into the process noise.

  • What is the role of process noise in the IMM filter?

    -Process noise in the IMM filter represents the uncertainty in the prediction. A higher process noise indicates less confidence in the prediction, allowing the filter to rely more on the sensor measurements for correction.

  • How does the IMM filter handle transitions between different motions of the tracked object?

    -The IMM filter allows the models to interact after each measurement: every filter is reinitialized with a mixed estimate of state and covariance, weighted by the probability of switching between models. This reduces the transient error and lets the filter adapt quickly to a new motion.

  • What is the computational cost of using an IMM filter with a large number of models?

    -Using an IMM filter with a large number of models increases the computational cost due to the need to run multiple predictions simultaneously. This can be a limitation in real-time tracking applications where processing time is critical.

  • Why might having too many models in an IMM filter negatively impact its performance?

    -Having too many models can lead to increased transitions between models, making it harder to determine when a transition should occur. It can also result in similar motions being represented by multiple models, which can confuse the filter and lead to less optimal estimation.

  • What is the next step after understanding single object tracking with an IMM filter?

    -The next step is to expand the concept to tracking multiple objects simultaneously, which introduces additional complexities and will be covered in a future video.

Outlines

00:00

🔍 Switching to Single Object Tracking

This paragraph introduces the shift in focus from estimating the state of one's own system to tracking a remote object. It highlights the similarities between positioning and localization versus single object tracking, emphasizing the need to estimate state (position or velocity) by fusing sensor and model data. The challenge of tracking with less information is addressed, and the concept of using an Interacting Multiple Model (IMM) filter instead of a standard Kalman filter is introduced.

05:01

📊 Simulation and IMM Overview

The paragraph explains the use of simulation results to illustrate the IMM filter's effectiveness in tracking an object with three distinct maneuvers: constant velocity, constant turn, and constant acceleration. It compares the performance of a single model filter versus an IMM filter, demonstrating the IMM's superior accuracy in tracking a maneuvering object through visual results.
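
To make the setup concrete, here is a minimal MATLAB sketch of that kind of three-segment maneuver profile. The time step, speeds, turn rate, and acceleration below are illustrative assumptions, not the values used in the MathWorks example.

    % Illustrative three-segment maneuver: constant velocity, then a constant
    % turn, then constant acceleration. All values are made up for illustration.
    dt  = 0.1;                        % time step (s)
    pos = [0; 0];  vel = [100; 0];    % initial 2-D position (m) and velocity (m/s)
    N1 = 200; N2 = 200; N3 = 100;     % samples per segment
    traj = zeros(2, N1 + N2 + N3);
    k = 0;

    for i = 1:N1                      % segment 1: constant velocity
        pos = pos + vel*dt;
        k = k + 1;  traj(:, k) = pos;
    end

    omega = deg2rad(3);               % segment 2: constant turn at 3 deg/s
    R = [cos(omega*dt) -sin(omega*dt); sin(omega*dt) cos(omega*dt)];
    for i = 1:N2
        vel = R*vel;                  % rotate the velocity vector each step
        pos = pos + vel*dt;
        k = k + 1;  traj(:, k) = pos;
    end

    acc = 5 * vel/norm(vel);          % segment 3: 5 m/s^2 along the current heading
    for i = 1:N3
        pos = pos + vel*dt + 0.5*acc*dt^2;
        vel = vel + acc*dt;
        k = k + 1;  traj(:, k) = pos;
    end

    plot(traj(1,:), traj(2,:)); axis equal; grid on
    xlabel('x (m)'); ylabel('y (m)'); title('Simulated maneuvering target')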

10:03

🔧 Estimation Filters and Predictive Models

This paragraph delves into the mechanics of estimation filters like the Kalman filter, which predict future states and correct them with measurements. It discusses the necessity of providing the filter with a model of the system to predict its state and the importance of sensor fusion in the measurement step. The challenges of measuring a tracked object using remote sensors versus embedded sensors are also explored.
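
For reference, the predict-then-correct cycle can be written out in a few lines. The sketch below is a minimal 1-D constant velocity Kalman filter with position-only measurements; the matrices, noise levels, and the synthetic measurement are illustrative assumptions rather than the video's actual setup.

    % Minimal linear Kalman filter: predict with a model, correct with a measurement.
    dt = 1;                            % time between radar updates (s), illustrative
    F  = [1 dt; 0 1];                  % constant-velocity state transition (pos, vel)
    H  = [1 0];                        % we only measure position
    Q  = 0.1 * eye(2);                 % process noise: uncertainty in the prediction
    Rm = 25;                           % measurement noise variance (m^2), illustrative

    x = [0; 50];                       % state estimate [position; velocity]
    P = 100 * eye(2);                  % state covariance

    for k = 1:20
        % --- Predict: push the state forward with the model
        x = F * x;
        P = F * P * F' + Q;

        % --- Measure: a fake noisy position stands in for the radar here
        z = 50*k*dt + sqrt(Rm)*randn;

        % --- Correct: blend prediction and measurement by relative confidence
        K = P * H' / (H * P * H' + Rm);    % Kalman gain
        x = x + K * (z - H * x);
        P = (eye(2) - K * H) * P;
    end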

15:06

✈️ Predicting Motion in Tracking

Here, the difficulties in predicting the future state of an uncooperative object are discussed. Using an airplane example, the paragraph explains how motion predictions are based on system dynamics, known inputs, and random environmental factors. It highlights the challenge of accounting for unknown control inputs when tracking uncooperative objects and the limitations of single model filters in such scenarios.
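
Those three contributions map onto the standard state-space prediction, roughly x(k+1) = F*x(k) + B*u(k) + w(k). A toy MATLAB sketch, with made-up matrices and values, just to show where each term enters:

    % One prediction step, split into the three contributions described above.
    % The matrices form a 1-D toy model (position, velocity); values are illustrative.
    dt = 1;
    F  = [1 dt; 0 1];               % 1) dynamics/kinematics carrying the state forward
    B  = [0.5*dt^2; dt];            % 2) maps a commanded acceleration into the state
    Q  = 0.5 * eye(2);              % 3) covariance of unknown/random inputs (process noise)

    x  = [0; 50];                   % current state [position; velocity]
    u  = 2;                         % commanded acceleration, known only if cooperative

    w  = sqrtm(Q) * randn(2,1);     % a sample of the unknown disturbance
    xNext = F*x + B*u + w;          % prediction = dynamics + known input + unknown input

    % For an uncooperative target, u is unavailable: either inflate Q to cover it
    % or pick a motion model that already encodes the expected behavior (CV, CT, CA).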

🤔 Model Selection and Prediction Errors

This paragraph explores the concept of selecting appropriate models for prediction and the errors that arise when the chosen model does not match the object's actual motion. It emphasizes the importance of accounting for process noise and the trade-offs between prediction accuracy and reliance on noisy measurements. A MATLAB example is used to illustrate how adjusting process noise impacts prediction accuracy.
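
One way to see the trade-off is to watch how the Kalman gain responds to the process noise setting. The sketch below uses a hypothetical 1-D constant velocity filter (not the video's model) and iterates the covariance equations for three values of Q, printing the resulting position gain.

    % How process noise shifts trust between prediction and measurement.
    dt = 1;  F = [1 dt; 0 1];  H = [1 0];  Rm = 25;   % illustrative values
    P  = 100 * eye(2);

    for qScale = [0.01 1 100]               % small, medium, large process noise
        Q  = qScale * eye(2);
        Pk = P;
        for k = 1:50                        % iterate toward steady state
            Pk = F * Pk * F' + Q;           % predict covariance
            K  = Pk * H' / (H * Pk * H' + Rm);
            Pk = (eye(2) - K * H) * Pk;     % correct covariance
        end
        fprintf('Q scale %6.2f -> position gain K(1) = %.2f\n', qScale, K(1));
    end

Running this shows the gain climbing toward 1 as the process-noise scale grows, which is the "trust the measurement more, trust the prediction less" behavior described above.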

🔄 Multiple Models for Better Estimation

The paragraph introduces the solution of using multiple models to improve state estimation of a maneuvering object. It describes how running several estimation filters with different models allows for better predictions by comparing measurements to multiple predictions. This approach reduces transient errors when the object changes motion, as the filter quickly adapts to the most likely model.
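
A rough sketch of how a multiple-model scheme might score its models: each model's prediction is compared with the measurement through a Gaussian likelihood, and the model probabilities are reweighted accordingly. All numbers below are invented for illustration.

    % Scoring several motion models against one measurement (illustrative values).
    % Each model i produced a predicted measurement zPred(i) with innovation
    % variance S(i); the measurement likelihood says which model fits best.
    z     = 1042;                         % the new position measurement (m)
    zPred = [1000 1040 1100];             % predictions from CV, CT, CA models
    S     = [30   30   30];               % innovation variances for each model
    mu    = [1 1 1] / 3;                  % prior model probabilities

    like = zeros(1,3);
    for i = 1:3
        nu      = z - zPred(i);                                  % innovation
        like(i) = exp(-0.5 * nu^2 / S(i)) / sqrt(2*pi*S(i));     % Gaussian likelihood
    end

    mu = mu .* like;                      % update model probabilities
    mu = mu / sum(mu);                    % normalize
    disp(mu)                              % the best-matching model gets most of the weight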

🔗 Interacting Multiple Models (IMM)

This section explains the final step of integrating multiple models into an IMM filter, which allows models to interact and share state estimates and covariances. The IMM filter continuously improves each model's prediction accuracy, leading to better overall tracking performance. An example with three models—constant velocity, turn, and acceleration—is used to demonstrate the IMM filter's effectiveness.
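
For the curious, the interaction ("mixing") step can be sketched from the standard IMM equations: after each update, every filter is reinitialized with a probability-weighted blend of all filters' states and covariances, using an assumed model-transition matrix. The two-model example below uses made-up numbers purely for illustration.

    % IMM interaction (mixing) step for two models, with made-up numbers.
    x1 = [1000; 50];  P1 = diag([25 4]);    % model 1 (e.g. constant velocity)
    x2 = [1010; 45];  P2 = diag([40 9]);    % model 2 (e.g. constant turn)
    mu = [0.8 0.2];                         % current model probabilities
    T  = [0.95 0.05; 0.05 0.95];            % assumed model transition probabilities

    cbar = mu * T;                          % normalizers, one per destination model
    mix  = zeros(2,2);
    for j = 1:2
        for i = 1:2
            mix(i,j) = T(i,j) * mu(i) / cbar(j);   % P(came from model i | now model j)
        end
    end

    X = [x1 x2];  Pcell = {P1, P2};
    for j = 1:2
        x0 = X * mix(:, j);                                  % mixed state for model j
        P0 = zeros(2);
        for i = 1:2
            d  = X(:, i) - x0;
            P0 = P0 + mix(i, j) * (Pcell{i} + d * d');       % mixed covariance
        end
        fprintf('Model %d reinitialized at position %.1f m\n', j, x0(1));
    end

Each filter then runs its own predict and correct step from these mixed initial conditions, which is what lets a previously unlikely model take over quickly when the object changes motion.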

⚠️ Balancing Model Quantity and Performance

The paragraph warns against using too many models in an IMM filter, as it can lead to increased transitions and suboptimal estimation. It stresses the need to balance the number of models to cover the object's possible motions while maintaining computational efficiency and optimal performance. The focus is on finding a small set of models that can adequately predict the object's behavior.

🎯 Expanding to Multiple Object Tracking

The final paragraph teases the next video, which will cover the more complex challenge of tracking multiple objects simultaneously. It encourages viewers to subscribe for future videos and check out related content on control theory topics.

Keywords

💡State Estimation

State estimation is the process of determining the current state of a system or object, such as its position or velocity. In the context of the video, it refers to the challenge of estimating the state of a remote object rather than the system itself. The video discusses how state estimation is achieved by fusing sensor data with models, which is central to the theme of tracking and understanding object behavior.

💡Single Object Tracking

Single object tracking is the focus of the video, which involves estimating the state of a single remote object, often one that is uncooperative. It is closely related to positioning and localization, but applied to an external object rather than one's own system. The script illustrates this by comparing it to the process of determining one's own location, applied instead to another object.

💡Interacting Multiple Model (IMM) Filter

The IMM filter is introduced as an advanced method for state estimation when tracking uncertain objects. It is designed to handle situations where there is less information available, such as with uncooperative objects. The video builds intuition around the IMM by showing its effectiveness in tracking a maneuvering target, which is central to the video's educational purpose.

💡Sensor Fusion

Sensor fusion is the process of combining data from multiple sensors to achieve more accurate and robust state estimation. The video script mentions that sensor fusion is used to complement the strengths of different sensors, which is crucial for tracking objects when direct access to the object's sensors is not possible.

💡Maneuvering Targets

Maneuvering targets refer to objects that change their motion state, such as transitioning from constant velocity to constant acceleration. The video uses a simulated example of tracking a maneuvering target to demonstrate the IMM filter's capabilities, highlighting its importance in adapting to changes in object behavior.

💡Predict-Measure-Correct Cycle

The predict-measure-correct cycle is a fundamental concept in estimation filters, where the system's future state is predicted, then measurements are taken to correct the prediction. The video script explains this cycle in the context of both self-system estimation and remote object tracking, emphasizing its universality across different tracking scenarios.

💡Process Noise

Process noise represents the uncertainty in the prediction step of the estimation process. In the video, it is discussed as a way to account for unknown inputs and model inaccuracies. The script illustrates how increasing process noise can impact the reliance on sensor measurements versus the prediction, affecting the overall estimation quality.

💡Cooperative and Uncooperative Tracking

Cooperative tracking involves objects that share information with the tracking system, making it easier to predict their state. In contrast, uncooperative tracking deals with objects that do not share such information, requiring the tracking system to treat control inputs as unknown disturbances. The video script uses these terms to differentiate the challenges in tracking different types of objects.

💡Data Association

Data association is the problem of correctly matching measurements to the tracked object, especially in the presence of false positives. While the video script mentions it as a topic for a future video, it is implied as a challenge in the tracking process when using remote sensors.

💡Model Interaction

Model interaction in the context of the IMM filter refers to the process where multiple models share information after a measurement to improve their estimates. The video script explains that this interaction helps to reduce transient errors when the object's motion changes, allowing for faster convergence on the correct model.

💡Computational Cost

Computational cost is the resource expense of running an algorithm, which is a consideration when deciding the number of models to use in an IMM filter. The video script warns against using an excessive number of models due to the increased computational demands and potential decrease in performance.

Highlights

The video introduces a shift in focus from estimating the state of one's own system to estimating the state of a remote object, transitioning from positioning and localization to single object tracking.

Tracking an object involves determining its state such as position or velocity by integrating sensor data with models, which is more challenging with less information available.

To address tracking difficulties, the video suggests upgrading a single model estimation filter to an Interacting Multiple Model (IMM) filter.

The IMM filter is introduced as an effective approach for tracking uncertain objects, with intuition built up through simulation results.

The video demonstrates the superiority of the IMM filter over a single model filter in tracking a maneuvering object through comparative simulation results.

Estimation filters operate on a predict-then-correct mechanism using system models and sensor measurements to estimate the state of a system or object.

The challenge in tracking is the difficulty in predicting the future state of an uncontrolled object, unlike estimating the state of one's own system.

The video explains the importance of system dynamics, kinematics, and control inputs in the prediction step of the filter process.

The concept of process noise in filters is discussed, which accounts for unknown inputs and model uncertainty, affecting the prediction reliability.

The difference between cooperative and uncooperative tracking is highlighted, with cooperative objects sharing information that aids in prediction.

The video illustrates the limitations of a single model filter when the tracked object's motion does not match the model's assumptions.

Increasing process noise in a filter is shown to improve tracking during motion transitions but at the cost of performance during consistent motion.

The IMM filter is explained as running multiple simultaneous estimation filters, each with a different prediction model and process noise.

The IMM filter's interaction between models after a measurement is described, which improves individual filter performance by blending estimates.

The video presents the IMM filter's results, showing how it adapts to the object's changing motions and maintains low tracking error across different maneuvers.

The importance of selecting an optimal number of models for the IMM filter is emphasized to balance computational cost and tracking performance.

The video concludes by highlighting the increased complexity of tracking multiple objects simultaneously, which will be the focus of a future Tech Talk.

Transcripts

00:00
In this video, we're going to switch our focus from trying to estimate the state of our own system to estimating the state of a remote object. So we're switching from the idea of positioning and localization to single object tracking. Figuring out where another object is isn't all that different from figuring out where you are: we're simply trying to determine state, like position or velocity, by fusing together the results from sensors and models. Now, the part that makes tracking harder is that we usually have to do it with less information. But to deal with the lack of some information, we can upgrade a single model estimation filter, like the standard Kalman filter that we used in the last video, to an interacting multiple model filter. And in this video we're going to build up some intuition around the IMM by showing how it achieves state estimation when tracking an uncertain object. If you haven't heard of an IMM before, I hope you stick around, because I think it's a pretty awesome approach to solving the tracking problem. I'm Brian, and welcome to a MATLAB Tech Talk.

00:55
Throughout this video I'm going to be showing some simulation results so that, as we build up the IMM filter, you can see how the changes impact the quality of the estimation. I generated the results using the example Tracking Maneuvering Targets that comes with the Sensor Fusion and Tracking Toolbox from MathWorks. The basic idea is that this example simulates tracking an object that goes through three distinct maneuvers: it travels at a constant velocity at the beginning, then a constant turn, and it ends with the object undergoing a constant acceleration. Within the script we can set up different single and multiple model filters to track this object. To give you a glimpse of what we're working towards, I'm going to show you the end result. On the left is the result for a typical single model filter, and on the right is the result for an interacting multiple model filter. The bottom graph shows the normalized distance between the object's true position and the estimated position. As you can see, the IMM does a much better job tracking this maneuvering object; the normalized distance through all three maneuvers is much lower than the single model solution. So the question is: why? What makes the IMM so special? Well, to answer that, we need a little background information.

02:10
Estimation filters like a Kalman filter work by predicting the future state of a system and then correcting that state with a measurement. So we predict, and then we measure and correct. In order to predict, we have to give the filter a model of the system, something that it can use to estimate where the system will be at some time in the future. Then, at that future time, a measurement of the system state is made using one or more sensors, and we use that measured state to correct the predicted state based on the relative confidence in both it and the prediction. This blended result is the output of the filter. And this two-step process, predict and correct, is the same whether we're estimating the state of our own system or we're estimating the state of a remote object we're tracking. However, for a tracked object, one of those steps is not as easy as the other.

03:05
Let's start with the differences in how we measure the object. In the last video we used a GPS and an IMU to measure state. These are sensors that are embedded within the system and that we have access to. With tracking, however, we don't often have access to the sensors within the system, and so the measurements need to come from remote sensors like a radar tracking station or a camera vision system. But the exact set of sensors that you use doesn't really change the nature of the measurement step. The idea is that we want to fuse together sensors that complement each other, combining the strengths of each so that you get a good overall measurement. So you can imagine that, as long as you have the right combination of sensors, remote or local, then measuring the state of the system you have control over is pretty much the exact same as measuring the state of a remote object. There is, however, at least one major difference, and that is the idea of a false positive result: you get a measurement, but it's not of the object that you're tracking; it's for some other object in the vicinity. This gets into a data association problem that we're going to talk about more in the next video. For now, assume that we know that we're measuring the object we're tracking and there's no confusion there.

04:13
What about the prediction step? Well, this is where the difference lies. It's much harder to predict the future state of an object that you don't have control over than it is one that you do. Let's demonstrate the prediction problem with an example. Imagine an airplane flying through a radar station that updates once every few seconds, and you want to predict where it'll be at the next detection. Let's say you're acting as the filter here. Do you have a guess? It's probably around here, right? It's been pretty consistent before this, so it makes sense that it'll continue on this trajectory. But what if the last few measurements looked like this instead? You'd probably assume that the airplane was currently turning, and you'd have more confidence in a prediction that continued that trend. So how could we code this kind of intuition into a filter?

05:01
Well, consider this: motion comes from three things. The first is the dynamics and kinematics of the system that carry the current state forward. So the airplane already has some velocity, and it would move forward in a fairly predictable manner based on the physics of the plane traveling through the air. Two, motion also comes from the commanded and known inputs into the system that add or remove energy and change the state. This would be things like adjusting the engines or control surfaces; if the pilot rotates the control wheel to the right, then you would be correct to assume that the state of the plane also moves to the right. And three, motion comes from inputs into the system that are unknown or random, from the environment: things like wind gusts and air density changes. So these are the three things that we need to take into account when predicting a future state.

05:48
So how does an estimation filter do this? Well, we give the filter access to the dynamics in the form of a mathematical model. And if it's a system that you have control over, then the filter can have access to the control inputs as well; that is, you can tell the filter when you're commanding the system, and it can play those commands through the model to better the prediction. Now, the unknown inputs into the system, as well as uncertainty in the model itself, by definition can't be known, and therefore they only degrade the prediction. We take this degradation into account with the filter process noise: the higher the process noise, the more uncertain you are about the prediction. So, if you were the one flying the airplane and you knew that you didn't command any adjustments to it, no control inputs, then you could expect with reasonable certainty that the plane would maintain its current speed and direction, so the prediction at the red X is probably pretty close.

06:47
But what if you weren't flying the plane, but instead tracking it remotely? How do you account for the control inputs in this situation? Well, it depends on whether we're talking about cooperative tracking or uncooperative tracking. A cooperative object shares information with the tracking filter, so the airplane would share the commands it was sending to the engines and the control surfaces, and therefore tracking a cooperative object is pretty similar to just flying it ourselves. Uncooperative objects, however, don't share their control inputs, and so we have to treat them as additional unknown disturbances. So let's revisit our prediction of the airplane, but this time it's uncooperative. Now how can we handle this? Well, when we were the ones doing the prediction earlier, we assumed that whatever motion the airplane was engaged in was probably the most likely motion to continue into the future. Sure, the pilot may change course, but at least over a short period it's likely that they maintain the same motion. Therefore, the model that we give our filter should take into account the motion that we are expecting: if we think the plane is traveling straight, the model should predict the state forward; if we think the airplane is turning, then the model should predict the state rotating off in one direction or another. Choosing the right single model is sort of a pre-prediction decision, we'll say.

08:06
So let's go back to the MATLAB example and see how well a single model filter does with a maneuvering object. The model that this filter is using is a constant velocity model, so it's predicting the future state under the assumption that the object continues forward at a fixed speed. If we look at the normalized distance now, you can see that it does a great job when the object is moving at a constant velocity, maybe about five units of error or so, but the error increases dramatically during the constant turn portion; I don't even know how bad it gets, it's way off the chart, and it's about 30 units of error during the constant acceleration section. So with a single model, our prediction is great if the object actually performs that motion, but it falls apart if the model doesn't match reality.

08:57
However, we may say that we're putting too much trust in our prediction here. I mean, we've increased the number of unknowns in our system and therefore should have less confidence in the prediction; the airplane could turn or slow down or speed up, we just don't know. So we should account for this by increasing the process noise in our filter. But trusting the prediction less has the byproduct of trusting the correction measurement more, and this makes sense: if we have a hard time predicting where the airplane will be, why not just believe the radar measurement when we get one and basically ignore most of that useless prediction? Well, let's go back to the MATLAB simulation and see how this idea plays out. In this run I've left the constant velocity model but I upped the process noise, and you can clearly see there is a difference. When the object is turning, the error is now a much better 30 units or so, and the acceleration portion improved as well, but at a cost: the constant velocity section, which is the portion that our model is set up for in the first place, got worse. And this section got worse because we're relying more on the noisy measurements.

09:58
So if we can't trust the prediction and we're mostly relying on the sensor measurements anyway, then what good is this estimation filter? The whole point is to use a prediction to account for some of the measurement noise, lowering the overall uncertainty. Well, this is the problem that we're left with: how do we estimate the state of a maneuvering object better than what the sensors alone are capable of measuring? And the answer is to run more than one model. Basically, we can think of this as running several simultaneous estimation filters, each with a different prediction model and process noise. The idea is to have one model for each type of motion that you expect the tracked object to engage in: things like moving at a constant velocity, constant acceleration, or constant turning, and so on; whatever is necessary to cover the full range of possible motion. Each model predicts where the object will be if it follows that particular motion. Then, when we get a measurement, it's compared to every single prediction, and from this, claims can be made as to which model most likely represents the true motion, and we can place more trust in that model for the next prediction cycle. This behaves just like how a human would do prediction: if the airplane seems to be flying straight, assume it'll keep flying straight, and if you see that it's starting to turn, assume that turn will continue for some time. With this method there will be some transient error whenever the object transitions to a new motion, but the filter will quickly realize that a new model has a better prediction and will start to increase its likelihood. This is the general idea behind multiple model algorithms, but there is still one more step to get to interacting multiple models.

11:47
The problem we have with the current way we've set up the filters is that each one is operating on its own, isolated from the others. This means that a model that doesn't represent the true motion is still going to be maintaining its own bad estimate of system state and state covariance. Then, if the object changes motion and there's a transition to this model, with its bad state estimate and covariance, the filter is going to take some time to converge again. So, in this way, every time there's a transition to a new motion, the transient period will be longer than necessary while the filter is trying to catch up. To fix this, we allow the models to interact. After a measurement, the overall filter gets an updated state and state covariance based on the blending of the most likely models. At that point, every filter is reinitialized with a mixed estimate of state and covariance based on their probability of being switched to or mixing with each other. This is constantly improving each individual filter, reducing its own residual error even when that filter doesn't represent the true motion of the object. In this way, an IMM filter can switch to an individual model without having to wait for it to converge first.

12:55
So now we can finally make sense of the IMM result that I showed you at the beginning of this video. This IMM is set up with three models, constant velocity, turn, and acceleration, to match the three expected motions of the object. On the left are plots showing the normalized distance, or error, for the different models that we talked about, so that you can see the results of all three side by side. The top right shows the maneuver profile of the object, and there's a new graph in the bottom right that shows how likely each model in the IMM is to represent the true motion. The colored overlay is just there to give you a visual reference for which motion the object is currently engaged in. So let's kick this off.

13:41
Okay, check out the IMM results. You can see that the overall normalized distance is very low for all three maneuvers; it's much lower than either of the single model results. Also check out how the likelihood of each model skyrockets when the object is doing the motion that it's predicting, and the transient time between motions is pretty low; it switches pretty quickly from one to the other. So as long as the object isn't constantly and quickly changing motions, then this transient error won't contribute much to the overall quality of the estimate. So this is how we make up for the lack of control input information when tracking uncooperative objects: we build a model for each expected motion and then set up an IMM to blend them together based on the likelihood that they represent the true motion.

14:25
Now, before I end this video, I do want to address one more thing. You might be tempted to just run an IMM with a million models, you know, something that could cover every possible motion scenario. Well, the problem with this is that for every model you run, you have to pay a price, namely the computational cost of running a pile of predictions, and if it's a high-speed, real-time tracking situation, you may only have a few milliseconds to run the full filter. In addition, there is also the pain of having to set up all of these filters and get the process noise right. But let's say computational speed isn't a problem for you and you really only care about performance. Well, even then, having too many models can hurt performance: for one, it increases the number of transitions between models, and it's harder to determine when a transition should take place if there are a lot of models that represent very similar motions, and both of these contribute to a less optimal estimation. So, unfortunately, you still have to approach this filter in a smart way and try to find the smallest set of models that can adequately predict the possible motions for the object that you're tracking. Practically speaking, this tends to be less than 10 models, and usually around just three or four.

15:40
Something else to keep in mind is that everything I've just explained is what's necessary to track a single object. Our problem gets even harder when we expand this to tracking multiple objects at once, and that is what we'll cover in the next video. So if you don't want to miss future Tech Talk videos, don't forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. I'll see you next time.


Related Tags
IMM Filter, MATLAB, Object Tracking, State Estimation, Sensor Fusion, System Dynamics, Predictive Modeling, Remote Sensing, Data Association, Control Inputs, Technical Tutorial