Understanding Sensor Fusion and Tracking, Part 4: Tracking a Single Object With an IMM Filter
Summary
TL;DR: This video introduces the concept of tracking remote objects using estimation filters, transitioning from positioning and localization to single object tracking. It explains the challenges of tracking with limited information and introduces the Interacting Multiple Model (IMM) filter. The video compares single model filters to IMMs, demonstrating how IMMs improve state estimation for maneuvering objects by blending predictions from multiple models. It also discusses the prediction and correction process, the importance of sensor fusion, and the computational trade-offs in using multiple models for tracking. The next video will address the complexities of tracking multiple objects.
Takeaways
- 🔍 The video focuses on the shift from self-positioning to tracking a remote object, emphasizing the importance of state estimation in both scenarios.
- 📈 The video introduces the Interacting Multiple Model (IMM) filter as a solution for tracking uncertain objects, highlighting its effectiveness over single model filters.
- 🛠 The IMM filter is explained as an upgrade to single model estimation filters, allowing for better handling of less information and uncertainty in tracking.
- 🎯 The script discusses the challenges of tracking, such as dealing with less information and the potential for false positive results in data association.
- 🤖 The importance of fusing data from multiple sensors to get a comprehensive measurement of the tracked object is emphasized.
- 🔄 The 'predict and correct' process of estimation filters is outlined, explaining how it applies to both self-state estimation and remote object tracking.
- 🚀 The video uses an airplane example to illustrate the prediction problem in tracking, showing the difficulty in predicting the future state of an uncontrolled object.
- 🔄 The concept of process noise is introduced, explaining its role in accounting for uncertainty in predictions and its impact on filter performance.
- 🔄 The difference between cooperative and uncooperative tracking is discussed, with the latter requiring the filter to treat control inputs as unknown disturbances.
- 🤝 The IMM approach is described as running multiple models simultaneously, each representing a different expected motion of the tracked object.
- 🔄 The video explains how the IMM filter interacts by mixing state estimates and covariances of models after measurements, improving the overall estimation quality.
- 🚀 The computational cost and the need for a smart approach to selecting models for the IMM filter are discussed, warning against using too many models which can degrade performance.
Q & A
What is the main focus of the video?
-The video focuses on switching from estimating the state of our own system to estimating the state of a remote object, specifically discussing the concept of single object tracking and the use of an Interacting Multiple Model (IMM) filter for state estimation in uncertain scenarios.
What is the difference between positioning and localization and single object tracking?
-Positioning and localization are about determining the state of one's own system, while single object tracking is about determining the state of a remote object, such as its position or velocity, by fusing sensor data and models.
Why is tracking a remote object more challenging than estimating the state of our own system?
-Tracking a remote object is more challenging because it often requires working with less information and dealing with uncertainties. The lack of direct control over the remote object and the need to rely on external sensors add complexity to the tracking process.
What is an Interacting Multiple Model (IMM) filter and how does it help in tracking?
-An Interacting Multiple Model (IMM) filter is an advanced estimation filter that combines multiple models to predict and estimate the state of a system. It is particularly useful for tracking uncertain objects by blending the results from different models based on their likelihood of representing the true motion.
What is the significance of the example tracking maneuvering targets in the video?
-The example tracking maneuvering targets is used to demonstrate the effectiveness of the IMM filter. It simulates tracking an object that goes through three distinct maneuvers, showing how the IMM filter can adapt and provide better estimation compared to a single model filter.
How does the IMM filter differ from a single model filter in terms of tracking performance?
-The IMM filter provides better tracking performance by using multiple models to account for different possible motions of the tracked object. It can quickly adapt to changes in the object's motion, resulting in lower tracking error compared to a single model filter that may not match the actual motion.
What are the three types of motion that the IMM filter considers when predicting the future state of an object?
-The IMM filter considers three types of motion: 1) Dynamics and kinematics of the system that carry the current state forward, 2) Commanded and known inputs into the system that change the state, and 3) Unknown or random inputs from the environment that affect the state.
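The first of these three, the dynamics and kinematics carrying the state forward, is exactly what a motion model encodes. As an illustrative sketch (this is hypothetical Python, not code from the video; the time step and velocities are made-up values), a constant-velocity model prediction looks like:

```python
import numpy as np

# State vector: [x, vx, y, vy]. Under a constant-velocity assumption, the
# kinematics alone carry the state forward; no control inputs are modeled.
dt = 1.0  # time between sensor updates, in seconds (assumed value)
F = np.array([[1.0, dt,  0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, dt ],
              [0.0, 0.0, 0.0, 1.0]])

x = np.array([0.0, 10.0, 0.0, 5.0])  # at the origin, moving 10 m/s in x, 5 m/s in y
x_pred = F @ x                       # predicted state one update later
# x_pred is [10., 10., 5., 5.]: the position advanced, the velocity did not.
```

A constant-turn or constant-acceleration model would use a different `F` (and a larger state vector), which is why the IMM runs several of these in parallel.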
What is the role of process noise in the IMM filter?
-Process noise in the IMM filter represents the uncertainty in the prediction. A higher process noise indicates less confidence in the prediction, allowing the filter to rely more on the sensor measurements for correction.
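One way to see this trade-off numerically: in a scalar Kalman filter, the gain is K = P_pred / (P_pred + R), and process noise Q inflates P_pred, so a larger Q pushes K toward 1, meaning more trust in the measurement. A minimal sketch with made-up covariance values (not taken from the video):

```python
def kalman_gain(P, Q, R):
    """One scalar predict step, then the gain K = P_pred / (P_pred + R)."""
    P_pred = P + Q  # prediction inflates the covariance by the process noise
    return P_pred / (P_pred + R)

P, R = 1.0, 4.0  # prior covariance and measurement noise (assumed values)
low_Q_gain  = kalman_gain(P, Q=0.1,  R=R)  # confident prediction -> small gain
high_Q_gain = kalman_gain(P, Q=10.0, R=R)  # uncertain prediction -> large gain
# high_Q_gain > low_Q_gain: more process noise means leaning on the sensor.
```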
How does the IMM filter handle transitions between different motions of the tracked object?
-The IMM filter allows models to interact after a measurement, reinitializing each filter with a mixed estimate of state and covariance based on their probability of being switched to or mixing with each other. This helps in reducing the transient error and quickly adapting to new motions.
What is the computational cost of using an IMM filter with a large number of models?
-Using an IMM filter with a large number of models increases the computational cost due to the need to run multiple predictions simultaneously. This can be a limitation in real-time tracking applications where processing time is critical.
Why might having too many models in an IMM filter negatively impact its performance?
-Having too many models can lead to increased transitions between models, making it harder to determine when a transition should occur. It can also result in similar motions being represented by multiple models, which can confuse the filter and lead to less optimal estimation.
What is the next step after understanding single object tracking with an IMM filter?
-The next step is to expand the concept to tracking multiple objects simultaneously, which introduces additional complexities and will be covered in a future video.
Outlines
🔍 Switching to Single Object Tracking
This paragraph introduces the shift in focus from estimating the state of one's own system to tracking a remote object. It highlights the similarities between positioning and localization versus single object tracking, emphasizing the need to estimate state (position or velocity) by fusing sensor and model data. The challenge of tracking with less information is addressed, and the concept of using an Interacting Multiple Model (IMM) filter instead of a standard Kalman filter is introduced.
📊 Simulation and IMM Overview
The paragraph explains the use of simulation results to illustrate the IMM filter's effectiveness in tracking an object with three distinct maneuvers: constant velocity, constant turn, and constant acceleration. It compares the performance of a single model filter versus an IMM filter, demonstrating the IMM's superior accuracy in tracking a maneuvering object through visual results.
🔧 Estimation Filters and Predictive Models
This paragraph delves into the mechanics of estimation filters like the Kalman filter, which predict future states and correct them with measurements. It discusses the necessity of providing the filter with a model of the system to predict its state and the importance of sensor fusion in the measurement step. The challenges of measuring a tracked object using remote sensors versus embedded sensors are also explored.
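The predict-and-correct cycle described in this paragraph can be sketched in a few lines for a scalar state. This is illustrative Python under assumed noise values, not the toolbox code used in the video:

```python
def predict(x, P, F, Q):
    """Carry the state forward with the model; process noise grows the covariance."""
    return F * x, F * P * F + Q

def correct(x_pred, P_pred, z, H, R):
    """Blend the prediction with a measurement z, weighted by the Kalman gain."""
    K = P_pred * H / (H * P_pred * H + R)  # relative confidence in the measurement
    x = x_pred + K * (z - H * x_pred)
    P = (1.0 - K * H) * P_pred
    return x, P

x, P = 0.0, 1.0                    # initial state estimate and covariance
F, Q, H, R = 1.0, 0.5, 1.0, 2.0    # assumed scalar model and noise values
x, P = predict(x, P, F, Q)         # predict...
x, P = correct(x, P, z=1.2, H=H, R=R)  # ...then measure and correct
# The blended estimate lands between the prediction (0) and the measurement (1.2),
# and the covariance shrinks below its predicted value.
```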
✈️ Predicting Motion in Tracking
Here, the difficulties in predicting the future state of an uncooperative object are discussed. Using an airplane example, the paragraph explains how motion predictions are based on system dynamics, known inputs, and random environmental factors. It highlights the challenge of accounting for unknown control inputs when tracking uncooperative objects and the limitations of single model filters in such scenarios.
🤔 Model Selection and Prediction Errors
This paragraph explores the concept of selecting appropriate models for prediction and the errors that arise when the chosen model does not match the object's actual motion. It emphasizes the importance of accounting for process noise and the trade-offs between prediction accuracy and reliance on noisy measurements. A MATLAB example is used to illustrate how adjusting process noise impacts prediction accuracy.
🔄 Multiple Models for Better Estimation
The paragraph introduces the solution of using multiple models to improve state estimation of a maneuvering object. It describes how running several estimation filters with different models allows for better predictions by comparing measurements to multiple predictions. This approach reduces transient errors when the object changes motion, as the filter quickly adapts to the most likely model.
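Comparing one measurement against every model's prediction reduces to a likelihood computation. A minimal sketch, assuming Gaussian innovations and made-up prediction values (hypothetical, not from the video):

```python
import numpy as np

def gaussian_likelihood(innovation, S):
    """Likelihood of a measurement residual under innovation covariance S."""
    return np.exp(-0.5 * innovation**2 / S) / np.sqrt(2.0 * np.pi * S)

z = 10.2                        # the new measurement (assumed value)
pred_cv, pred_ct = 10.0, 12.5   # constant-velocity vs. constant-turn predictions
S = 1.0                         # shared innovation covariance (assumed)

L = np.array([gaussian_likelihood(z - pred_cv, S),
              gaussian_likelihood(z - pred_ct, S)])
prob = L / L.sum()              # normalized model probabilities
# The constant-velocity model, whose prediction was closer to the measurement,
# receives most of the weight for the next prediction cycle.
```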
🔗 Interacting Multiple Models (IMM)
This section explains the final step of integrating multiple models into an IMM filter, which allows models to interact and share state estimates and covariances. The IMM filter continuously improves each model's prediction accuracy, leading to better overall tracking performance. An example with three models—constant velocity, turn, and acceleration—is used to demonstrate the IMM filter's effectiveness.
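The mixing step that distinguishes an IMM from independent parallel filters can be sketched as follows. The transition matrix, model probabilities, and states below are hypothetical numbers for illustration, not values from the video:

```python
import numpy as np

mu = np.array([0.7, 0.3])          # current model probabilities
Pi = np.array([[0.95, 0.05],       # Markov model-transition probabilities
               [0.05, 0.95]])
x = np.array([10.0, 12.0])         # each model's (scalar) state estimate
P = np.array([1.0, 2.0])           # each model's covariance

c = Pi.T @ mu                      # predicted probability of each model
w = (Pi * mu[:, None]) / c         # mixing weights: w[i, j] = P(was i | now j)
x_mix = w.T @ x                    # re-initialized (mixed) state for each filter
# Mixed covariance includes the spread between the model estimates:
P_mix = np.array([np.sum(w[:, j] * (P + (x - x_mix[j])**2)) for j in range(2)])
# Each filter now restarts from a blend of all estimates, so a model that
# becomes likely after a maneuver change doesn't start from a stale state.
```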
⚠️ Balancing Model Quantity and Performance
The paragraph warns against using too many models in an IMM filter, as it can lead to increased transitions and suboptimal estimation. It stresses the need to balance the number of models to cover the object's possible motions while maintaining computational efficiency and optimal performance. The focus is on finding a small set of models that can adequately predict the object's behavior.
🎯 Expanding to Multiple Object Tracking
The final paragraph teases the next video, which will cover the more complex challenge of tracking multiple objects simultaneously. It encourages viewers to subscribe for future videos and check out related content on control theory topics.
Keywords
💡State Estimation
💡Single Object Tracking
💡Interacting Multiple Model (IMM) Filter
💡Sensor Fusion
💡Maneuvering Targets
💡Predict-Measure-Correct Cycle
💡Process Noise
💡Cooperative and Uncooperative Tracking
💡Data Association
💡Model Interaction
💡Computational Cost
Highlights
The video introduces a shift in focus from estimating the state of one's own system to estimating the state of a remote object, transitioning from positioning and localization to single object tracking.
Tracking an object involves determining its state such as position or velocity by integrating sensor data with models, which is more challenging with less information available.
To address tracking difficulties, the video suggests upgrading a single model estimation filter to an Interacting Multiple Model (IMM) filter.
The IMM filter is introduced as an effective approach for tracking uncertain objects by building intuition through simulation results.
The video demonstrates the superiority of the IMM filter over a single model filter in tracking a maneuvering object through comparative simulation results.
Estimation filters operate on a predict-then-correct mechanism using system models and sensor measurements to estimate the state of a system or object.
The challenge in tracking is the difficulty in predicting the future state of an uncontrolled object, unlike estimating the state of one's own system.
The video explains the importance of system dynamics, kinematics, and control inputs in the prediction step of the filter process.
The concept of process noise in filters is discussed, which accounts for unknown inputs and model uncertainty, affecting the prediction reliability.
The difference between cooperative and uncooperative tracking is highlighted, with cooperative objects sharing information that aids in prediction.
The video illustrates the limitations of a single model filter when the tracked object's motion does not match the model's assumptions.
Increasing process noise in a filter is shown to improve tracking during motion transitions but at the cost of performance during consistent motion.
The IMM filter is explained as running multiple simultaneous estimation filters, each with a different prediction model and process noise.
The IMM filter's interaction between models after a measurement is described, which improves individual filter performance by blending estimates.
The video presents the IMM filter's results, showing how it adapts to the object's changing motions and maintains low tracking error across different maneuvers.
The importance of selecting an optimal number of models for the IMM filter is emphasized to balance computational cost and tracking performance.
The video concludes by highlighting the increased complexity of tracking multiple objects simultaneously, which will be the focus of a future Tech Talk.
Transcripts
In this video, we're going to switch our focus from trying to estimate the state of our own system to estimating the state of a remote object. So we're switching from the idea of positioning and localization to single object tracking. Figuring out where another object is isn't all that different from figuring out where you are; we're simply trying to determine state, like position or velocity, by fusing together the results from sensors and models. Now, the part that makes tracking harder is that we usually have to do it with less information. But to deal with the lack of some information, we can upgrade a single model estimation filter, like the standard Kalman filter that we used in the last video, to an interacting multiple model filter. In this video, we're going to build up some intuition around the IMM by showing how it achieves state estimation when tracking an uncertain object. And if you haven't heard of an IMM before, I hope you stick around, because I think it's a pretty awesome approach to solving the tracking problem. I'm Brian, and welcome to a MATLAB Tech Talk.

Throughout this video, I'm going to be showing some simulation results so that, as we build up the IMM filter, you can see how the changes impact the quality of the estimation. I generated the results using the Tracking Maneuvering Targets example that comes with the Sensor Fusion and Tracking Toolbox from MathWorks. The basic idea is that this example simulates tracking an object that goes through three distinct maneuvers: it travels at a constant velocity at the beginning, then a constant turn, and it ends with the object undergoing a constant acceleration. Within the script, we can set up different single and multiple model filters to track this object. And to give you a glimpse of what we're working towards, I'm going to show you the end result. On the left is the result for a typical single model filter, and on the right is the result for an interacting multiple model filter. The bottom graph shows the normalized distance between the object's true position and the estimated position. As you can see, the IMM does a much better job tracking this maneuvering object; the normalized distance through all three maneuvers is much lower than the single model solution. So the question is: why? What makes the IMM so special? Well, to answer that, we need a little background information.

Estimation filters like a Kalman filter work by predicting the future state of a system and then correcting that state with a measurement. So we predict, and then we measure and correct. In order to predict, we have to give the filter a model of the system, something that it can use to estimate where the system will be at some time in the future. Then, at that future time, a measurement of the system state is made using one or more sensors, and we use that measured state to correct the predicted state based on the relative confidence in both it and the prediction. This blended result is the output of the filter. This two-step process, predict and correct, is the same whether we're estimating the state of our own system or we're estimating the state of a remote object we're tracking. However, for a tracked object, one of those steps is not as easy as the other.

Let's start with the differences in how we measure the object. In the last video, we used a GPS and an IMU to measure state. These are sensors that are embedded within the system and that we have access to. With tracking, however, we don't often have access to the sensors within the system, and so the measurements need to come from remote sensors like a radar tracking station or a camera vision system. But the exact set of sensors that you use doesn't really change the nature of the measurement step. The idea is that we want to fuse together sensors that complement each other, combining the strengths of each so that you get a good overall measurement. So you can imagine that, as long as you have the right combination of sensors, remote or local, measuring the state of the system you have control over is pretty much the exact same as measuring the state of a remote object. There is, however, at least one major difference, and that is the idea of a false positive result: you get a measurement, but it's not of the object that you're tracking; it's for some other object in the vicinity. This gets into a data association problem that we're going to talk about more in the next video. For now, assume that we know that we're measuring the object we're tracking, and there's no confusion there.
What about the prediction step? Well, this is where the difference lies. It's much harder to predict the future state of an object that you don't have control over than it is one that you do. Let's demonstrate the prediction problem with an example. Imagine an airplane flying past a radar station that updates once every few seconds, and you want to predict where it'll be at the next detection. Let's say you're acting as the filter here. Do you have a guess? It's probably around here, right? It's been pretty consistent before this, so it makes sense that it'll continue on this trajectory. But what if the last few measurements looked like this instead? You'd probably assume that the airplane was currently turning, and you'd have more confidence in a prediction that continued that trend. So how could we code this kind of intuition into a filter?

Well, consider that motion comes from three things. The first is the dynamics and kinematics of the system, which carry the current state forward. The airplane already has some velocity, and it would move forward in a fairly predictable manner based on the physics of the plane traveling through the air. Two, motion also comes from the commanded and known inputs into the system that add or remove energy and change the state. This would be things like adjusting the engines or control surfaces; if the pilot rotates the control wheel to the right, then you would be correct to assume that the state of the plane also moves to the right. And three, motion comes from inputs into the system that are unknown or random, from the environment: things like wind gusts and air density changes. So these are the three things that we need to take into account when predicting a future state.

So how does an estimation filter do this? Well, we give the filter access to the dynamics in the form of a mathematical model, and if it's a system that you have control over, then the filter can have access to the control inputs as well. That is, you can tell the filter when you're commanding the system, and it can play those commands through the model to better the prediction. Now, the unknown inputs into the system, as well as uncertainty in the model itself, by definition can't be known, and therefore they only degrade the prediction. We take this degradation into account with the filter process noise: the higher the process noise, the more uncertain you are about the prediction.

So if you were the one flying the airplane, and you knew that you didn't command any adjustments to it, no control inputs, then you could expect with reasonable certainty that the plane would maintain its current speed and direction, so the prediction at the red X is probably pretty close. But what if you weren't flying the plane but instead tracking it remotely? How do you account for the control inputs in this situation? Well, it depends on whether we're talking about cooperative tracking or uncooperative tracking. A cooperative object shares information with the tracking filter, so the airplane would share the commands it was sending to the engines and the control surfaces, and therefore tracking a cooperative object is pretty similar to just flying it ourselves. Uncooperative objects, however, don't share their control inputs, and so we have to treat them as additional unknown disturbances.

So let's revisit our prediction of the airplane, but this time it's uncooperative. Now, how can we handle this? Well, when we were the ones doing the prediction earlier, we assumed that whatever motion the airplane was engaged in was probably the most likely motion to continue into the future. Sure, the pilot may change course, but at least over a short period it's likely that they maintain the same motion. Therefore, the model that we give our filter should take into account the motion that we are expecting. If we think the plane is traveling straight, the model should predict the state forward; if we think the airplane is turning, then the model should predict the state rotating off in one direction or another. Choosing the right single model is sort of a pre-prediction decision, we'll say.

So let's go back to the MATLAB example and see how well a single model filter does with a maneuvering object. The model that this filter is using is a constant velocity model, so it's predicting the future state under the assumption that the object continues forward at a fixed speed. If we look at the normalized distance now, you can see that it does a great job when the object is moving at a constant velocity, maybe about five units of error or so. But the error increases dramatically during the constant turn portion (I don't even know how bad it gets; it's way off the chart), and it's about 30 units of error during the constant acceleration section. So with a single model, our prediction is great if the object actually performs that motion, but it falls apart if the model doesn't match reality.

However, we may say that we're putting too much trust in our prediction here. I mean, we've increased the number of unknowns in our system and therefore should have less confidence in the prediction. The airplane could turn or slow down or speed up; we just don't know. So we should account for this by increasing the process noise in our filter. But trusting the prediction less has the byproduct of trusting the correction measurement more, and this makes sense: if we have a hard time predicting where the airplane will be, why not just believe the radar measurement when we get one and basically ignore most of that useless prediction? Well, let's go back to the MATLAB simulation and see how this idea plays out. In this run, I've left the constant velocity model but upped the process noise, and you can clearly see there is a difference. When the object is turning, the error is now a much better 30 units or so, and the acceleration portion improved as well, but at a cost: the constant velocity section, which is the portion that our model is set up for in the first place, got worse. And this section got worse because we're relying more on the noisy measurements. So if we can't trust the prediction and we're mostly relying on the sensor measurements anyway, then what good is this estimation filter? The whole point is to use a prediction to account for some of the measurement noise, lowering the overall uncertainty. Well, this is the problem that we're left with: how do we estimate the state of a maneuvering object better than what the sensors alone are capable of measuring? And the answer is to run more than one model.
Basically, we can think of this as running several simultaneous estimation filters, each with a different prediction model and process noise. The idea is to have one model for each type of motion that you expect the tracked object to engage in, things like move at a constant velocity, or constant acceleration, or constant turning, and so on; whatever is necessary to cover the full range of possible motion. Each model predicts where the object will be if it follows that particular motion. Then, when we get a measurement, it's compared to every single prediction, and from this, claims can be made as to which model most likely represents the true motion, and we can place more trust in that model for the next prediction cycle. This behaves just like how a human would do prediction: if the airplane seems to be flying straight, assume it'll keep flying straight, and if you see that it's starting to turn, assume that turn will continue for some time. With this method, there will be some transient error whenever the object transitions to a new motion, but the filter will quickly realize that a new model has a better prediction and will start to increase its likelihood. This is the general idea behind multiple model algorithms, but there is still one more step to get to interacting multiple models.

The problem we have with the current way we've set up the filters is that each one is operating on its own, isolated from the others. This means that a model that doesn't represent the true motion is still going to be maintaining its own bad estimate of system state and state covariance. Then, if the object changes motion and there's a transition to this model with its bad state estimate and covariance, the filter is going to take some time to converge again. So in this way, every time there's a transition to a new motion, the transient period will be longer than necessary while the filter is trying to catch up. To fix this, we allow the models to interact. After a measurement, the overall filter gets an updated state and state covariance based on the blending of the most likely models. At that point, every filter is reinitialized with a mixed estimate of state and covariance based on their probability of being switched to or mixing with each other. This constantly improves each individual filter, reducing its own residual error even when that filter doesn't represent the true motion of the object. In this way, an IMM filter can switch to an individual model without having to wait for it to converge first.

So now we can finally make sense of the IMM result that I showed you at the beginning of this video. This IMM is set up with three models, constant velocity, turn, and acceleration, to match the three expected motions of the object. On the left are plots showing the normalized distance, or error, for the different models that we talked about, that way you can see the results of all three side by side. The top right shows the maneuver profile of the object, and there's a new graph in the bottom right that shows how likely each model in the IMM is to represent the true motion. The colored overlay is just there to give you a visual reference for which motion the object is currently engaged in. So let's kick this off.

Okay, check out the IMM results. You can see that the overall normalized distance is very low for all three maneuvers; it's much lower than either of the single model results. Also check out how the likelihood of each model skyrockets when the object is doing the motion that it's predicting, and the transient time between motions is pretty low; it switches pretty quickly from one to the other. So as long as the object isn't constantly and quickly changing motions, this transient error won't contribute much to the overall quality of the estimate. So this is how we make up for the lack of control input information when tracking uncooperative objects: we build a model for each expected motion and then set up an IMM to blend them together based on the likelihood that they represent the true motion.

Now, before I end this video, I do want to address one more thing. You might be tempted to just run an IMM with a million models, something that could cover every possible motion scenario. Well, the problem with this is that for every model you run, you have to pay a price, namely the computational cost of running a pile of predictions, and if it's a high-speed, real-time tracking situation, you may only have a few milliseconds to run the full filter. In addition, there is also the pain of having to set up all of these filters and get the process noise right. But let's say computational speed isn't a problem for you; you really only care about performance. Well, even then, having too many models can hurt performance. For one, it increases the number of transitions between models, and it's harder to determine when a transition should take place if there are a lot of models that represent very similar motions. Both of these contribute to a less optimal estimation. So, unfortunately, you still have to approach this filter in a smart way and try to find the smallest set of models that can adequately predict the possible motions for the object that you're tracking. Practically speaking, this tends to be fewer than 10 models, and usually around just three or four.

Something else to keep in mind is that everything I've just explained is what's necessary to track a single object. Our problem gets even harder when we expand this to tracking multiple objects at once, and that is what we'll cover in the next video. So if you don't want to miss future Tech Talk videos, don't forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. I'll see you next time.