Understanding Sensor Fusion and Tracking, Part 3: Fusing a GPS and IMU to Estimate Pose
Summary
TL;DR: This MATLAB Tech Talk explores sensor fusion for precise positioning and localization. Brian demonstrates how fusing IMU and GPS sensors improves pose estimation, especially in high-speed scenarios. The video visually compares the performance of a GPS-only system to one augmented with IMU sensors, highlighting the importance of sensor fusion in maintaining accurate position estimates. It delves into the workings of an extended Kalman filter, emphasizing the need for proper initialization and the impact of asynchronous sensor data on the estimation process.
Takeaways
- 📐 The video discusses the use of sensor fusion for positioning and localization, particularly combining IMU and GPS sensors to estimate an object's orientation, position, and velocity.
- 🌐 GPS sensors provide absolute measurements for position and velocity, which can be integrated with IMU sensors to correct for drift and improve accuracy in dynamic environments.
- 🔍 The video's goal is to give an intuitive understanding of how each sensor contributes to the final state estimate, rather than a detailed technical description of the fusion algorithm.
- 🤖 The video uses MATLAB's Sensor Fusion and Tracking Toolbox to demonstrate pose estimation from asynchronous sensors, showing how different sensor configurations affect estimation accuracy.
- 🔄 The script in the toolbox allows for adjusting sensor sample rates and even removing sensors to observe the impact on the estimation process, providing a hands-on learning experience.
- 🔢 In scenarios with slow movement and less stringent accuracy requirements, GPS alone may suffice for position estimation, but for high-speed motion or when accuracy is critical, additional sensors like IMU are necessary.
- ⏱ The video illustrates the limitations of relying solely on GPS for fast-moving objects, where the velocity changes rapidly and the one-second interval between GPS updates can lead to significant errors.
- 🔧 The importance of sensor bias estimation is highlighted, as biases can drift over time and affect the accuracy of the sensor fusion algorithm if not accounted for.
- 🛠 The fusion algorithm used in the video is a continuous-discrete extended Kalman filter (EKF) that can handle asynchronous sensor measurements, which is beneficial for systems with different sensor sample rates.
- 🔄 The EKF operates on a predict-correct cycle, using the model of the system's dynamics to predict state changes and then correcting these predictions with actual sensor measurements.
- 🚀 The video encourages viewers to experiment with the example code, adjusting parameters and observing the effects on the state estimation to deepen their understanding of sensor fusion.
Q & A
What is the main topic of the video?
-The video discusses the use of sensor fusion for positioning and localization, particularly focusing on how GPS and IMU sensors can be combined to estimate an object's orientation, position, and velocity.
Why is it necessary to correct the drift from the gyro using absolute measurements from the accelerometer and magnetometer?
-The gyro can accumulate errors over time, causing a drift in the estimated orientation. Absolute measurements from the accelerometer and magnetometer help correct this drift by providing a reference to the actual orientation with respect to gravity and the Earth's magnetic field.
What is the role of GPS in the sensor fusion algorithm discussed in the video?
-GPS is used to measure the position and velocity of an object. It provides absolute measurements that can be integrated into the fusion algorithm to improve the estimation of the object's state.
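As a rough sketch of what a single GPS fix contributes, absolute position and velocity corrupted by measurement noise, here is a minimal model in MATLAB (the noise levels and variable names are illustrative assumptions, not the toolbox's internal GPS model):

```matlab
% Minimal sketch of a GPS measurement: absolute position and velocity
% plus Gaussian noise. All values here are illustrative assumptions.
truePos  = [10; 5; -2];     % true NED position, m
trueVel  = [2.5; 0; 0];     % true NED velocity, m/s
sigmaPos = 1.6;             % assumed 1-sigma position noise, m
sigmaVel = 0.1;             % assumed 1-sigma velocity noise, m/s

% One GPS fix: noisy but absolute, so it cannot drift over time
zPos = truePos + sigmaPos * randn(3,1);
zVel = trueVel + sigmaVel * randn(3,1);
```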
How does the video demonstrate the impact of different sensor sample rates on the estimation process?
-The video uses a MATLAB example that allows changing the sample rates of the sensors or removing them from the solution to visually demonstrate how these changes affect the estimation of orientation and position.
What is the significance of the IMU sensors in improving the position estimate when the system is moving fast?
-IMU sensors, which include accelerometers and gyros, provide high-frequency updates that can capture rapid changes in motion. When the system is moving fast, these sensors help in estimating the velocity and rotation more accurately, which is crucial for maintaining an accurate position estimate.
Why is sensor bias estimation important in the fusion algorithm?
-Sensor bias estimation is important because biases can drift over time, affecting the accuracy of the measurements. If not corrected, these biases can cause the estimate to deviate from the true state, especially when relying on the affected sensor for an extended period.
What is the purpose of the predict and correct steps in a Kalman filter?
-The predict step uses the current state estimate and a model of the system dynamics to forecast the state at the next time step. The correct step then updates this prediction with new measurements, taking into account the uncertainty in both the prediction and the measurement to produce a more accurate state estimate.
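To make the predict-correct cycle concrete, here is one step of a minimal linear Kalman filter in MATLAB for a 1-D constant-velocity model; the noise values are illustrative assumptions, and the video's filter is a much larger extended Kalman filter:

```matlab
% One predict-correct cycle of a linear Kalman filter for a 1-D
% constant-velocity model. Noise values are assumed for illustration.
dt = 0.01;                      % prediction step, s
F  = [1 dt; 0 1];               % state transition for [position; velocity]
H  = [1 0];                     % we measure position only
Q  = diag([1e-4 1e-2]);         % assumed process noise covariance
R  = 4;                         % assumed measurement noise variance, m^2

x = [0; 2.5];  P = eye(2);      % initial state estimate and covariance

% Predict: propagate the state with the model; uncertainty grows
x = F * x;
P = F * P * F' + Q;

% Correct: blend the prediction with a new measurement z
z = 0.1;                        % a noisy position fix, m
K = P * H' / (H * P * H' + R);  % Kalman gain: relative confidence
x = x + K * (z - H * x);        % pull the state toward the measurement
P = (eye(2) - K * H) * P;       % updated (smaller) uncertainty
```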
How does the video illustrate the limitations of using GPS alone for fast-moving systems?
-The video shows an example where, with only GPS data and a slow update rate, the position estimate quickly diverges from the actual path when the system is moving fast and changing directions rapidly.
What is the benefit of using an Extended Kalman Filter (EKF) for sensor fusion?
-An EKF is beneficial for sensor fusion because it can handle nonlinear systems by linearizing the models around the current estimate. This allows for the integration of measurements from different sensors, even when their update rates are asynchronous.
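For reference, a compact statement of the EKF's predict and correct steps in standard notation, where f and h are the nonlinear process and measurement models, F_k and H_k their Jacobians evaluated at the current estimate, and Q, R the process and measurement noise covariances:

```latex
% Predict: propagate state and covariance with the linearized model
\hat{x}_k^- = f(\hat{x}_{k-1}), \qquad
P_k^- = F_k P_{k-1} F_k^\top + Q, \qquad
F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1}}

% Correct: blend prediction and measurement by relative confidence
K_k = P_k^- H_k^\top \bigl(H_k P_k^- H_k^\top + R\bigr)^{-1}, \qquad
\hat{x}_k = \hat{x}_k^- + K_k \bigl(z_k - h(\hat{x}_k^-)\bigr), \qquad
P_k = (I - K_k H_k)\, P_k^-
```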
How does the video suggest initializing the filter for accurate state estimation?
-The video suggests initializing the filter close to the true state to ensure accurate linearization. In simulations this can be done because the ground truth is known, but in real systems it may involve using direct sensor measurements or letting the filter converge while the system is stationary.
What advice does the presenter give for better understanding the concepts discussed in the video?
-The presenter encourages viewers to experiment with the MATLAB example, changing sensor settings and observing the effects on estimation. They also suggest diving into the code to understand how the EKF is implemented and how different functions update the state vector.
Outlines
📍 Introduction to Sensor Fusion for Positioning and Localization
This paragraph introduces the topic of using sensor fusion for accurate positioning and localization. It explains the previous video's focus on combining IMU sensors to estimate an object's orientation and correct gyro drift with accelerometer and magnetometer data. The current video aims to extend this concept by incorporating GPS data to estimate position and velocity, enhancing the fusion algorithm. The goal is to provide an intuitive understanding of how each sensor contributes to the final solution, rather than a technical explanation of the fusion algorithm itself. The video will use MATLAB's Sensor Fusion and Tracking Toolbox to demonstrate the impact of different sensors on position and orientation estimation through a visual example.
🚀 GPS and IMU Integration for Enhanced Positioning
The second paragraph delves into the practical application of GPS in conjunction with IMU sensors for precise positioning. It contrasts the use of GPS alone for slow-moving systems with the need for additional sensors like IMU for fast-moving systems requiring high update rates and accuracy. The paragraph uses a MATLAB example to illustrate the difference in position and orientation estimation with and without IMU sensors. It highlights the limitations of GPS in fast-moving scenarios and the benefits of sensor fusion, especially in dynamic situations like drone navigation. The example also demonstrates how the fusion algorithm handles asynchronous sensor data and the importance of sensor bias estimation for maintaining accuracy over time.
🔍 Understanding the Continuous-Discrete EKF for Sensor Fusion
The final paragraph provides an in-depth look at the underlying algorithm of the sensor fusion process, which is a continuous-discrete extended Kalman filter (EKF). It explains the filter's ability to handle asynchronous sensor measurements and its large state vector that includes not only the primary states like orientation and velocity but also secondary states like sensor biases. The paragraph emphasizes the importance of initializing the filter correctly and the challenges of doing so without knowing the true state. It outlines the two-step process of the EKF, which involves predicting the state based on the model and then correcting it with new measurements. The explanation also covers the filter's approach to handling changing velocities and the significance of quick updates from IMU sensors in maintaining accurate state estimation.
Keywords
💡Sensor Fusion
💡IMU (Inertial Measurement Unit)
💡GPS (Global Positioning System)
💡Position Estimation
💡Localization
💡Drift Correction
💡Gyroscope
💡Accelerometer
💡Magnetometer
💡Extended Kalman Filter (EKF)
💡State Estimation
💡Asynchronous Measurements
💡Bias Estimation
💡Predict-Correct Cycle
Highlights
The video discusses sensor fusion for positioning and localization, focusing on the integration of IMU and GPS sensors.
IMU sensors are used to estimate an object's orientation, and GPS sensors measure position and velocity for enhanced estimation.
The fusion algorithm structure is explained, emphasizing the visual contribution of each sensor to the final solution.
GPS is sufficient for some applications with low accuracy requirements and slow motion systems.
High-accuracy and high-update-rate applications, such as drones, may require additional sensors like IMU to complement GPS.
MATLAB's Sensor Fusion and Tracking Toolbox is demonstrated with its example on pose estimation from asynchronous sensors.
The example shows how the fusion algorithm uses GPS, accelerometer, gyro, and magnetometer data to estimate orientation and position.
The script allows for adjusting sensor sample rates and removing sensors to observe the impact on estimation.
Removing all sensors except GPS results in significant orientation error but reasonable position accuracy.
Adding IMU sensors to the GPS-only setup shows minor improvements in position estimation for slow movements.
For faster movements and rapid changes in velocity, the IMU's high update rate is crucial for accurate estimation.
The video explains the continuous-discrete extended Kalman filter used in the fusion algorithm, which handles asynchronous sensor measurements.
The filter's state vector includes 28 elements, estimating not only the main states but also sensor biases.
Estimating sensor bias is crucial due to their drift over time, which can affect the accuracy of the estimation.
The video discusses the importance of filter initialization and its impact on convergence and accuracy.
The predict-correct process of the EKF is explained, highlighting how it uses predictions and measurements to refine state estimation.
The video concludes by encouraging viewers to experiment with the example and explore the code for a deeper understanding.
Transcripts
Let's continue our discussion on using sensor fusion for positioning and localization. In the last video we combined the sensors in an IMU to estimate an object's orientation and showed how the absolute measurements of the accelerometer and magnetometer were used to correct the drift from the gyro. Now in this video we're going to do sort of a similar thing, but we're going to add a GPS sensor. GPS can measure position and velocity, and so in this way we can extend the fusion algorithm to estimate them as well. And just like the last video, the goal is not to fully describe the fusion algorithm; it's again too much for one video. Instead, I mostly want to go over the structure of the algorithm and show you visually how each sensor contributes to the final solution, so you have a more intuitive understanding of the problem. So I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.
Now it might seem obvious to use a GPS if you want to know the position of something relative to the surface of the earth: just strap a GPS sensor onto your system and you've got latitude, longitude, and altitude. Simple enough, and this is perfectly fine in some situations, like when the system is accelerating and changing directions relatively slowly and you only need position accuracy to a few meters. This might be the case for a system that's determining directions in your car; as long as the GPS locates you to within a few meters of your actual spot, the map application can figure out which road you're on and therefore where to go next. On the other hand, imagine if the system requires position information to a few feet or less, and it needs position updates hundreds of times per second to keep up with the fast motion of your system, like, for example, trying to follow a fast trajectory through obstacles with a drone. In this case GPS might have to be paired with additional sensors, like the sensors in an IMU, to get the accuracy that you need.
To give you a more visual sense of what I'm talking about here, let's run an example from the MATLAB Sensor Fusion and Tracking Toolbox called Pose Estimation From Asynchronous Sensors. This example uses a GPS, accelerometer, gyro, and magnetometer to estimate pose, which is both orientation and position, as well as a few other states. Now the script generates a true path and orientation profile that the system follows; the true orientation is the red cube and the true position is the red diamond. The pose algorithm is using the available sensors to estimate orientation and position, and it shows the results of that as the blue cube and the blue diamond, respectively. So that's what we want to watch: how closely do the blue objects follow the red objects? And the graph on the right plots the error, if you just want to see a more quantitative result. The cool thing about this is that while the script is running, the interface allows us to change the sample rates of each of the sensors, or remove them from the solution altogether, so that we can see how it impacts the estimation. So let's start by removing all of the sensors except for the GPS, and we'll read the GPS five times a second.
The default trajectory in the script is to follow a circle with a radius of about 15 meters, and you can see that it's moving around this circle pretty slowly. Now the orientation estimate is way off, as you'd expect, since we don't have any orientation sensors active, but the position estimate isn't too bad. After the algorithm settles and removes that initial bias, we see position errors of around plus and minus 2 meters in each axis. So now let me add back in the IMU sensors and we can see if our result is improved. Well, it's taking several seconds for the orientation to converge, but you can see that it's slowly correcting itself back to the true orientation. Also, the position estimate is, well, about the same: plus or minus two meters, maybe a little less than that. This is a relatively slow movement, and it's such a large trajectory, that the IMU sensors that are modeled here are only contributing a minor improvement over the GPS alone. The GPS velocity measurement is enough to predict how the object moves over the 0.2 seconds between measurements, since the object isn't accelerating too quickly. This setup is kind of analogous to using GPS to get directions from a map on your phone while you're driving; adding those additional sensors from the IMU isn't really going to help too much. So now let's go in the opposite direction
and create a trajectory that is much faster in the trajectory generation
script I'll just speed up the velocity of the object going around the circle
from two point five to twelve point five meters per second this is going to
create more angular acceleration in a shorter amount of time
and to really emphasize the point I'm trying to make here I'm going to slow
the GPS sample time down to once per second so let's give this a shot
okay so what's happening here is that when we get a GPS measurement we get
both position and velocity so once a second we get a new position update that
puts the estimate within a few meters of the truth but we also get the current
velocity and so for one second the algorithm propagates that velocity
forward to predict what the object is doing between measurements and this
works really well if the velocity is near constant for that one second
but poorly as you can see when the velocity is rapidly changing this is the
type of situation that is similar to a drone that has to make rapid turns and
avoid obstacles and it's here where the addition of the IMU will help because we
won't have to rely on propagating a static velocity for one second we can
estimate velocity and rotation using the IMU sensors now to see the improvement
I've placed two different runs next to each other the left is the GPS only that
we just saw and the right is with the addition of the IMU you can see at least
visually how the GPS with the IMU is different than the GPS alone it's able
to follow the position of the object more closely and creates a circular
result rather than a saw blade so adding an IMU seems to help estimate position
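To put numbers on that saw blade, here is a quick back-of-the-envelope check in MATLAB; the 15 m radius and the two speeds come from the example, and the rest is just circle geometry:

```matlab
% How far does a 1 s constant-velocity prediction drift while the object
% actually follows a circle? Radius and speeds taken from the example.
r = 15;                     % circle radius, m
t = 1;                      % time between GPS fixes, s
for v = [2.5 12.5]          % slow and fast trajectory speeds, m/s
    theta = v * t / r;                      % angle actually swept, rad
    pTrue = r * [cos(theta); sin(theta)];   % true position on the circle
    pPred = [r; v * t];                     % straight-line propagation
    fprintf('v = %4.1f m/s -> error = %.2f m\n', v, norm(pTrue - pPred));
end
% Prints roughly 0.21 m at 2.5 m/s but about 5.1 m at 12.5 m/s, which is
% why the GPS-only estimate traces a saw blade on the fast trajectory.
```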
So the question at this point might be, why is this the case? I mean, how does the algorithm combine these sensors to get this result in the first place? Well, again, intuitively we can imagine that the IMU is allowing us to dead reckon the state of the system between GPS updates, you know, similar to how we used the gyro to dead reckon between the mag and accel updates in the last video. And this is true, except it's not as cut and dried as that; it's a lot more intertwined than you might think, and to understand why this is the case we need to explore the code a little bit. The fusion algorithm is a continuous-discrete extended Kalman filter, and this particular one is set up to accept the sensor measurements asynchronously, which means that each of the sensors can be read at their own rate. This is beneficial if you want to run, say, your gyro at 100 Hz, your mag and accelerometer at 50 Hz, and your GPS at 1 Hz. You're going to see below how this is handled, but the thing I want to point out here is that this is a massive Kalman filter: the state vector has 28 elements in it that are being estimated simultaneously. There are the obvious states like orientation, angular velocity, linear position, velocity, and acceleration, but the filter is also estimating the sensor biases and the magnetic field vector.
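Here is a sketch of that asynchronous predict/fuse pattern, written around insfilterAsync from the Sensor Fusion and Tracking Toolbox; the sample readings, covariances, and reference location are placeholder assumptions, so treat it as the shape of the loop rather than the shipped example's code:

```matlab
% Asynchronous multi-rate fusion loop, sketched with insfilterAsync.
% All readings, covariances, and the reference location are placeholders.
filt = insfilterAsync('ReferenceLocation', [42.28 -71.34 50]); % lat/lon/alt

fs = 100;  dt = 1/fs;                      % simulation runs at 100 Hz

% Placeholder sensor samples, stand-ins for real hardware reads
gyroReading  = [0 0 0.1];    gyroCov  = 1e-5;  % rad/s
accelReading = [0 0 -9.81];  accelCov = 1e-3;  % m/s^2
magReading   = [19 0 45];    magCov   = 0.5;   % uT
gpsLLA = [42.28 -71.34 50];  posCov   = 2.56;  % deg, deg, m
gpsVel = [2.5 0 0];          velCov   = 0.01;  % m/s, NED

for k = 1:10*fs
    predict(filt, dt);                     % always step the prediction

    fusegyro(filt, gyroReading, gyroCov);  % gyro at the full 100 Hz
    if mod(k, 2) == 0                      % accel and mag at 50 Hz
        fuseaccel(filt, accelReading, accelCov);
        fusemag(filt, magReading, magCov);
    end
    if mod(k, fs) == 0                     % GPS once per second
        fusegps(filt, gpsLLA, posCov, gpsVel, velCov);
    end
end
[pos, orient, vel] = pose(filt);           % current best estimate
```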
Estimating sensor bias is extremely important because bias drifts over time. This means that even if you calculate sensor bias before you operate your system and you hard-code that calibration value into your software, it's not going to be accurate for long, and any bias that we don't remove will be integrated and cause the estimate to walk away from the truth when we rely on that sensor. Now, if you don't have a good initial estimate of sensor bias when you start your system, then you can't just turn on your filter and trust it right away. You have to give it some time to estimate not just the main states that you care about, like position and velocity, but also some of the secondary states, like bias. Usually you let the Kalman filter converge on the correct solution while the system is stationary and not controlled, or maybe while you're controlling it using a different estimation algorithm, or maybe you just let it run and you don't really care that the system is performing poorly while the filter converges. But this is one of the things that you need to consider during initialization of your system.
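A tiny illustration of why an uncorrected bias cannot just be calibrated once and forgotten (the bias value is an arbitrary assumption):

```matlab
% A constant gyro bias integrates directly into heading error.
bias = deg2rad(0.5);        % assumed 0.5 deg/s uncompensated gyro bias
t = 0:0.01:60;              % one minute of dead reckoning
headingErr = bias * t;      % integrated once -> error grows linearly
fprintf('Heading error after 60 s: %.1f deg\n', rad2deg(headingErr(end)));
% Prints 30.0 deg. An accel bias is worse: it integrates twice, so the
% resulting position error grows with the square of time.
```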
Now another thing we need to talk about here is how to initialize the filter. This is an EKF, and it can estimate state for nonlinear systems. It does this by linearizing the models around its current estimate and then using that linear model to predict the state into the future. So if the filter is not initialized close enough to the true state, the linearization process can be so far off that it causes the filter to never actually converge. Now this isn't really a problem for this example, because the ground truth is known in the simulation, so the filter is simply initialized to a state close to truth. But in a real system you need to think about how to initialize the filter when you don't know that truth. Often this can be done by just using the measurements from the sensors directly, like using the last GPS reading to initialize position and velocity, using the gyro to initialize your angular rate, and so on.
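As a sketch of that measurement-based initialization: the index layout below assumes insfilterAsync's documented State ordering (orientation quaternion, angular velocity, position, velocity, and so on), so verify it against the State property documentation; all readings are placeholders.

```matlab
% Initializing the filter from direct measurements when truth is unknown.
lastYPR    = [0.1 0 0];          % yaw/pitch/roll from mag + accel, rad
lastGyro   = [0 0 0.05];         % last gyro read, rad/s
lastPosNED = [1 2 -5];           % from the last GPS fix, m (NED)
lastVelNED = [2.5 0 0];          % from the last GPS fix, m/s

filt = insfilterAsync;
s = filt.State;                  % 28-element state vector
s(1:4)   = compact(quaternion(lastYPR, 'euler', 'ZYX', 'frame'));
s(5:7)   = lastGyro;
s(8:10)  = lastPosNED;
s(11:13) = lastVelNED;
filt.State = s;

% Inflate the initial covariance to say "not confident yet", then let the
% filter converge (e.g. while stationary) before trusting the bias states.
filt.StateCovariance = 10 * filt.StateCovariance;
```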
All right, with the filter initialized, we can start running it, and every Kalman filter consists of the same two-step process: predict and correct. To understand why, we can think about it like this. If we wanted to estimate the state of something, you know, where it is or how fast it's going, there are two general ways to do this: we could just measure it directly, or we could use our knowledge of dynamics and kinematics and predict where it is. For example, imagine a car driving down the road and we want to know its location. We could use GPS to measure its position directly; that's one way. But if we knew where it started and its average speed, we could also predict where it'll be after a certain amount of time with some accuracy, and using those predictions alongside a measurement can produce a better estimate. So the question might be, why wouldn't we just trust our measurement completely here? It's probably better than our prediction. Well, as sort of an extreme example, what if you checked your watch and it said it was 3:00 p.m., and then you waited a few seconds and checked it again and it said 4:00 p.m.? Well, you wouldn't automatically assume an hour has passed just because your measurement said so. This is because you have a basic understanding of time; that is, you have this internal model that you can use to predict how much time has passed, and that would cause you to be skeptical of your watch if you thought seconds had passed and it said an hour. On the other hand, if you thought about an hour had passed but the watch said 65 minutes, you'd probably be more inclined to believe the watch over your own estimate, since you'd be less confident in your prediction. And this is precisely what a Kalman filter is doing: it's predicting how the states will change over time based on a model that it has, and along with the states it's also keeping track of how trustworthy the prediction is, based on the process noise that you've given it; the longer the filter has to predict the state, the less confidence it has in the result. Then, whenever a new measurement comes in, which has its own measurement noise associated with it, the filter compares the prediction with the measurement and then corrects its estimate based on the relative confidence in both.
And this is what the script is doing. The simulation runs at 100 Hz, and at every time step it predicts forward the estimate of the states, and then, if there's a new measurement from any of the sensors, it runs the update portion of the Kalman filter, adjusting the states based on the relative confidence in the prediction and the specific measurement. So it's in this way that the filter can run with asynchronous measurements. Now, with the GPS-only solution that we started with, the prediction step could only assume that the velocity isn't changing over the one second, and since there were no updates to correct that assumption, the estimate would drastically run away from truth. However, with the IMU, the filter is updating a hundred times a second and looking at the accelerometer and seeing that the velocity is in fact changing. So in this way the filter can react to a changing state faster with the quick updates of the IMU than it can with the slower updates of the GPS, and once the filter converges and it has a good estimate of sensor biases, then that will give us an overall better prediction, and therefore a better overall state estimation. And this is the power of sensor fusion.
Now, I know this explanation might not have been perfectly clear, and probably a bit fast, but I think it's hard to really grasp the topic just by watching a video. So I would encourage you to play around with this example: you know, turn sensors on and off, change the rates, noise characteristics, and the trajectory to see how the estimation is affected yourself. You can even dive further into the code and see how the EKF is implemented. I found it was helpful to place breakpoints and pause the execution of the script so that I could see how the different functions update the state vector. Okay, this is where I'm going to leave this. In the next video we'll start to look at estimating the state of other objects when we talk about tracking algorithms. So if you don't want to miss that and future Tech Talk videos, don't forget to subscribe to this channel, and if you want, you can check out my channel, Control System Lectures, where I cover more control theory topics as well. Thanks for watching.
Browse More Related Videos
Understanding Sensor Fusion and Tracking, Part 4: Tracking a Single Object With an IMM Filter
Understanding Sensor Fusion and Tracking, Part 1: What Is Sensor Fusion?
Understanding Sensor Fusion and Tracking, Part 2: Fusing a Mag, Accel, & Gyro Estimate
Drone Control and the Complementary Filter
HVACR Temperature Control Basics
Understanding Sensor Fusion and Tracking, Part 5: How to Track Multiple Objects at Once