What Is Autonomous Navigation? | Autonomous Navigation, Part 1
Summary
TL;DR: This video series aims to provide a foundational understanding of autonomous navigation, exploring the complexities and algorithms involved. It discusses the ability of vehicles to navigate without human intervention, using sensors to determine location and plan paths to goals. The series differentiates between heuristic and optimal approaches to autonomy, illustrating their applications in various environments. Challenges such as navigating uncertain and dynamic environments are highlighted, emphasizing the importance of sensor fusion, tracking, and planning in achieving successful autonomous navigation.
Takeaways
- The video series aims to provide a foundational understanding of autonomous navigation, including its terminology, algorithms, and challenges.
- Autonomous navigation is the ability of a vehicle to determine its location and plan a path to a destination without human intervention.
- The term 'vehicle' in this context encompasses a wide range of mobile machines, from cars and UAVs to spacecraft and underwater robots.
- There are varying levels of autonomy, from simple remote operation with basic onboard algorithms to fully autonomous vehicles with no human interaction.
- The series will primarily focus on fully autonomous vehicles, as they represent the most complex end of the autonomy spectrum.
- Two main approaches to autonomy are discussed: heuristic, which uses practical rules and doesn't guarantee optimal results, and optimal, which requires detailed environmental knowledge for planning.
- Heuristic approaches can work with incomplete information and are sufficient for immediate goals, such as a maze-solving vehicle keeping a wall on one side.
- Optimal approaches involve building or updating an environmental model to determine the best path, crucial for complex tasks like autonomous driving.
- The combination of heuristic and optimal approaches can be used to achieve goals effectively, as seen in autonomous cars deciding when to pass slower vehicles.
- Autonomous navigation is challenging due to the need to navigate through uncertain and changing environments, requiring constant model updating.
- The complexity of navigation varies by environment, with space being more predictable than air, which in turn is more predictable than urban driving conditions.
Q & A
What is the main goal of the video series?
- The main goal of the video series is to provide a basic understanding of the autonomous navigation problem, including the terms, algorithms needed, and the challenges it presents in certain environments.
What is autonomous navigation?
- Autonomous navigation is the ability of a vehicle to determine its location within an environment and to figure out a path to reach a goal without human intervention.
What are the different types of vehicles that can perform autonomous navigation?
- Autonomous navigation can be performed by various mobile machines, including cars, UAVs, spacecraft, submersibles, and other mobile robots.
What are the different levels of autonomy for vehicles?
- Autonomy levels range from vehicles operated remotely by humans with basic onboard algorithms to prevent accidents, to fully autonomous vehicles with no human interaction at all.
What is the difference between a heuristic approach and an optimal approach in autonomous navigation?
- A heuristic approach uses practical rules or behaviors that do not guarantee optimal results but are sufficient to achieve immediate goals. An optimal approach requires more environmental knowledge and involves planning and actions based on the maximization or minimization of an objective function.
How does a heuristic-based autonomous vehicle navigate through a maze?
- A heuristic-based autonomous vehicle navigating a maze might follow simple rules like 'drive forward and keep the wall on the left,' allowing it to reach the goal without needing a map or complete environmental information.
What are some practical examples of heuristic-based autonomy?
- Practical examples of heuristic-based autonomy include robotic vacuums that clean floors by moving randomly and autonomous vehicles that follow simple behaviors like 'always attempt to pass slower cars when safe to do so.'
How do fully autonomous vehicles that solve optimization problems navigate?
- Fully autonomous vehicles solving optimization problems build or update a model of the environment and then determine an optimal path to the goal, taking into account dynamic and complex environments.
Why is autonomous navigation challenging?
- Autonomous navigation is challenging because vehicles must navigate through environments that are not perfectly known, requiring them to build and constantly update a model of the environment, which is subject to change.
What are some examples of autonomous systems that use both heuristic and optimal approaches?
- Examples include autonomous cars that use heuristic behaviors for lane changing while planning an optimal path, and ground vehicles in Amazon warehouses that maneuver around while avoiding collisions.
Why is it important for autonomous vehicles to be able to navigate in uncertain and changing environments?
- It is important for autonomous vehicles to navigate in uncertain and changing environments because it demonstrates their ability to adapt to real-world conditions, such as varying traffic, weather, and unexpected obstacles.
What are the key steps in the autonomous navigation process as outlined in the script?
- The key steps in the autonomous navigation process are sensing the environment, understanding the vehicle's location and environment model, perceiving and tracking dynamic objects, replanning based on new information, and controlling the vehicle to follow the planned path.
Outlines
Introduction to Autonomous Navigation
The first paragraph introduces the topic of autonomous navigation, emphasizing its importance in enabling vehicles to determine their location and plan a path to a goal without human intervention. The speaker, Brian, sets the stage for the video series by explaining the basics of autonomous navigation, including the different types of vehicles it applies to, such as cars, UAVs, spacecraft, and submersibles. The concept of autonomy levels is introduced, ranging from remote operation with simple onboard algorithms to fully autonomous vehicles. The focus of the series is on fully autonomous vehicles, which can be further divided into heuristic and optimal approaches. Heuristic approaches rely on practical rules that do not guarantee optimal results but are sufficient for immediate goals, whereas optimal approaches require a comprehensive understanding of the environment to plan the best path. Examples of heuristic approaches are given, such as a maze-solving vehicle that follows simple wall-following rules.
Combining Heuristics and Optimization in Autonomous Systems
The second paragraph delves into the practical application of heuristic and optimal approaches in autonomous systems. It explains how these two methods can complement each other, using the example of an autonomous car that might use a heuristic behavior to pass slower cars when it is safe to do so, and then calculate an optimal path for the maneuver. The paragraph also highlights various real-world applications of autonomous navigation, such as ground vehicles in Amazon warehouses, disaster area explorers, space missions like OSIRIS-REx, and robotic arms. The challenges of autonomous navigation are discussed, particularly the need to build and update environmental models in the face of uncertainty and changing conditions. The paragraph contrasts the navigation difficulties of spacecraft, aircraft, and cars, emphasizing the increasing complexity and uncertainty in each case. It concludes by stressing that autonomous vehicles are impressive not for their ability to move but for their ability to navigate efficiently and safely in uncertain environments.
The Process of Autonomous Navigation
The third paragraph outlines the process of autonomous navigation, which includes sensing the environment, understanding and perceiving dynamic objects, replanning based on new information, and controlling the vehicle to follow the planned path. It recaps the importance of sensor fusion and tracking as part of the 'sense and perceive' steps in autonomous navigation. The paragraph sets the agenda for the video series, promising to explore topics such as environment modeling, vehicle localization, obstacle tracking, path planning, and system validation. The speaker also encourages viewers to subscribe for upcoming videos and to check out related content on control system lectures. The paragraph ends with a teaser for the next video, which will discuss how vehicles determine their location within an environment using particle filters and Monte Carlo localization.
Keywords
- Autonomous Navigation
- Vehicle Autonomy
- Sensors
- Heuristic Approach
- Optimal Approach
- Environment Modeling
- Path Planning
- Uncertainty
- Sensor Fusion
- Control Systems
Highlights
The video series aims to provide a basic understanding of the autonomous navigation problem.
Autonomous navigation is the ability to determine location and plan a path without human intervention.
Autonomous vehicles can range from simple remote operation to fully autonomous systems.
Fully autonomous vehicles require more complex algorithms and will be the main focus of the series.
Autonomous navigation can be approached heuristically or optimally depending on the environment's knowledge.
Heuristic approaches use practical rules and don't guarantee optimal results but are sufficient for immediate goals.
Optimal approaches require detailed environmental knowledge and plan actions to maximize or minimize an objective function.
Examples of heuristic approaches include maze-solving vehicles and robotic vacuums.
Optimal approaches are exemplified by autonomous driving, which requires dynamic environment modeling.
Autonomous systems often use a combination of heuristic and optimal approaches for complex tasks.
The difficulty of autonomous navigation increases with the uncertainty and changeability of the environment.
Spacecraft navigation is typically simpler than aircraft or car navigation due to less environmental uncertainty.
Autonomous vehicles must navigate through uncertain and changing environments to reach their goals.
Autonomous systems interact with the physical world by collecting data through sensors.
Sensor data is interpreted to understand the environment and the vehicle's state, which is crucial for path planning.
The series will explore algorithms for environment modeling, localization, tracking, and path planning.
Upcoming videos will cover how vehicles determine their location using particle filters and Monte Carlo localization.
Transcripts
The goal of this video series is to give you
a basic understanding of the autonomous navigation problem--
what some of the terms are, some of the needed algorithms,
and what makes this problem difficult
in certain environments.
So that's what we're going to cover over a few videos.
But in this first one, I want to set the stage a bit
and just introduce the idea of autonomous navigation.
I think it's pretty interesting, so I
hope you stick around for it.
I'm Brian, and welcome to a MATLAB Tech Talk.
Navigation is the ability to determine your location
within an environment and to be able to figure out
a path that will take you from your current location
to some goal.
Navigating in the wilderness might require, say,
GPS to determine where you are and a map to plan the best
path to get around mountains and lakes to reach your campsite
or whatever your goal is.
Now, autonomous navigation is doing exactly this,
but without a human in the loop.
Broadly speaking, it's how we get
a vehicle to determine its location using a set of sensors
and then to move on its own through an environment
to reach a desired goal.
And when I say vehicle, I mean any kind of mobile machine.
It could be a car traveling down a road or a UAV
making its way back to the airport or a spacecraft
journeying across the solar system or a submersible
exploring the depths of the ocean
or some other mobile robot.
In this way, we've given the vehicle autonomy--
the ability to make decisions and act on its own.
But there are different levels of autonomy,
and they range from a vehicle that is simply
operated by a human from a remote location,
but it has some simple algorithms onboard
that will take over and autonomously
keep it from running off a cliff or something, all the way
up to a fully autonomous vehicle with no human interaction
at all.
Now, for this series, we're mostly
going to focus on what it takes to make
a fully autonomous vehicle.
This is because there's a lot more involved in it,
and we can apply that knowledge to other vehicles that
fall elsewhere on this autonomy spectrum.
But even with fully autonomous navigation,
we can further divide this into two different approaches.
A heuristic approach where autonomy
is accomplished through a set of practical rules or behaviors--
this doesn't guarantee an optimal result,
but it's good enough to achieve some immediate goal.
The benefit of heuristics is that you
don't need complete information about the environment
to accomplish autonomy.
And then on the other side, there is an optimal approach,
which typically requires more knowledge of the environment,
and then a plan and resulting actions
comes from maximization or minimization
of an objective function.
So let's go into a little bit more detail into each of these.
An example of a heuristic approach
is a maze-solving vehicle where the simple rules
might be to drive forward and keep the wall on the left.
So it turns left when the wall turns left,
makes a U-turn at the end of the wall,
and it turns right at a corner.
This type of autonomous vehicle will
proceed to wander up and down the hallways
until it happens to reach the goal.
So in this way, the vehicle doesn't
have to maintain a map of the maze
or even know that it's in a maze in order to find the end.
It doesn't follow an optimal path, but it works--
at least as long as it doesn't get itself stuck in a loop.
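The wall-following heuristic described here can be sketched in a few lines. Below is a minimal Python sketch; the maze layout, the cell encoding ('#' = wall, '.' = open, 'G' = goal), and the turn order are illustrative assumptions, not details from the video:

```python
# Left-wall-following maze heuristic: prefer a left turn, then straight,
# then right, then a U-turn. No map of the maze is built or needed.
MAZE = [
    "#########",
    "#.......#",
    "#.#####.#",
    "#.#...#.#",
    "#.#.#.#.#",
    "#...#..G#",
    "#########",
]

# Headings in clockwise order: north, east, south, west (row, col deltas).
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def open_cell(r, c):
    return MAZE[r][c] != "#"

def wall_follow(start=(1, 1), heading=1, max_steps=200):
    """Follow the left-hand wall until the goal cell is reached."""
    r, c = start
    path = [(r, c)]
    for _ in range(max_steps):
        if MAZE[r][c] == "G":
            return path
        # Try left, straight, right, back (relative to current heading).
        for turn in (-1, 0, 1, 2):
            d = (heading + turn) % 4
            dr, dc = DIRS[d]
            if open_cell(r + dr, c + dc):
                heading = d
                r, c = r + dr, c + dc
                path.append((r, c))
                break
    return path  # goal not reached within max_steps

path = wall_follow()
print("reached goal:", MAZE[path[-1][0]][path[-1][1]] == "G")
```

Note that, as the video points out, the resulting route wanders; it reaches the goal without being anywhere near optimal.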
Other types of heuristic-based autonomy
include things like the simplest of robotic vacuums
where when it approaches an obstacle like a wall,
it rotates to a new random angle and just keeps going.
And as time increases, the chance
that the entire floor is covered approaches 100%.
And so in the end, the goal of having a clean floor
is met, even if the vehicle doesn't take the optimal path
to achieve it.
So this brings us to the second type
of fully autonomous vehicles, ones that are solving
an optimization problem.
In these systems, the vehicle builds
a model of the environment or it updates
a model that was given to it, and then
it figures out an optimal path to reach the goal.
And there's lots of great practical examples
of autonomous vehicles where optimal-based strategies
produce a much better result than their heuristic-based
counterparts.
Possibly the most famous at the moment
is autonomous driving where a vehicle has
to navigate to a destination through dynamic and chaotic
streets.
And relying on simple behaviors like drive
forward and keep the curb to your right
is probably not the best approach
to safely and quickly get to where you want to go.
It makes more sense to give the vehicle the ability
to model the dynamic environment, even
if that model is imperfect, and then
use it to determine an optimal solution.
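By contrast, an optimal planner minimizes an objective function over a model of the environment. The sketch below uses A* search, a common choice for this kind of problem (the video doesn't name a specific algorithm), to minimize path length over a small occupancy grid; the grid, start, and goal are illustrative assumptions:

```python
import heapq

# A* over an occupancy grid: the "model of the environment" is the grid,
# and the objective being minimized is the number of moves to the goal.
GRID = [  # 0 = free, 1 = obstacle
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

path = astar(GRID, (0, 0), (4, 0))
print("optimal path length:", len(path) - 1)
```

Because the Manhattan heuristic never overestimates the true cost on this grid, the first path A* returns is guaranteed to be a shortest one.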
Now, it's not usually the case where
a solution is either 100% heuristic or 100% optimal.
Often we can use both approaches to achieve a larger goal.
For example, with an autonomous car,
when it approaches a slower car, it
has to make a decision to either slow down
or to change lanes and pass.
Now, if it was going to make this decision optimally,
it would have to have knowledge beyond the front car
to determine if changing lanes is the best solution.
And that could be difficult to obtain.
So possibly a better solution is to have a heuristic behavior
that says something like, if it's safe to do so,
always attempt to pass slower cars.
And once that decision is made, an optimal path
to the adjacent lane can be created.
And in this way, these two approaches
can complement each other depending on the situation.
Autonomous cars aren't the only examples
of systems that make use of these two approaches.
There are other ground vehicles like they have in Amazon
warehouses that have to quickly maneuver to a given storage
area to move packages around while not running
into other mobile vehicles and stationary shelves or vehicles
that search within disaster areas
that have to navigate unknown and hazardous terrain.
There are space missions like OSIRIS-REx, which
has to navigate around the previously
unvisited asteroid Bennu and prepare for a precisely
located touch-and-go to collect a sample to return to Earth.
There are robotic arms and manipulators
that navigate within their local space to pick things up
and move them to new locations.
There's UAVs and drones that survey areas
and many, many more applications.
But autonomous navigation isn't necessarily easy,
despite how common it's becoming in the world.
And most of what makes it difficult
is that the vehicle has to navigate through an environment
that isn't perfectly known.
And so in order to create a plan,
it has to build up a model of the environment over time.
And the environment is constantly changing,
and so the model has to be constantly updated.
And then there's obstacles that move around and aren't
necessarily obvious, so sensing and recognizing them
is difficult as well.
And the more uncertainty there is
in the environment and the environment model,
the harder the navigation problem becomes.
For example, building an autonomous spacecraft
that's orbiting the earth is typically a simpler navigation
problem than an autonomous aircraft, at least
in terms of environment complexity.
Space is a more predictable environment than air
because we have less uncertainty with the forces that
act on the vehicle, and we have more certainty in the tracks
that other nearby objects are on.
Therefore, we can have more confidence in a plan
and then have a better expectation
that the spacecraft will autonomously
be able to follow that plan.
With aircraft, we have to deal with unknown turbulence
and flocks of birds flying around
and other human-controlled planes and landing and taxiing
around an airport.
But an autonomous aircraft is itself typically
a simpler problem than an autonomous
car for the same reasons.
There is much more uncertainty driving around in a city
than there is flying around in relatively open air.
So the thing I want to stress here
is that what makes these vehicles impressive
is not the fact that they can move on their own.
I mean, it's pretty trivial to get a car
to drive forward by itself.
You just need an actuator that depresses the gas pedal.
The car will take off and drive forward.
The difficult part is getting it to navigate autonomously
within an uncertain and changing environment.
For the car, it's to get to the destination efficiently
and to follow local traffic laws and to avoid potholes and balls
rolling into the street and to reroute around construction
and to avoid other cars driven by unpredictable humans,
and to do all of this in the snow and in the rain and so on.
It's not an easy feat.
So to understand how we get vehicles
to do that and other incredible autonomous tasks,
we need to revisit the capabilities of autonomous
systems that we covered in the first video of the Sensor
Fusion and Tracking series.
And if you haven't seen it and want a longer description,
I've left a link below.
But here's a quick recap.
Autonomous systems need to interact
with the physical world.
And part of interaction is to collect data
about the environment using sensors.
This sensor data has to be interpreted
into something that is more useful than just
measured quantities.
These are things like understanding
where other objects and obstacles are
and building a model or a map of the environment
and understanding the state of the autonomous vehicle itself,
what its location is and orientation.
And with this information, the vehicle
has everything it needs to plan a path
from the current location to the goal,
avoiding obstacles and other objects along the way.
And then the last step is to act on that plan,
to drive the motors and the actuators in such a way
that the vehicle follows the path.
The actuators impact the physical world,
and the whole loop continues.
We sense the environment.
We understand where we are in relation to landmarks
in the environment.
We perceive and track dynamic objects.
We replan given this new set of information.
We control the vehicle to follow that plan, and so on,
until we get to the goal.
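As a rough illustration, the sense-perceive-plan-act loop just described can be sketched as a skeleton. Every function below is a placeholder standing in for a real subsystem (sensor drivers, fusion and localization, a planner, a controller), not an actual API:

```python
# Skeleton of the autonomous navigation loop: sense -> perceive -> plan -> act.
# All bodies are trivial stand-ins for the real algorithms covered later.

def sense():
    """Collect raw measurements from onboard sensors (placeholder data)."""
    return {"lidar": [...], "imu": [...]}

def perceive(measurements, env_model):
    """Fuse measurements: update the map, the vehicle pose,
    and the tracks of dynamic obstacles."""
    env_model["pose"] = env_model.get("pose", (0.0, 0.0))
    return env_model

def plan(env_model, goal):
    """Replan a path from the current pose to the goal,
    avoiding obstacles in the model."""
    return [env_model["pose"], goal]  # trivial straight-line "plan"

def act(path):
    """Drive the actuators to follow the planned path."""
    return path[-1]  # pretend the vehicle reaches the next waypoint

def navigate(goal, max_iters=10):
    env_model = {}
    for _ in range(max_iters):
        measurements = sense()
        env_model = perceive(measurements, env_model)
        path = plan(env_model, goal)
        env_model["pose"] = act(path)
        if env_model["pose"] == goal:
            return True  # goal reached; loop ends
    return False

print(navigate((5.0, 5.0)))
```

The point of the skeleton is the structure: each cycle re-senses and replans, so new information about a changing environment flows into the plan on every iteration.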
Now, in the sensor fusion video, we
talked about how sensor fusion and tracking straddled
the sense and perceive steps.
And while sensor fusion and tracking
are absolutely necessary parts of autonomous navigation,
in this series, we're going to focus our attention
on other algorithms within the perceive step
and on the planning step.
And we're going to answer questions
like, what does it mean to generate
a model of the environment?
How does a vehicle know where it is within that model?
How does a vehicle track other large objects and obstacles?
What are some of the ways path planning is accomplished,
and then how do you know that the system is
going to work in the end?
So that's what you have to look forward to.
In the next video, we're going to explore
how a vehicle can determine its location within an environment
model using a particle filter and Monte Carlo localization.
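As a preview of that idea, here is a toy one-dimensional Monte Carlo localization sketch: maintain many position hypotheses (particles), weight each by how well a noisy range measurement to a known landmark matches it, and resample. All numbers and the measurement model are illustrative assumptions, not the formulation from the next video:

```python
import math
import random

# Toy 1-D particle filter: the filter never sees TRUE_POS directly;
# it only sees noisy ranges to a landmark at a known map position.
random.seed(0)
TRUE_POS = 7.0    # the vehicle's actual position (unknown to the filter)
LANDMARK = 10.0   # known landmark position in the map

def measure():
    """Noisy range from the true position to the landmark."""
    return abs(LANDMARK - TRUE_POS) + random.gauss(0, 0.2)

def likelihood(particle, z, sigma=0.5):
    """How plausible measurement z is if the vehicle were at `particle`."""
    expected = abs(LANDMARK - particle)
    return math.exp(-((z - expected) ** 2) / (2 * sigma ** 2))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(20):  # measurement/update cycles
    z = measure()
    weights = [likelihood(p, z) for p in particles]
    # Resample in proportion to weight, then jitter to model motion noise.
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [p + random.gauss(0, 0.1) for p in particles]

estimate = sum(particles) / len(particles)
print("estimated position:", round(estimate, 2))
```

After a few cycles the surviving particles cluster near the true position, which is the core intuition behind Monte Carlo localization.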
So if you don't want to miss that or any other future Tech
Talk videos, don't forget to subscribe to this channel.
And if you want to check out my channel, Control System
Lectures, I cover more control theory topics there as well.
Thanks for watching, and I'll see you next time.