What Is Autonomous Navigation? | Autonomous Navigation, Part 1

MATLAB
24 Jun 2020 · 11:30

Summary

TL;DR: This video series aims to provide a foundational understanding of autonomous navigation, exploring the complexities and algorithms involved. It discusses the ability of vehicles to navigate without human intervention, using sensors to determine location and plan paths to goals. The series differentiates between heuristic and optimal approaches to autonomy, illustrating their applications in various environments. Challenges such as navigating uncertain and dynamic environments are highlighted, emphasizing the importance of sensor fusion, tracking, and planning in achieving successful autonomous navigation.

Takeaways

  • 🌐 The video series aims to provide a foundational understanding of autonomous navigation, including its terminology, algorithms, and challenges.
  • 📍 Autonomous navigation is the ability of a vehicle to determine its location and plan a path to a destination without human intervention.
  • 🚗 The term 'vehicle' in this context encompasses a wide range of mobile machines, from cars and UAVs to spacecraft and underwater robots.
  • 🔄 There are varying levels of autonomy, from simple remote operation with basic onboard algorithms to fully autonomous vehicles with no human interaction.
  • 🔍 The series will primarily focus on fully autonomous vehicles, as they represent the most complex end of the autonomy spectrum.
  • 🛠 Two main approaches to autonomy are discussed: heuristic, which uses practical rules and doesn't guarantee optimal results, and optimal, which requires detailed environmental knowledge for planning.
  • 🧩 Heuristic approaches can work with incomplete information and are sufficient for immediate goals, such as a maze-solving vehicle keeping a wall on one side.
  • 📉 Optimal approaches involve building or updating an environmental model to determine the best path, which is crucial for complex tasks like autonomous driving.
  • 🤖 Heuristic and optimal approaches can be combined to achieve a larger goal, as seen in autonomous cars deciding when to pass slower vehicles.
  • 🛑 Autonomous navigation is challenging because vehicles must navigate uncertain and changing environments, which requires constantly updating the environment model.
  • 🔄 The complexity of navigation varies by environment, with space being more predictable than air, which in turn is more predictable than urban driving conditions.

Q & A

  • What is the main goal of the video series?

    -The main goal of the video series is to provide a basic understanding of the autonomous navigation problem, including the terms, algorithms needed, and the challenges it presents in certain environments.

  • What is autonomous navigation?

    -Autonomous navigation is the ability of a vehicle to determine its location within an environment and to figure out a path to reach a goal without human intervention.

  • What are the different types of vehicles that can perform autonomous navigation?

    -Autonomous navigation can be performed by various mobile machines, including cars, UAVs, spacecraft, submersibles, and other mobile robots.

  • What are the different levels of autonomy for vehicles?

    -Autonomy levels range from vehicles operated remotely by humans with basic onboard algorithms to prevent accidents, to fully autonomous vehicles with no human interaction at all.

  • What is the difference between a heuristic approach and an optimal approach in autonomous navigation?

    -A heuristic approach uses practical rules or behaviors that do not guarantee optimal results but are sufficient to achieve immediate goals. An optimal approach requires more environmental knowledge and involves planning and actions based on the maximization or minimization of an objective function.

  • How does a heuristic-based autonomous vehicle navigate through a maze?

    -A heuristic-based autonomous vehicle navigating a maze might follow simple rules like 'drive forward and keep the wall on the left,' allowing it to reach the goal without needing a map or complete environmental information.

  • What are some practical examples of heuristic-based autonomy?

    -Practical examples of heuristic-based autonomy include robotic vacuums that clean floors by moving randomly and autonomous vehicles that follow simple behaviors like 'always attempt to pass slower cars when safe to do so.'

  • How do fully autonomous vehicles that solve optimization problems navigate?

    -Fully autonomous vehicles solving optimization problems build or update a model of the environment and then determine an optimal path to the goal, taking into account dynamic and complex environments.

  • Why is autonomous navigation challenging?

    -Autonomous navigation is challenging because vehicles must navigate through environments that are not perfectly known, requiring them to build and constantly update a model of the environment, which is subject to change.

  • What are some examples of autonomous systems that use both heuristic and optimal approaches?

    -Examples include autonomous cars that use heuristic behaviors for lane changing while planning an optimal path, and ground vehicles in Amazon warehouses that maneuver around while avoiding collisions.

  • Why is it important for autonomous vehicles to be able to navigate in uncertain and changing environments?

    -It is important because real-world conditions such as traffic, weather, and unexpected obstacles are rarely known in advance; an autonomous vehicle must therefore adapt its environment model and its plans as conditions change.

  • What are the key steps in the autonomous navigation process as outlined in the script?

    -The key steps in the autonomous navigation process are sensing the environment, understanding the vehicle's location and environment model, perceiving and tracking dynamic objects, replanning based on new information, and controlling the vehicle to follow the planned path.

Outlines

00:00

🚀 Introduction to Autonomous Navigation

The first paragraph introduces the topic of autonomous navigation, emphasizing its importance in enabling vehicles to determine their location and plan a path to a goal without human intervention. The speaker, Brian, sets the stage for the video series by explaining the basics of autonomous navigation, including the different types of vehicles it applies to, such as cars, UAVs, spacecraft, and submersibles. The concept of autonomy levels is introduced, ranging from remote operation with simple onboard algorithms to fully autonomous vehicles. The focus of the series is on fully autonomous vehicles, which can be further divided into heuristic and optimal approaches. Heuristic approaches rely on practical rules that do not guarantee optimal results but are sufficient for immediate goals, whereas optimal approaches require a comprehensive understanding of the environment to plan the best path. Examples of heuristic approaches are given, such as a maze-solving vehicle that follows simple wall-following rules.

05:02

🔍 Combining Heuristics and Optimization in Autonomous Systems

The second paragraph delves into the practical application of heuristic and optimal approaches in autonomous systems. It explains how these two methods can complement each other, using the example of an autonomous car that might use a heuristic behavior to pass slower cars when it is safe to do so, and then calculate an optimal path for the maneuver. The paragraph also highlights various real-world applications of autonomous navigation, such as ground vehicles in Amazon warehouses, disaster area explorers, space missions like OSIRIS-REx, and robotic arms. The challenges of autonomous navigation are discussed, particularly the need to build and update environmental models in the face of uncertainty and changing conditions. The paragraph contrasts the navigation difficulties of spacecraft, aircraft, and cars, emphasizing the increasing complexity and uncertainty in each case. It concludes by stressing that autonomous vehicles are impressive not for their ability to move but for their ability to navigate efficiently and safely in uncertain environments.

10:04

🛠️ The Process of Autonomous Navigation

The third paragraph outlines the process of autonomous navigation, which includes sensing the environment, understanding and perceiving dynamic objects, replanning based on new information, and controlling the vehicle to follow the planned path. It recaps the importance of sensor fusion and tracking as part of the 'sense and perceive' steps in autonomous navigation. The paragraph sets the agenda for the video series, promising to explore topics such as environment modeling, vehicle localization, obstacle tracking, path planning, and system validation. The speaker also encourages viewers to subscribe for upcoming videos and to check out related content on control system lectures. The paragraph ends with a teaser for the next video, which will discuss how vehicles determine their location within an environment using particle filters and Monte Carlo localization.

Keywords

💡Autonomous Navigation

Autonomous navigation refers to the ability of a system or vehicle to determine its location within an environment and to plan a path to reach a desired destination without human intervention. In the context of the video, this concept is central as it discusses the challenges and methods by which various vehicles, from cars to spacecraft, can navigate their surroundings autonomously. The video mentions how autonomous navigation is achieved through a combination of sensor data, environmental modeling, and decision-making algorithms.

💡Vehicle Autonomy

Vehicle autonomy is the concept of a vehicle operating independently, making decisions, and performing actions on its own. The video discusses different levels of autonomy, ranging from simple remote operation with basic onboard algorithms to fully autonomous vehicles that require no human interaction. The script uses examples such as cars, UAVs, spacecraft, and submersibles to illustrate how these vehicles achieve a degree of autonomy, which is crucial for their operation in various environments.

💡Sensors

Sensors are devices that detect and respond to some type of input from the external environment. In the video, sensors are essential for autonomous vehicles as they provide the raw data needed to understand the vehicle's surroundings. The script mentions how sensors help in determining the vehicle's location and the presence of obstacles, which are vital for planning a safe and efficient path.

💡Heuristic Approach

A heuristic approach in the context of the video refers to a problem-solving strategy that uses practical rules or behaviors to guide decision-making. It does not guarantee an optimal solution but is sufficient for achieving immediate goals. The video gives an example of a maze-solving vehicle that follows simple rules like 'keep the wall on the left' to navigate, which is a heuristic method that does not require a complete map of the environment.
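To make the wall-following idea concrete, here is a minimal Python sketch of one decision step of a 'keep the wall on the left' rule. The boolean sensor inputs, the function name, and the action strings are hypothetical illustrations rather than anything shown in the video; the point is only that a short priority of rules, with no map of the maze, is enough to produce the described behavior.

    def left_wall_follower(wall_left: bool, wall_front: bool, wall_right: bool) -> str:
        """One decision step of a 'keep the wall on the left' heuristic."""
        if not wall_left:
            return "turn_left"    # the wall turned left, so follow it
        if not wall_front:
            return "go_forward"   # wall still on the left, keep driving
        if not wall_right:
            return "turn_right"   # inside corner: left and front are blocked
        return "u_turn"           # dead end: blocked on all three sides

    # Example: wall on the left and ahead, open to the right -> turn right
    print(left_wall_follower(wall_left=True, wall_front=True, wall_right=False))

Repeating this decision at every junction wanders the hallways until the goal happens to be reached, exactly the 'good enough but not optimal' behavior described above.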

💡Optimal Approach

The optimal approach, as discussed in the video, is a method that requires comprehensive knowledge of the environment and involves planning and actions derived from maximizing or minimizing an objective function. This approach is contrasted with the heuristic approach, where the video explains that an optimal approach is necessary for complex tasks like autonomous driving, where dynamic and chaotic conditions require more than just simple rules to navigate safely and efficiently.
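The defining ingredient of the optimal approach is an objective function that scores candidate plans. The toy Python sketch below, with invented candidate maneuvers, costs, and weights, shows only the bare mechanics: enumerate options, score each one, and pick the minimum. Real planners optimize over far richer trajectory spaces, but the structure is the same.

    # Hypothetical candidate maneuvers, each with a predicted travel time
    # and a lateral-effort penalty (all numbers invented for illustration).
    candidates = {
        "stay_in_lane": {"travel_time_s": 42.0, "lateral_effort": 0.0},
        "change_lanes": {"travel_time_s": 35.0, "lateral_effort": 1.0},
    }

    def objective(plan, w_time=1.0, w_effort=3.0):
        """Weighted cost to minimize; the weights encode what 'best' means."""
        return w_time * plan["travel_time_s"] + w_effort * plan["lateral_effort"]

    best = min(candidates, key=lambda name: objective(candidates[name]))
    print(best)  # "change_lanes" with these example numbers (cost 38.0 vs 42.0)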

💡Environment Modeling

Environment modeling is the process of creating a representation of the surroundings that an autonomous vehicle operates in. The video emphasizes the importance of this concept, as it allows the vehicle to understand its location and the presence of obstacles. The script mentions that the environment model must be constantly updated due to the dynamic nature of real-world settings, which is a challenge for autonomous navigation systems.
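One common concrete form of such a model (the video does not commit to a specific representation) is an occupancy grid that is updated as sensor detections arrive. This minimal sketch assumes detections already come as grid cells, which glosses over the ray casting and probabilistic updates a real mapper would use.

    import numpy as np

    # A tiny occupancy grid: 0 = assumed free, 1 = observed obstacle.
    grid = np.zeros((10, 10), dtype=np.uint8)

    def mark_obstacle(grid, x, y):
        """Update the model when a sensor reports an obstacle in cell (x, y)."""
        grid[y, x] = 1

    # Hypothetical detections arriving over time; the model keeps changing.
    for x, y in [(3, 4), (3, 5), (7, 2)]:
        mark_obstacle(grid, x, y)

    print(int(grid.sum()), "cells currently marked occupied")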

💡Path Planning

Path planning is the process of determining a route from a starting point to a destination while avoiding obstacles and other constraints. In the video, path planning is a critical component of autonomous navigation, where the vehicle must calculate an optimal or feasible path based on its environmental model and the goals it is trying to achieve. The script illustrates this with examples such as a vehicle navigating through dynamic streets or a spacecraft planning its trajectory.
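As a minimal illustration of planning over such a model, the sketch below runs breadth-first search on a small occupancy grid, which yields a shortest path in steps when every move costs the same. The grid, start, and goal are invented; practical planners typically use A*, lattice, or sampling-based methods, but the input (a model plus a goal) and the output (a path) are the same.

    from collections import deque

    def shortest_path(grid, start, goal):
        """Breadth-first search on a 4-connected grid; 1 = blocked cell."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            current = frontier.popleft()
            if current == goal:
                break
            r, c = current
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = current
                    frontier.append((nr, nc))
        if goal not in came_from:
            return None  # goal unreachable with the current model
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = came_from[node]
        return path[::-1]

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(shortest_path(grid, (0, 0), (2, 0)))  # detours around the blocked cells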

💡Uncertainty

Uncertainty in the video refers to the lack of complete information or the presence of unpredictable elements in the environment that can affect the navigation process. The script discusses how uncertainty makes autonomous navigation more challenging, as it requires the vehicle to constantly update its model of the environment and adapt its plans accordingly. Examples include dealing with unknown turbulence for an aircraft or unpredictable human-driven cars on the road.

💡Sensor Fusion

Sensor fusion is the process of combining data from multiple sensors to produce more consistent, accurate, and useful information than any individual sensor could provide alone. The video mentions sensor fusion as a necessary part of autonomous navigation, where it helps in interpreting sensor data into a coherent understanding of the environment, which is essential for tasks like obstacle detection and environmental modeling.
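A minimal numerical picture of this idea, using invented sensor values and variances, is inverse-variance weighting of two measurements of the same distance: the less noisy sensor gets more weight, and the fused estimate is more certain than either input. Full sensor fusion stacks use Kalman-style filters over many states, but the weighting intuition carries over.

    def fuse(z1, var1, z2, var2):
        """Combine two noisy measurements of the same quantity."""
        w1, w2 = 1.0 / var1, 1.0 / var2          # weight = inverse variance
        fused = (w1 * z1 + w2 * z2) / (w1 + w2)  # weighted average
        fused_var = 1.0 / (w1 + w2)              # smaller than var1 and var2
        return fused, fused_var

    # Hypothetical range to an obstacle: lidar 10.2 m (tight), radar 10.8 m (loose).
    print(fuse(10.2, 0.04, 10.8, 0.25))  # fused estimate leans toward the lidar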

💡Control Systems

Control systems in the context of the video are the mechanisms that govern the behavior of the autonomous vehicle based on the planned path. The script touches on how control systems are responsible for driving the motors and actuators to follow the determined path, which is a critical step in the autonomous navigation process. The video suggests that understanding control systems is key to ensuring that the vehicle can execute its navigation plans effectively.
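To show the 'follow the planned path' step in its simplest form, here is a proportional steering sketch that turns the vehicle toward the next waypoint. The pose convention, gain, and function name are assumptions made for illustration; production path followers (pure pursuit, Stanley, MPC) are considerably more involved.

    import math

    def steering_command(pose, waypoint, k_p=1.5):
        """Proportional steering toward the next waypoint on the planned path.

        pose = (x, y, heading_rad); returns a turn-rate command.
        """
        x, y, heading = pose
        desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
        error = math.atan2(math.sin(desired - heading),
                           math.cos(desired - heading))  # wrap to [-pi, pi]
        return k_p * error

    # Vehicle at the origin facing +x, next waypoint up and to the right.
    print(steering_command((0.0, 0.0, 0.0), (5.0, 5.0)))  # positive -> turn left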

Highlights

The video series aims to provide a basic understanding of the autonomous navigation problem.

Autonomous navigation is the ability to determine location and plan a path without human intervention.

Autonomous vehicles can range from simple remote operation to fully autonomous systems.

Fully autonomous vehicles require more complex algorithms and will be the main focus of the series.

Autonomous navigation can be approached heuristically or optimally, depending on how much knowledge of the environment is available.

Heuristic approaches use practical rules and don't guarantee optimal results but are sufficient for immediate goals.

Optimal approaches require detailed environmental knowledge and plan actions to maximize or minimize an objective function.

Examples of heuristic approaches include maze-solving vehicles and robotic vacuums.

Optimal approaches are exemplified by autonomous driving, which requires dynamic environment modeling.

Autonomous systems often use a combination of heuristic and optimal approaches for complex tasks.

The difficulty of autonomous navigation increases with the uncertainty and changeability of the environment.

Spacecraft navigation is typically simpler than aircraft or car navigation due to less environmental uncertainty.

Autonomous vehicles must navigate through uncertain and changing environments to reach their goals.

Autonomous systems interact with the physical world by collecting data through sensors.

Sensor data is interpreted to understand the environment and the vehicle's state, which is crucial for path planning.

The series will explore algorithms for environment modeling, localization, tracking, and path planning.

Upcoming videos will cover how vehicles determine their location using particle filters and Monte Carlo localization (a minimal sketch follows below).
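As a preview of that topic, the sketch below runs one predict/weight/resample cycle of a one-dimensional particle filter. Everything in it (the landmark position, noise levels, and function name) is invented for illustration and deliberately simplified; Monte Carlo localization as covered in the next video works over a full map with real motion and sensor models.

    import math
    import random

    def monte_carlo_step(particles, motion, measured_range, landmark, noise_std=0.5):
        """One predict/weight/resample cycle of a 1-D particle filter."""
        # Predict: move every particle by the commanded motion, plus a little noise.
        predicted = [p + motion + random.gauss(0.0, 0.1) for p in particles]

        # Weight: particles that explain the range measurement score highly.
        def likelihood(p):
            error = abs(landmark - p) - measured_range
            return math.exp(-0.5 * (error / noise_std) ** 2)

        weights = [likelihood(p) for p in predicted]

        # Resample: draw a new particle set in proportion to the weights.
        return random.choices(predicted, weights=weights, k=len(particles))

    random.seed(0)
    particles = [random.uniform(0.0, 8.0) for _ in range(500)]
    # Hypothetical step: the robot moves 1 m and measures 4 m to a landmark at 8 m.
    particles = monte_carlo_step(particles, motion=1.0, measured_range=4.0, landmark=8.0)
    print(sum(particles) / len(particles))  # particles concentrate near the 4 m position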

Transcripts

[00:00] The goal of this video series is to give you a basic understanding of the autonomous navigation problem-- what some of the terms are, some of the needed algorithms, and what makes this problem difficult in certain environments. So that's what we're going to cover over a few videos. But in this first one, I want to set the stage a bit and just introduce the idea of autonomous navigation. I think it's pretty interesting, so I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.

[00:28] Navigation is the ability to determine your location within an environment and to be able to figure out a path that will take you from your current location to some goal. Navigating in the wilderness might require, say, GPS to determine where you are and a map to plan the best path to get around mountains and lakes to reach your campsite or whatever your goal is.

[00:50] Now, autonomous navigation is doing exactly this, but without a human in the loop. Broadly speaking, it's how we get a vehicle to determine its location using a set of sensors and then to move on its own through an environment to reach a desired goal. And when I say vehicle, I mean any kind of mobile machine. It could be a car traveling down a road or a UAV making its way back to the airport or a spacecraft journeying across the solar system or a submersible exploring the depths of the ocean or some other mobile robot. In this way, we've given the vehicle autonomy-- the ability to make decisions and act on its own.

[01:30] But there are different levels of autonomy, and they range from a vehicle that is simply operated by a human from a remote location, but it has some simple algorithms onboard that will take over and autonomously keep it from running off a cliff or something, all the way up to a fully autonomous vehicle with no human interaction at all. Now, for this series, we're mostly going to focus on what it takes to make a fully autonomous vehicle. This is because there's a lot more involved in it, and we can apply that knowledge to other vehicles that fall elsewhere on this autonomy spectrum.

[02:06] But even with fully autonomous navigation, we can further divide this into two different approaches. A heuristic approach where autonomy is accomplished through a set of practical rules or behaviors-- this doesn't guarantee an optimal result, but it's good enough to achieve some immediate goal. The benefit of heuristics is that you don't need complete information about the environment to accomplish autonomy. And then on the other side, there is an optimal approach, which typically requires more knowledge of the environment, and then a plan and resulting actions come from maximization or minimization of an objective function. So let's go into a little bit more detail into each of these.

[02:48] An example of a heuristic approach is a maze-solving vehicle where the simple rules might be to drive forward and keep the wall on the left. So it turns left when the wall turns left, makes a U-turn at the end of the wall, and it turns right at a corner. This type of autonomous vehicle will proceed to wander up and down the hallways until it happens to reach the goal. So in this way, the vehicle doesn't have to maintain a map of the maze or even know that it's in a maze in order to find the end. It doesn't follow an optimal path, but it works-- at least as long as it doesn't get itself stuck in a loop.

[03:27] Other types of heuristic-based autonomy include things like the simplest of robotic vacuums where when it approaches an obstacle like a wall, it rotates to a new random angle and just keeps going. And as time increases, the chance that the entire floor is covered approaches 100%. And so in the end, the goal of having a clean floor is met, even if the vehicle doesn't take the optimal path to achieve it.

[03:53] So this brings us to the second type of fully autonomous vehicles, ones that are solving an optimization problem. In these systems, the vehicle builds a model of the environment or it updates a model that was given to it, and then it figures out an optimal path to reach the goal. And there's lots of great practical examples of autonomous vehicles where optimal-based strategies produce a much better result than their heuristic-based counterparts. Possibly the most famous at the moment is autonomous driving where a vehicle has to navigate to a destination through dynamic and chaotic streets. And relying on simple behaviors like drive forward and keep the curb to your right is probably not the best approach to safely and quickly get to where you want to go. It makes more sense to give the vehicle the ability to model the dynamic environment, even if that model is imperfect, and then use it to determine an optimal solution.

[04:49] Now, it's not usually the case where a solution is either 100% heuristic or 100% optimal. Often we can use both approaches to achieve a larger goal. For example, with an autonomous car, when it approaches a slower car, it has to make a decision to either slow down or to change lanes and pass. Now, if it was going to make this decision optimally, it would have to have knowledge beyond the front car to determine if changing lanes is the best solution. And that could be difficult to obtain. So possibly a better solution is to have a heuristic behavior that says something like, if it's safe to do so, always attempt to pass slower cars. And once that decision is made, an optimal path to the adjacent lane can be created. And in this way, these two approaches can complement each other depending on the situation.

[05:41] Autonomous cars aren't the only examples of systems that make use of these two approaches. There are other ground vehicles like they have in Amazon warehouses that have to quickly maneuver to a given storage area to move packages around while not running into other mobile vehicles and stationary shelves, or vehicles that search within disaster areas that have to navigate unknown and hazardous terrain. There are space missions like OSIRIS-REx, which has to navigate around the previously unvisited asteroid Bennu and prepare for a precisely located touch-and-go to collect a sample to return to Earth. There are robotic arms and manipulators that navigate within their local space to pick things up and move them to new locations. There's UAVs and drones that survey areas and many, many more applications.

[06:27] But autonomous navigation isn't necessarily easy, despite how common it's becoming in the world. And most of what makes it difficult is that the vehicle has to navigate through an environment that isn't perfectly known. And so in order to create a plan, it has to build up a model of the environment over time. And the environment is constantly changing, and so the model has to be constantly updated. And then there's obstacles that move around and aren't necessarily obvious, so sensing and recognizing them is difficult as well. And the more uncertainty there is in the environment and the environment model, the harder the navigation problem becomes.

[07:05] For example, building an autonomous spacecraft that's orbiting the Earth is typically a simpler navigation problem than an autonomous aircraft, at least in terms of environment complexity. Space is a more predictable environment than air because we have less uncertainty with the forces that act on the vehicle, and we have more certainty in the tracks that other nearby objects are on. Therefore, we can have more confidence in a plan and then have a better expectation that the spacecraft will autonomously be able to follow that plan. With aircraft, we have to deal with unknown turbulence and flocks of birds flying around and other human-controlled planes and landing and taxiing around an airport. But an autonomous aircraft is itself typically a simpler problem than an autonomous car for the same reasons. There is much more uncertainty driving around in a city than there is flying around in relatively open air.

[08:02] So the thing I want to stress here is that what makes these vehicles impressive is not the fact that they can move on their own. I mean, it's pretty trivial to get a car to drive forward by itself. You just need an actuator that compresses the gas pedal. The car will take off and drive forward. The difficult part is getting it to navigate autonomously within an uncertain and changing environment. For the car, it's to get to the destination efficiently and to follow local traffic laws and to avoid potholes and balls rolling into the street and to reroute around construction and to avoid other cars driven by unpredictable humans, and to do all of this in the snow and in the rain and so on. It's not an easy feat.

[08:48] So to understand how we get vehicles to do that and other incredible autonomous tasks, we need to revisit the capabilities of autonomous systems that we covered in the first video of the Sensor Fusion and Tracking series. And if you haven't seen it and want a longer description, I've left a link below. But here's a quick recap. Autonomous systems need to interact with the physical world. And part of interaction is to collect data about the environment using sensors. This sensor data has to be interpreted into something that is more useful than just measured quantities. These are things like understanding where other objects and obstacles are, and building a model or a map of the environment, and understanding the state of the autonomous vehicle itself, what its location and orientation are. And with this information, the vehicle has everything it needs to plan a path from the current location to the goal, avoiding obstacles and other objects along the way. And then the last step is to act on that plan, to drive the motors and the actuators in such a way that the vehicle follows the path. The actuators impact the physical world, and the whole loop continues.

[10:03] We sense the environment. We understand where we are in relation to landmarks in the environment. We perceive and track dynamic objects. We replan given this new set of information. We control the vehicle to follow that plan, and so on, until we get to the goal.

[10:21] Now, in the sensor fusion video, we talked about how sensor fusion and tracking straddled the sense and perceive steps. And while sensor fusion and tracking are absolutely necessary parts of autonomous navigation, in this series, we're going to focus our attention on other algorithms within the perceive step and on the planning step. And we're going to answer questions like, what does it mean to generate a model of the environment? How does a vehicle know where it is within that model? How does a vehicle track other large objects and obstacles? What are some of the ways path planning is accomplished, and then how do you know that the system is going to work in the end?

[11:03] So that's what you have to look forward to. In the next video, we're going to explore how a vehicle can determine its location within an environment model using a particle filter and Monte Carlo localization. So if you don't want to miss that or any other future Tech Talk videos, don't forget to subscribe to this channel. And if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I'll see you next time.


Related Tags
Autonomous Navigation, Tech Talk, Robotics, AI Algorithms, Vehicle Control, Sensor Fusion, Path Planning, Machine Learning, Spacecraft, UAVs