Hill Climbing Algorithm in Artificial Intelligence with Real Life Examples | Heuristic Search

Gate Smashers
27 Dec 2019 · 10:13

Summary

TL;DR: This video introduces the Simple Hill Climbing algorithm, a local search technique with no global problem knowledge. It explores the greedy approach, where the algorithm keeps moving forward as long as it finds a better state, stopping when no improvements are possible. Key points covered include local maxima, plateaus, and ridges—common problems that can limit progress. Using examples, the video explains how the algorithm selects the best move based on heuristic values and highlights the challenges faced when the algorithm cannot backtrack. It's a valuable resource for students preparing for exams.

Takeaways

  • 🔍 Simple hill climbing is a local search algorithm with knowledge limited to its local domain, not the global problem.
  • ⚡ The algorithm uses a greedy approach, continuously progressing until the best move is no longer available.
  • 🚫 Hill climbing does not allow backtracking, meaning it cannot reverse once it reaches a point where no better move is possible.
  • 🌲 The algorithm starts from an initial state and explores new nodes by selecting the best from multiple branches based on their heuristic values.
  • 🧮 Beam width in hill climbing is 1, meaning it explores only the best move and forgets the others without saving them in memory.
  • 🧗 The algorithm mimics climbing a hill while blindfolded, moving upward as long as a better state is found but stopping once no better state is available.
  • 📉 The local maximum problem occurs when the algorithm reaches a peak that is not globally optimal, preventing further improvement.
  • ⛰️ A plateau or flat maximum occurs when all neighboring states have the same heuristic value, causing the algorithm to stop due to no clear direction.
  • 🧗‍♂️ Ridge is a problem in hill climbing where the algorithm moves in only one direction, even if a better state is available in another direction.
  • ❌ Hill climbing is efficient in terms of space complexity since it does not store all states, only the best current state, but this can lead to issues like local maxima.
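The greedy loop described in these takeaways can be sketched in a few lines of Python. This is a minimal illustration, not code from the video; the integer state space, the `h` heuristic, and the `neighbors` function are made-up assumptions, chosen so that the search gets stuck at a local optimum before reaching the true goal.

```python
def simple_hill_climb(start, neighbors, h):
    """Greedy local search: repeatedly move to the best neighbor with a
    strictly lower heuristic value; stop (no backtracking) when none improves."""
    current = start
    while True:
        better = [n for n in neighbors(current) if h(n) < h(current)]
        if not better:                    # local optimum, plateau, or goal
            return current
        current = min(better, key=h)      # beam width 1: keep only the best move

# Toy state space (an assumption for illustration): states are integers,
# the goal is 7 (heuristic 0), but a "wall" at 4 blocks the path from 0.
h = lambda s: 10 if s == 4 else abs(s - 7)
neighbors = lambda s: [s - 1, s + 1]
print(simple_hill_climb(0, neighbors, h))   # stops at 3, a local optimum
print(simple_hill_climb(6, neighbors, h))   # starting past the wall reaches 7
```

Starting from 0 the search halts at state 3, because both neighbors (2 and the "wall" at 4) have a higher heuristic—exactly the local maximum problem the video describes.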

Q & A

  • What is the simple hill climbing algorithm?

    -The simple hill climbing algorithm is a local search algorithm that focuses on finding the best move by comparing the current state with neighboring states, aiming to achieve a better state without knowledge of the global solution.

  • What does it mean that the simple hill climbing algorithm is a local search algorithm?

    -It means that the algorithm only has knowledge of its immediate surroundings (local domain) and makes decisions based solely on this information without considering the global view of the problem.

  • How does the greedy approach work in hill climbing?

    -In the greedy approach, the algorithm continuously moves towards a better state as long as it finds one. Once no better state is found, the algorithm stops.

  • Why is backtracking not allowed in the simple hill climbing algorithm?

    -Backtracking is not allowed because the algorithm does not store previously visited states in memory. Once it moves to a new state, it cannot go back, even if no better state is found.

  • What is the role of heuristic values in the simple hill climbing algorithm?

    -Heuristic values guide the algorithm to choose the best option by evaluating the cost or value of each state. The goal is to minimize the heuristic value to find the optimal solution.

  • What is the 'local maximum' problem in hill climbing?

    -The local maximum problem occurs when the algorithm reaches a state that seems like the best solution based on its local knowledge, but there may be a better solution (global maximum) beyond its current view.

  • What is the 'plateau' or 'flat maximum' problem in hill climbing?

    -The plateau problem arises when the algorithm reaches a state where all neighboring states have the same heuristic value. Since there is no better move, the algorithm cannot decide where to go next and stops.
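The halting cases discussed in these answers can be told apart mechanically. `classify_stop` below is a hypothetical helper, not from the video, assuming a minimizing heuristic `h` and a `neighbors` function as before.

```python
def classify_stop(current, neighbors, h):
    """Explain why hill climbing halts at `current`
    (hypothetical helper; lower heuristic = better)."""
    hs = [h(n) for n in neighbors(current)]
    if min(hs) < h(current):
        return "can still improve"
    if all(v == h(current) for v in hs):
        return "plateau"            # every neighbor looks equally good
    return "local optimum"          # every neighbor is strictly worse

neighbors = lambda s: [s - 1, s + 1]
print(classify_stop(0, neighbors, lambda s: 5))        # plateau
print(classify_stop(0, neighbors, lambda s: abs(s)))   # local optimum
```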

  • What is the 'ridge' problem in hill climbing?

    -The ridge problem occurs when the algorithm is moving in a single direction and reaches a point where a better state exists in a different direction. However, because the algorithm does not change its direction, it cannot reach the better state.

  • How does space complexity affect the performance of the simple hill climbing algorithm?

    -The space complexity is low in the simple hill climbing algorithm because it does not store all the explored states in memory. It only keeps track of the current best state, which reduces memory usage.

  • What is the significance of 'beam width' in the hill climbing algorithm?

    -In simple hill climbing, the beam width is set to one, meaning the algorithm only explores the best move from the current state and discards other options. This narrow focus helps reduce complexity but limits exploration of alternative solutions.
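The beam-width idea can be made concrete with a small sketch (names and the toy heuristic are assumptions, using a minimizing heuristic as in the video): one expansion step that keeps only the `beam_width` best successors. With `beam_width=1` this reduces to simple hill climbing's "keep the best, forget the rest".

```python
import heapq

def beam_step(frontier, neighbors, h, beam_width):
    """Expand every state in the frontier and keep only the
    beam_width successors with the lowest heuristic values."""
    successors = [n for s in frontier for n in neighbors(s)]
    return heapq.nsmallest(beam_width, successors, key=h)

h = lambda s: abs(s - 7)                 # toy heuristic: distance to goal 7
neighbors = lambda s: [s - 1, s + 1]
print(beam_step([0], neighbors, h, 1))   # [1]     -- hill climbing's single choice
print(beam_step([0], neighbors, h, 2))   # [1, -1] -- a wider beam remembers more
```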

Outlines

00:00

🤖 Introduction to Simple Hill Climbing Algorithm

This paragraph introduces the Simple Hill Climbing Algorithm and its relevance in competitive exams and university-level assessments. The speaker quickly touches on important points such as its local search nature, greedy approach, and no backtracking. The algorithm explores the best move in each step and continues until it no longer finds a better option. A basic algorithmic structure is provided, with a real-world analogy of blindfolded hill climbing to explain how the algorithm operates by always progressing upwards unless a better state is no longer found.

05:00

🚩 Challenges in Simple Hill Climbing: Local Maximum Problem

This section explains the problem of local maxima in the Simple Hill Climbing algorithm. The speaker compares it to reaching a peak while blindfolded, unaware that there may be a taller peak nearby. Once the local maximum is reached, the algorithm cannot proceed to explore higher hills because it only allows movement toward better states, without backtracking to lower points. The example shows how this creates limitations, particularly in scenarios like the Traveling Salesman and 8-Queens problems, where this type of search can miss better solutions due to its narrow focus on the immediate best choice.
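Hill climbing on the 8-Queens problem, mentioned here, can be sketched as follows. This is a minimal illustration (not the video's code): the state is one queen per column, and the heuristic counts attacking pairs, which the search greedily minimizes until it reaches a local minimum—possibly with conflicts still remaining, which is exactly the local-maximum limitation being described.

```python
import random

def conflicts(cols):
    """Heuristic for n-queens: number of attacking pairs (0 = solution).
    cols[i] is the row of the queen in column i."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def queens_hill_climb(n=8):
    """Move one queen within her column to the placement that most
    reduces conflicts; stop at a local minimum (possibly not 0)."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_h = state, conflicts(state)
        for col in range(n):
            for row in range(n):
                cand = state[:col] + [row] + state[col + 1:]
                if conflicts(cand) < best_h:
                    best, best_h = cand, conflicts(cand)
        if best == state:       # no improving move: local minimum reached
            return state, best_h
        state = best
```

Running `queens_hill_climb()` a few times shows both behaviors: sometimes it reaches 0 conflicts, sometimes it halts with conflicts left and, with no backtracking, cannot escape.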

10:03

📊 Flat Maximum and Ridge Problems in Simple Hill Climbing

This paragraph delves into two other key issues with Simple Hill Climbing: the flat maximum (or plateau) and ridge problems. A flat maximum occurs when all neighboring states have the same heuristic value, leading the algorithm to a standstill as it cannot choose between equally valued options. The ridge problem is highlighted as a case where the algorithm moves in one direction, unable to switch direction even if a higher point exists nearby. These issues further limit the efficiency of Simple Hill Climbing, making it more prone to getting stuck without finding optimal solutions in certain complex problems.


Keywords

💡Hill Climbing Algorithm

The hill climbing algorithm is a local search technique used in artificial intelligence to find solutions by iteratively improving a single state. It chooses the neighboring state that appears better based on a heuristic, without considering the global landscape. The video explains this algorithm as being 'blindfolded,' where the algorithm only focuses on the local environment to make decisions, such as climbing a hill until a better move is no longer possible.

💡Local Search

Local search refers to an algorithm's ability to focus on a small part of the search space rather than the entire problem domain. The hill climbing algorithm is an example of a local search algorithm because it does not have global knowledge but only makes decisions based on its current state and immediate neighboring states. The script emphasizes this feature by explaining that the algorithm 'works in the local domain.'

💡Heuristic Value

A heuristic value is a measure used to estimate how close a given state is to the goal state. In the video, heuristic values help the algorithm decide which state is better by comparing values and choosing the one with the lower cost. The video uses an example where the current state has a heuristic of 5, and the algorithm compares it with neighboring states to decide the next move.

💡Greedy Approach

The greedy approach refers to the algorithm's behavior of always selecting the best immediate option without considering future consequences. In hill climbing, this means choosing the next state that has the best heuristic value and continuing until no better options are available. The video describes this as a key feature where the algorithm keeps moving 'until it is getting the best move.'

💡Beam Width

Beam width in this context refers to the number of states the algorithm explores at each level of the search tree. In simple hill climbing, the beam width is 1, meaning only one state is considered at each step, which reduces space complexity but can limit the ability to explore multiple possibilities. The video contrasts this with other algorithms that may explore multiple paths.

💡Local Maximum

A local maximum is a point in the search space where the algorithm can no longer find a better neighboring state, even though there may be a better state globally. The video explains this problem using the metaphor of climbing a hill blindfolded and thinking you’ve reached the top, only to realize later that a taller hill exists nearby. This is one of the key limitations of hill climbing.

💡No Backtracking

No backtracking means the algorithm cannot return to a previous state once it has moved to a new one. In hill climbing, this restriction prevents the algorithm from revisiting earlier options, even if they may eventually lead to better solutions. The video highlights this as a limitation, explaining that once the algorithm reaches a local maximum, it cannot explore other paths that require going downhill first.

💡Plateau or Flat Maximum

A plateau, or flat maximum, occurs when the neighboring states all have the same heuristic value, leaving the algorithm with no clear direction to proceed. In the video, this problem is illustrated as a scenario where all neighboring states have the value 5, making it impossible for the algorithm to choose a better move, causing it to stop without finding an optimal solution.

💡Ridge

A ridge is a situation where the algorithm is stuck in a narrow, steep section of the search space, making it hard to move towards a better solution without first descending. The video uses a 3D example to explain how the hill climbing algorithm may fail to reach a higher peak (B) because it is moving in only one direction and cannot go back to explore other paths.

💡Space Complexity

Space complexity refers to the amount of memory the algorithm requires as it runs. The hill climbing algorithm is efficient in terms of space complexity because it only stores the current state and does not keep track of previously explored states. This is a key advantage discussed in the video, as the algorithm 'removes the others' and only focuses on the best state.

Highlights

Simple hill climbing algorithm is a local search algorithm with knowledge of only its local domain.

The algorithm follows a greedy approach and stops when it no longer finds a better move.

Hill climbing algorithm does not allow backtracking; once it moves forward, it cannot revisit previous states.

The algorithm starts from an initial state and selects the best option among multiple branches using heuristic values.

Beam width in hill climbing is one, meaning only the best option is explored while others are discarded.

The primary rule in hill climbing is that if a state is better than the current state, it becomes the new current state.

One major problem in hill climbing is the local maximum issue, where the algorithm gets stuck at a local peak, thinking it has reached the global maximum.

The algorithm works like climbing a hill blindfolded—only knowing if the current step is better than the last one.

Local maximum means the algorithm reaches a stable state where it can't move up or down and assumes it has reached the goal.

In some cases, like the 8-queen or traveling salesman problems, hill climbing is beneficial due to its low space complexity.

Another issue with hill climbing is the plateau or flat maximum, where all neighboring states have the same heuristic value, and no progress can be made.

Ridge is another problem in hill climbing, where the path is upward but the algorithm cannot change direction to reach the global maximum.

Hill climbing only focuses on one best move at a time and doesn't store other possible paths in memory, reducing space complexity.

If the heuristic value of the next state is higher, the algorithm halts as it does not allow moving to worse states.

Backtracking is not allowed in hill climbing, meaning once a state is discarded, the algorithm cannot revisit it, limiting its exploration.

Transcripts

play00:00

Hello friends, welcome to Gate Smashers

play00:02

In this video we are going to discuss

play00:04

Simple hill climbing algorithm

play00:05

And in this video we are going to discuss all the important points about it

play00:09

Which will be very beneficial in a competitive exam and in your university and college level exams also

play00:15

So guys quickly like the video

play00:17

Subscribe to the channel if you haven't already

play00:19

And please press the bell button

play00:21

So that you can get all the latest notifications

play00:23

So we will start simple hill climbing algorithm

play00:26

First important point is

play00:27

It is a local search algorithm

play00:29

That means it has got knowledge of its local domain

play00:32

It does not have any knowledge of the whole global problem

play00:37

Second is greedy approach

play00:39

Until it is getting best move it keeps on going

play00:41

As soon as it stops finding a better move, it will stop right there

play00:45

Third, no backtracking

play00:47

If it is not getting any best move

play00:49

Then it cannot do backtrack

play00:52

We will go ahead and discuss this with an example

play00:55

So over here there is a simple algorithm

play00:57

Just we're starting with the initial state

play01:00

We will start any problem from the initial state

play01:02

Until we get a solution

play01:04

So what we are doing we are finding a new operator that is a new node

play01:07

If we see over here with a simple example

play01:10

Then this is my initial state

play01:11

Ahead of which I have 3 states

play01:14

That means it has 3 branches

play01:16

Now from these 3 branches

play01:17

We pick the best based on their heuristic values, that is, their cost

play01:22

Whichever method we are using

play01:23

Based on their heuristic values

play01:25

We have to find out best from all 3

play01:28

What is the beam width in this? 1

play01:30

Beam width one means

play01:32

The one which is best from all these 3

play01:34

It will only explore this ahead

play01:37

It will forget others

play01:39

That means neither we are saving them in the memory

play01:41

This was the best from them

play01:43

So we explored this ahead

play01:45

Now let's say we have these 3 problems ahead

play01:47

Now these are the 3 branches as we go ahead

play01:49

Now let's say if this is best from the 3

play01:51

Then this will be explored ahead

play01:53

You forget this

play01:55

This is the simple concept of simple hill climbing algorithm

play01:59

This line is the most important

play02:01

If the state is better than the current state

play02:05

Then it is a new current state

play02:07

That means you keep on going

play02:09

Until you are getting better state

play02:11

Till then you keep on moving ahead

play02:13

As you stop getting better state

play02:15

So from there you cannot do backtrack

play02:19

It works in a very simple way

play02:21

Because what is its name hill climbing

play02:23

What is the meaning of hill climbing over here

play02:24

That you are simply climbing a hill

play02:27

But blindfolded

play02:29

Blindfolded means you have tied a cloth over your eyes

play02:31

And you are climbing a hill

play02:33

In that what is the best move that we have

play02:35

Until you are going on a height

play02:38

Until you are climbing up

play02:40

Till then you will keep on climbing

play02:41

If in the middle somewhere you feel that your leg is going downwards

play02:45

Then you will not go there

play02:48

Until your leg is going towards up

play02:50

You keep on following the best move

play02:53

At one point your leg will be stable

play02:57

That means neither you can go down nor up

play03:00

So you will go there and stop

play03:03

So that is the best path for you

play03:07

Over here you went till this path

play03:10

So you have achieved maximum value over here

play03:14

All the problem over here is which we are going to discuss ahead

play03:17

So this example I have already written over here just to save the time

play03:21

Let's see over here this is the problem this is the starting and this is the goal state

play03:25

Now from starting state I have these 3 options

play03:28

I can move towards anyone from these 3

play03:30

But now what I have to check? Heuristic value

play03:34

Now when I checked the heuristic value, the heuristic value of this state is 4

play03:37

It's heuristic is 5

play03:38

It's heuristic is 6

play03:39

What I have to do is I have to select only one which is the best

play03:41

That means from these 3

play03:43

Which is my current state? This

play03:45

So which one is better than the current state?

play03:47

You just have to focus on this line

play03:49

If the state is better than the current state

play03:52

Then see which is the better from these 3

play03:54

Is this better? No

play03:55

This is equal

play03:56

Is this better? No it's heuristic value is more

play03:59

We want less

play04:00

We have to reach to the goal state in the minimum cost

play04:03

So which one is the better state

play04:05

This one

play04:06

Where will it move? Towards this side

play04:08

It will forget this

play04:09

Neither we are saving it in memory

play04:12

Nothing

play04:13

Now we will move ahead

play04:13

Now we will again search ahead of this

play04:15

In this way it is moving

play04:18

But over here what are the problems

play04:20

In hill climbing

play04:21

Problem is local maximum

play04:24

What is the meaning of local maximum

play04:26

As I told you right now

play04:28

That you are going blindfolded

play04:29

Because you have got knowledge only of local domain

play04:31

So if you are not blindfolded

play04:33

Then you will have knowledge of the whole domain

play04:36

You will come to know that this is a mountain, this is also one mountain

play04:39

On this mountain I have to climb

play04:41

You will be able to see the whole view

play04:41

But over here you are not able to see the complete view

play04:44

Because you are working on the local search

play04:46

So what is there in the local domain

play04:48

You are climbing blindfolded

play04:49

You're climbing the mountain

play04:51

What is your best move?

play04:53

The best move is: as long as your leg is moving in the upward direction

play04:57

That is your best move

play04:58

So on one point

play05:00

You will reach on a stable state

play05:02

So after reaching there neither your legs towards up nor it is towards down

play05:05

Then what you will feel that you have achieved the goal state

play05:10

I have reached at the maximum value

play05:12

So as you removed the blindfold from your eyes

play05:14

Then you will feel that wow I have achieved the maximum value

play05:17

But there is a problem

play05:20

What can be the problem

play05:21

That there is a another hill

play05:23

There is another hill whose height or value is more than this

play05:27

Which can be seen over here in the diagram

play05:29

But now you cannot achieve this

play05:33

Why? Because it works until

play05:35

Until you get a state better than the current state

play05:38

So you have already achieved maximum value over here

play05:41

So now we will not go towards minimum

play05:44

Because to get over there, first you will have to go towards the minimum

play05:47

You will have to go down

play05:47

So this will not go down

play05:50

So although if we want to solve travelling salesman problem

play05:53

Or we want to solve 8 queen problem

play05:55

It works good in some problems

play05:57

Because over here the space complexity is less

play06:00

We are not keeping all the states in the memory

play06:03

We are just keeping the best state in the memory

play06:06

We are removing the others

play06:07

So until you are getting the best

play06:09

Till then it is good

play06:10

That means this whole value depends on heuristic

play06:13

If you are getting good heuristic

play06:15

Then you keep on moving

play06:16

But over here local maximum means

play06:18

You achieved the maximum

play06:20

But is that globally maximum?

play06:22

No, that one is the global maximum

play06:25

So this will not be achieved

play06:27

So this is the concept of what

play06:29

You can say this is a problem called local maximum

play06:34

So over here also I have shown you this problem

play06:36

Although this problem is a theoretical part only

play06:40

What over here I am explaining you with an example that what is the meaning of local maximum

play06:44

You started from 5 state

play06:46

Your heuristic is 5

play06:47

What is the value in current state? 5

play06:49

So you need better value than this

play06:51

Obviously better value means

play06:54

You want to reach the goal with a lower heuristic, at a lower cost

play06:56

So which one will you select from 4,5 and 6 ?

play06:59

Obviously you will select 4

play07:01

So you removed this, you did not save this in the memory

play07:04

This is also not saved in the memory

play07:06

So my space complexity will be better

play07:08

Because we are going towards only one best move

play07:11

Now let's say you explored this ahead

play07:14

When you explored this state

play07:15

Now your current state is 4

play07:17

So now when on this current state you explored next state

play07:21

This is taking me at 5

play07:22

That means your heuristic has increased

play07:25

So what is my question telling over here

play07:26

Algorithm is telling if it is better than the current state

play07:29

Then it is a new current state

play07:30

So over here a new current state will not be formed

play07:33

Because heuristic value has increased

play07:35

So this will stop over here only

play07:38

That is called a local maximum problem

play07:42

Then plateau or which is also called as flat maximum

play07:46

Actually these words are based on hill climbing only

play07:48

What is the meaning of hill climbing

play07:50

When you are climbing

play07:51

Then you got a flat point

play07:53

Flat point means

play07:55

Where your neighbors

play07:57

That means all the neighbors that you have

play07:59

Let's say value of all is same

play08:03

5 5 5

play08:04

Now at this point whom will you select?

play08:07

When all are giving you the same heuristic value

play08:10

Then which one will you select from them?

play08:12

So in this case too, it cannot decide how to move ahead

play08:15

So it stops over here

play08:18

So this is also one of the problems

play08:19

The plateau, or flat maximum, problem in hill climbing

play08:24

And 3rd is ridge

play08:25

What is the meaning of Ridge

play08:27

When you have multiple values

play08:28

See over here you will be able to see clearly in the 3D diagram

play08:31

Let's say you are moving in one direction

play08:34

You are climbing over here

play08:37

You keep on climbing

play08:38

Until you get the best move

play08:40

And what is the meaning of best move

play08:41

Until your leg is towards front

play08:44

It is towards up

play08:45

That is the best

play08:46

That means you are climbing the mountain

play08:48

You reached at 1 point

play08:51

So you remove your blindfold over here

play08:54

That means I have reached at the maximum value

play08:56

Now obviously after this minimum value will start

play08:58

So you will not go there

play09:00

But what is the meaning of Ridge

play09:01

That between A and B, B is above A

play09:06

Now this B is above A

play09:08

But now after reaching over here

play09:10

It will not change the direction

play09:13

Since it is not changing direction, it keeps moving in one direction only

play09:16

So although B point is above it

play09:19

But you cannot reach on that

play09:22

Because it is only moving in one direction

play09:26

Although it is a special case of local maximum only

play09:29

So this is all about the hill climbing problem

play09:32

This is the algorithm and over here I have cleared with an example

play09:35

What is the problem of local maximum

play09:37

And how do we actually evaluate it

play09:39

So when we discussed beam search

play09:42

In beam search, the value of the beam width β

play09:44

It can be multiple like 2, 3, 4, 5

play09:47

But in hill climbing

play09:49

The value of beta is one

play09:50

You just have to go on one best move

play09:53

Forget others

play09:54

You simply skip others

play09:56

So let's say you stop at one point

play09:59

Now we know that it is not taking me towards the best

play10:02

Then I should go back

play10:04

But you cannot go back

play10:05

Because back tracking is not allowed over here

play10:08

So this is all about the hill climbing algorithm

play10:11

Thank you


Related Tags
Hill Climbing · Local Search · Greedy Algorithm · Optimization · AI Algorithms · Local Maximum · Heuristic Search · Space Complexity · Algorithm Tutorial · Exam Prep