Preemptive and Non-Preemptive Scheduling

Neso Academy
25 Aug 2019 · 18:56

Summary

TL;DR: This lecture covers preemptive and non-preemptive CPU scheduling, explaining that they are not algorithms but ways in which CPU scheduling takes place. It clarifies the roles of the CPU scheduler and dispatcher, introduces the concept of dispatch latency, and outlines the four circumstances under which CPU scheduling decisions occur. The lecture distinguishes non-preemptive (cooperative) scheduling, where the CPU is reassigned only upon process termination or a switch to the waiting state, from preemptive scheduling, where the CPU can be taken away from a process that is still runnable, and highlights the importance of these concepts in operating systems.

Takeaways

  • 😀 Preemptive and non-preemptive scheduling are two methods of CPU scheduling, not specific algorithms.
  • 🔄 CPU scheduling involves decisions made by the CPU scheduler and dispatcher: the scheduler selects a process and the dispatcher allocates the CPU to it.
  • 🕒 Dispatch latency is the time taken by the dispatcher to switch the CPU from one process to another, and minimizing it is crucial for efficient computation.
  • 🔄 CPU scheduling decisions can occur in four circumstances: a process switching from running to waiting, a process switching from running to ready due to an interrupt, a process switching from waiting to ready after I/O completion, and a process terminating.
  • ✅ In situations 1 and 4, there is no choice in CPU scheduling; a new process from the ready queue must be selected for execution.
  • ❓ In situations 2 and 3, there is a choice: the CPU can be given back to the same process that was interrupted or waiting, or to a different process.
  • 🚫 Non-preemptive scheduling occurs when the CPU is only reassigned under circumstances 1 and 4, meaning the CPU is never forcibly taken away from a running process.
  • 🛑 Preemptive scheduling allows the CPU to be taken away from a running process even before it completes execution, under circumstances 2 and 3.
  • 🔄 The choice between preemptive and non-preemptive scheduling depends on the scenario; both have their advantages and are used in different operating systems.
  • 🔄 Preemptive scheduling can be necessary for high-priority processes, but it may lead to issues like inconsistent data when dealing with shared memory.
  • 📚 Understanding preemptive and non-preemptive scheduling is fundamental to studying CPU scheduling algorithms and related topics in operating systems.
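The four decision points listed above can be sketched as a small state-transition table. The state names follow the lecture; the dictionary, the function, and its name are illustrative choices, not something from the video:

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# The four transitions under which a CPU scheduling decision may occur.
# True means the scheduler has a choice (circumstances 2 and 3); False
# means a new process simply must be picked from the ready queue (1 and 4).
DECISION_POINTS = {
    (State.RUNNING, State.WAITING): False,     # 1: process starts waiting on I/O
    (State.RUNNING, State.READY): True,        # 2: interrupted while running
    (State.WAITING, State.READY): True,        # 3: I/O completed
    (State.RUNNING, State.TERMINATED): False,  # 4: process finished
}

def has_choice(old, new):
    """Return True/False for the four decision points, None otherwise."""
    return DECISION_POINTS.get((old, new))
```

Under non-preemptive scheduling only the two `False` transitions ever trigger a switch; a preemptive scheduler may also reassign the CPU on the two `True` transitions.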

Q & A

  • What are the two main types of CPU scheduling discussed in the lecture?

    -The two main types of CPU scheduling discussed are preemptive and non-preemptive scheduling.

  • What is the role of the CPU scheduler in the context of CPU scheduling?

    -The CPU scheduler is responsible for selecting a process from the ready queue to be executed when the CPU becomes idle.

  • What is the dispatcher in the context of CPU scheduling?

    -The dispatcher is the module that gives control of the CPU to the process selected by the CPU scheduler.

  • What is dispatch latency in CPU scheduling?

    -Dispatch latency is the time taken by the dispatcher to stop one process and start another, which is the time it takes to switch the CPU between two processes.

  • Under which circumstances can CPU scheduling decisions take place according to the lecture?

    -CPU scheduling decisions can take place under four circumstances: 1) when a process switches from the running to the waiting state, 2) when a process switches from the running to the ready state due to an interrupt, 3) when a process switches from the waiting to the ready state after I/O completion, and 4) when a process terminates.

  • What is the difference between preemptive and non-preemptive scheduling in terms of when the CPU can be taken away from a process?

    -In non-preemptive scheduling, the CPU is not taken away from a process until it has either completed its execution or gone to a waiting state. In preemptive scheduling, the CPU can be taken away from a process even while it is running and given to another process.
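One way to see this difference is a toy simulation. Round-robin with a time quantum stands in here for preemptive scheduling (the lecture does not name a specific algorithm), and the burst values used below are made up for illustration:

```python
from collections import deque

def run(bursts, preemptive, quantum=2):
    """Return the order in which processes finish.

    bursts maps pid -> CPU time needed. Non-preemptive: a process keeps
    the CPU until its burst is done. Preemptive (round-robin): a timer
    interrupt takes the CPU back after each quantum, and the interrupted
    process goes from running back to the ready queue.
    """
    ready = deque(bursts)            # FIFO ready queue
    remaining = dict(bursts)
    finished = []
    while ready:
        pid = ready.popleft()        # scheduler selects; dispatcher hands over the CPU
        if not preemptive:
            remaining[pid] = 0       # runs to completion, undisturbed
            finished.append(pid)
        else:
            remaining[pid] -= min(quantum, remaining[pid])
            if remaining[pid] == 0:
                finished.append(pid)
            else:
                ready.append(pid)    # running -> ready: execution not yet complete
    return finished
```

With bursts A=5, B=1, C=3, the non-preemptive completion order is A, B, C, while the preemptive round-robin order is B, C, A: the short job no longer waits behind the long one.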

  • Why is non-preemptive scheduling also known as cooperative scheduling?

    -Non-preemptive scheduling is called cooperative because a process is allowed to use the CPU until its execution completes without being disturbed, which relies on cooperation between processes.

  • What are the potential issues with pre-emptive scheduling when it comes to shared memory?

    -One potential issue with pre-emptive scheduling in shared memory scenarios is that if a process is preempted while writing to shared memory, another process might read inconsistent data from that memory region.
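The shared-memory hazard can be shown deterministically, without threads, by modelling the preemption point with a generator. The two-field record and its invariant are invented for illustration:

```python
# Shared record with the (invented) invariant x == y.
shared = {"x": 0, "y": 0}

def writer(value):
    """Update both fields; the yield marks where a preemption could occur."""
    shared["x"] = value
    yield                # preempted here: the invariant is temporarily broken
    shared["y"] = value

def consistent():
    return shared["x"] == shared["y"]

w = writer(42)
next(w)                          # writer runs, then is preempted mid-update
seen_consistent = consistent()   # a reader scheduled now sees x=42, y=0
next(w, None)                    # writer resumes and restores the invariant
```

With non-preemptive scheduling the writer would never lose the CPU between the two assignments; under preemptive scheduling, real code needs a lock or other synchronization around such an update.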

  • Why can't we say that one type of scheduling is universally better than the other?

    -We cannot say one type of scheduling is universally better because different scenarios require different approaches. Preemptive scheduling may be necessary for high-priority processes, while non-preemptive scheduling is beneficial when interrupting a process is undesirable.

  • What is the importance of understanding preemptive and non-preemptive scheduling in the study of operating systems?

    -Understanding preemptive and non-preemptive scheduling is important because it underpins the study of CPU scheduling algorithms and related topics in operating systems, enabling better system design and process management.

  • How does the lecture define the running state and waiting state of a process in the context of CPU scheduling?

    -The running state is when a process is currently using the CPU and undergoing execution. The waiting state is when a process is not using the CPU because it is waiting for I/O operations or other events to complete.

Outlines

00:00

🤖 Introduction to CPU Scheduling Concepts

This paragraph introduces the topic of CPU scheduling, focusing on the concepts of preemptive and non-preemptive scheduling. It clarifies that these are not algorithms but rather methods by which CPU scheduling occurs. The paragraph also introduces two key terms: CPU scheduler, which selects a process from the ready queue for execution, and dispatcher, which hands over control of the CPU to the selected process. The importance of minimal dispatch latency for efficient computation is highlighted.

05:01

🔄 Understanding CPU Scheduling Decisions

The second paragraph delves into the four circumstances under which CPU scheduling decisions may occur: a process switching from running to waiting state, a process switching from running to ready state due to an interrupt, a process moving from waiting to ready state upon IO completion, and a process terminating. It emphasizes that there is no choice in scheduling decisions for the first and fourth circumstances, as the CPU must be assigned to another process, while there is a choice in the second and third circumstances, allowing for the CPU to be potentially given to a different process.

10:01

🛑 Exploring Preemptive and Non-Preemptive Scheduling

This paragraph explains the difference between preemptive and non-preemptive scheduling. In non-preemptive scheduling, the CPU is only reassigned when a process has terminated or moved to a waiting state, meaning a process holds the CPU until it completes execution or waits. Conversely, preemptive scheduling allows the CPU to be reassigned to another process even if the current process has not completed execution, as seen when a process moves from running to ready state due to an interrupt or IO completion. The paragraph concludes by stating that both scheduling methods have their uses depending on the scenario.

15:01

🔄 Preemptive Scheduling: Advantages and Drawbacks

The final paragraph discusses the implications of preemptive scheduling, such as the potential for higher priority processes to preempt lower priority ones, and the issues that can arise, such as inconsistent data access in shared memory scenarios. It concludes by emphasizing that neither preemptive nor non-preemptive scheduling is universally superior; the choice depends on the specific requirements of the operating system and the situation at hand.


Keywords

💡 Preemptive Scheduling

Preemptive scheduling is a concept where the CPU can be forcibly taken away from a process before its execution is complete, typically to allocate it to another process that has higher priority or is ready to run. This is a key theme in the video, as it contrasts with non-preemptive scheduling and affects how processes are managed and prioritized in an operating system. For example, the script discusses that in preemptive scheduling, if an interrupt occurs or a higher-priority process becomes ready, the CPU can be reassigned.

💡 Non-Preemptive Scheduling

Non-preemptive scheduling, also known as cooperative scheduling, is an approach where once a process is given the CPU, it holds onto it until it either completes its execution or voluntarily releases the CPU by entering a waiting state. This concept is central to the video's exploration of CPU scheduling, as it highlights a method where processes are not interrupted, potentially leading to more consistent execution flows but at the risk of less responsive system behavior. The script mentions that non-preemptive scheduling is used when the CPU is only reassigned upon process termination or when a process enters a waiting state.

💡 CPU Scheduler

The CPU scheduler is a component of the operating system responsible for selecting which process in the ready queue will be given access to the CPU for execution. It is integral to the theme of the video, as it explains the decision-making process behind CPU allocation. The script illustrates this by describing the role of the CPU scheduler in choosing a process from those ready to execute and allocating the CPU to it, which is a fundamental aspect of both preemptive and non-preemptive scheduling.

💡 Dispatcher

The dispatcher is a module that hands over control of the CPU to the process selected by the CPU scheduler. It is a crucial element in the video's discussion on CPU scheduling, as it acts as the facilitator between the decision made by the CPU scheduler and the actual execution of the process. The script emphasizes the dispatcher's role in quickly transferring control to the selected process, highlighting the importance of minimal dispatch latency for efficient CPU utilization.
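A dispatcher's job can be sketched as saving one context and loading another. The PCB dictionaries and field names below are invented for illustration; a real dispatcher would also switch to user mode and jump to the saved program counter:

```python
def dispatch(cpu, current_pcb, next_pcb):
    """Hand the CPU to the process chosen by the CPU scheduler."""
    if current_pcb is not None:
        current_pcb["context"] = dict(cpu)   # save registers/PC of the old process
        current_pcb["state"] = "ready"       # it has not terminated, it just lost the CPU
    cpu.clear()
    cpu.update(next_pcb["context"])          # restore the new process's saved context
    next_pcb["state"] = "running"

cpu = {"pc": 123}                            # p1's live context currently on the CPU
p1 = {"pid": 1, "state": "running", "context": {}}
p2 = {"pid": 2, "state": "ready", "context": {"pc": 200}}
dispatch(cpu, p1, p2)                        # everything between the save and the
                                             # restore counts toward dispatch latency
```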

💡 Dispatch Latency

Dispatch latency refers to the time it takes for the dispatcher to stop one process and start another. This term is relevant to the video's focus on efficiency in CPU scheduling, as minimizing dispatch latency is essential for quick process switching and maximizing CPU usage. The script uses this term to discuss the importance of a swift transition between processes to avoid wasting CPU resources.
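Why minimal dispatch latency matters can be put in rough numbers; the burst and latency values below are invented for illustration:

```python
def useful_fraction(burst_ms, dispatch_latency_ms):
    """Fraction of each run-then-switch cycle spent on real work."""
    return burst_ms / (burst_ms + dispatch_latency_ms)

# With 10 ms CPU bursts: a 0.1 ms dispatcher wastes about 1% of the CPU,
# while a 5 ms dispatcher wastes a third of it.
fast = useful_fraction(10, 0.1)
slow = useful_fraction(10, 5)
```

This is also why frequent preemption (circumstance 2) only pays off if the dispatcher, and hence the context switch, is fast.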

💡 Running State

A running state in the context of processes refers to when a process is actively utilizing the CPU to execute its instructions. The video script uses this term to describe one of the key states a process can be in, particularly when discussing CPU scheduling decisions that occur when a process transitions from running to another state, such as waiting or ready.

💡 Waiting State

The waiting state is a process state where the process is not actively using the CPU because it is waiting for an event or resource, such as I/O operations, to complete. The video script explains this state in the context of CPU scheduling decisions, where a process may transition from running to waiting, prompting the CPU scheduler to select another process for execution.

💡 Ready State

A process is in the ready state when it is prepared and waiting to be assigned the CPU for execution. The video script discusses this state in relation to CPU scheduling, explaining that a process may transition to the ready state after an interrupt or completion of a waiting event, at which point the CPU scheduler may choose it or another process for execution.

💡 Process Termination

Process termination refers to the completion of a process's execution, freeing up the CPU for another process. The video script uses this term to describe a scenario where CPU scheduling decisions are made, as the termination of one process necessitates the selection of a new process to utilize the CPU.

💡 CPU Scheduling Decisions

CPU scheduling decisions are the choices made by the CPU scheduler regarding which process should be assigned the CPU next. The video script outlines four circumstances under which these decisions are made, which are central to understanding both preemptive and non-preemptive scheduling. The script explains that the nature of these decisions can vary depending on whether they occur during a process transition from running to waiting, running to ready, waiting to ready, or upon process termination.

Highlights

Preemptive and non-preemptive scheduling are two different methods of CPU scheduling.

Preemptive and non-preemptive are not CPU scheduling algorithms but ways in which CPU scheduling occurs.

A CPU scheduler selects a process from the ready queue to be executed when the CPU is idle.

The dispatcher is responsible for giving control of the CPU to the process selected by the CPU scheduler.

Dispatch latency is the time taken by the dispatcher to switch the CPU between two processes.

Efficient computation requires minimal dispatch latency.

CPU scheduling decisions may occur under four circumstances: a process switching from running to waiting, a process switching from running to ready due to an interrupt, a process switching from waiting to ready after I/O completion, and process termination.

In circumstances 1 & 4, there is no choice in CPU scheduling as a new process must be selected from the ready queue.

Circumstances 2 & 3 allow for a choice in CPU scheduling as the CPU can be given to the same or a different process.

Non-preemptive or cooperative scheduling occurs when scheduling takes place only under circumstances 1 & 4.

Preemptive scheduling allows the CPU to be taken away from a process even before its execution is complete or it enters a waiting state.

Preemptive scheduling is necessary for scenarios where high-priority processes need immediate CPU access.

Preemptive scheduling can lead to issues with shared-memory consistency if not managed properly.

The choice between preemptive and non-preemptive scheduling depends on the specific requirements and scenarios of the system.

Both preemptive and non-preemptive scheduling have their advantages and are used in different operating systems.

Understanding the concepts of preemptive and non-preemptive scheduling is crucial for studying CPU scheduling algorithms and related topics.

Transcripts

00:00

In this lecture we are going to study preemptive and non-preemptive scheduling. Preemptive and non-preemptive scheduling are two different ways in which scheduling can take place. Remember that preemptive and non-preemptive scheduling are not CPU scheduling algorithms; they are two ways in which CPU scheduling takes place. In this lecture we will try to understand what this means and get a good idea about preemptive and non-preemptive scheduling. But before we move ahead into this topic, there are two other important terms that we need to know about first.

The first one is the CPU scheduler. We have been talking about CPU scheduling for the previous few lectures, so by now we must already have a brief idea of what scheduling actually is. So what is a CPU scheduler? When the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler, or CPU scheduler. The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process. So, from the name itself, we can understand that the CPU scheduler is the one that selects the process that will get the CPU for its execution. We know that scheduling means assigning the CPU to different processes for their execution. There may be many processes that are ready to be executed and are just waiting to get the CPU; among those ready processes, the CPU scheduler is the one that selects which process gets the CPU in order to do its execution. That is the function of the CPU scheduler.

The second thing we need to know about is the dispatcher. What is this dispatcher and what does it do? The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. So what the dispatcher does is give control of the CPU to the process that was selected by the CPU scheduler. There are a number of different processes waiting in memory to be executed; they are ready to be executed, which means they are ready to get the CPU. The CPU scheduler selects one of the processes and says: yes, you are the one that is going to get the CPU to begin your execution. And the dispatcher is the module that actually gives control of the CPU to that process selected by the CPU scheduler.

We need to understand that this dispatcher has to be very quick, because a lot of process switching takes place. There are different processes waiting for the CPU, and when one process is executing and goes into a waiting state, we need to give the CPU to another process. This happens very frequently, and the switching between these processes must happen very quickly; so the dispatcher has to be very quick. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency. So here is another term we need to know about: dispatch latency, the time the dispatcher takes to stop one process and start another one; that is, the time it takes for the dispatcher to switch the CPU between two processes. So we know that in order to have efficient computation, we need to have minimal dispatch latency.
04:02

All right. Now that we have learned about these two terms, CPU scheduler and dispatcher, let us go ahead and look into preemptive and non-preemptive scheduling. In order to understand preemptive and non-preemptive scheduling, we need to know that CPU scheduling decisions may take place under the following four circumstances. We will see what the circumstances are, what we mean by these CPU scheduling decisions, and from that, how we can understand the concept of preemptive and non-preemptive scheduling.

Coming to the first circumstance: it is when a process switches from the running state to the waiting state. So what do we mean by running state and waiting state? When a process has already begun its execution, that means the process is already in hold of the CPU; at that time we say the process is in the running state. And what is the waiting state? We already discussed in a previous lecture that when a process is waiting for some I/O operation or something like that to complete, the process is in the waiting state; at that time it is not making use of the CPU. You need to have a good understanding of the different states a process can be in. In this lecture series I have already done a lecture about the different process states, where I explained in detail the states a process can be in and how a process switches from one state to another. If you have not watched that lecture, I highly recommend you watch it so that you will understand the different states we are talking about here; I will leave a link in the description. So, coming back: the first circumstance is when a process switches from the running state to the waiting state, and at this time a CPU scheduling decision may take place.

Let's go to the next point. The next circumstance is when a process switches from the running state to the ready state, for example when an interrupt occurs. We already know what the running state is: when a process is using the CPU and undergoing execution, it is in the running state. When an interrupt occurs, the process has to halt its execution. It has not completed its execution; due to the interrupt, it has to halt. Remember that the process's execution has not yet completed, so it still needs the CPU to complete its execution. After the halt is over, the process goes back to the ready state, which means it is again ready to get the CPU to continue its execution. That is what we mean by the ready state. When a process switches from the running state to the ready state, at that time also a CPU scheduling decision may take place. Moving ahead, we are going to understand what we mean by these decisions; for now, let us just focus on the four circumstances.

Let's come to the third one. The third circumstance is when a process switches from the waiting state to the ready state, for example at the completion of an input-output request. We already know what the waiting state is. When does a process go into the waiting state? A process goes into the waiting state when it has some I/O request to complete; we discussed that when we studied the CPU burst and I/O burst in the previous lecture. When a process is waiting for some I/O to complete, it is in the waiting state, and once the I/O request has completed, it has to continue its execution. So the process will again go to the ready state, which means it is ready to continue its execution once it gets the CPU. This is another circumstance, and even at this circumstance a CPU scheduling decision may take place.

Now let's go to the final point. The final point is when a process terminates. This is very straightforward and easy to understand: when a process has terminated, the CPU has to be assigned to some other process. So CPU scheduling can also take place at this circumstance.

Now let us see what the CPU scheduling decision is that we were talking about in these four circumstances. It says here: for situations 1 and 4, there is no choice in terms of scheduling; a new process, if one exists in the ready queue, must be selected for execution.
08:48

However, there is a choice for situations 2 and 3. So let us try to understand what this means. It says that for situations 1 and 4 there is no choice in terms of scheduling, and why is that? Let us closely observe situations 1 and 4. In situation number 1, the process is going from the running state to the waiting state: it was running, and then it came to the waiting state, where it is waiting for some I/O operation or something like that. In the previous lectures, when we started studying scheduling itself, we said that the reason we have scheduling is that when a process is waiting for some I/O request or something like that to complete, we don't want the CPU to remain idle; we want the CPU to be utilized by some other process at that time, so that we can maximize CPU utilization. That is why, when a process is in the waiting state, we have to make sure that another process gets the CPU and can utilize it. So in point number 1, when a process switches from the running state to the waiting state, we have to assign the CPU to some other process for sure. We have no choice here; we have to assign the CPU to some other process, because the current process is waiting. So we see that in point number 1 there is no choice. In point number 4 also, it is very easy to understand that there is no choice, because when a process terminates, the CPU is freed by that process, and then the CPU has to be assigned to some other process. So in point number 4 also there is no choice.

So in situations 1 and 4, there are no choices in terms of scheduling. What should we do? A new process, if one exists in the ready queue, must be selected for execution: in the ready queue, if there are processes available, we have to select one, and it must be given the CPU for execution. However, there is a choice for situations 2 and 3. So let us closely observe situations 2 and 3 and try to understand the choice it is talking about.
play10:55

let's see situation number two when a

play10:57

process switches from the running state

play11:00

to the ready state for example when an

play11:02

interrupt occurs so if you closely

play11:04

observe at this point we see that the

play11:07

process did not go to a waiting State

play11:09

nor has it completed its execution that

play11:12

means it is not in a waiting State and

play11:14

also it has not yet terminated so when

play11:17

does this happen this usually happens

play11:20

when an interrupt occurs so a process

play11:22

was running and it was in hold of the

play11:25

CPU and then while it was running some

play11:28

interrupt occurred and due to that it

play11:30

had to halt its execution so it did not

play11:33

go into a waiting State because it is

play11:35

not waiting for any i/o operations for

play11:38

itself but what will happen it will hold

play11:40

its execution and go to the ready state

play11:43

because it says that whenever the CPU is

play11:46

available I am ready to execute so it

play11:49

goes from the running State to the ready

play11:51

State now here what is the choice that

play11:53

we have so here the choice that we have

play11:55

is that once a process goes from a

play11:58

running state to the ready state then we

play12:01

see that at this moment that process has

play12:04

lost control over the CPU so in the next

play12:07

phase who should get the CPU whether

play12:09

this same process that was executing

play12:12

will be given the CPU or some other

play12:15

process will be given the CPU so this is

play12:18

the choice that we are talking about

play12:19

here even though the process has not

play12:22

completed its execution since it went to

play12:25

the ready state due to some interrupts

play12:26

the moment the CPU is available again

play12:28

will the CPU be given to this same

play12:31

process or will it be given to some

play12:33

other process so that is the choice that

play12:35

we have in this situation number two now

play12:38

Let's look at situation number three. It says: when a process switches from the waiting state to the ready state, for example at the completion of an I/O operation. What happens here is that the process was in the waiting state, which means it was already running and had some I/O operation to complete, so it went to the waiting state. When the I/O operation finally completes, the process comes back from the waiting state to the ready state; that means it is now ready to be executed again. Even at this point we have a choice to make, because the CPU is now available: should we give the CPU to this process, or should we give it to some other process? So this is also a choice we have, in situation number three. With that, I hope you understood the choices we can make in CPU scheduling decisions according to the circumstances we have.

Now that we have understood all this, we are in a position to define and understand the meaning of preemptive and non-preemptive scheduling. When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is non-preemptive or cooperative; otherwise, it is preemptive. Let us break this sentence down and try to understand what it means. When scheduling takes place only under circumstances 1 and 4, it is said to be non-preemptive or cooperative scheduling. Why is that? Because the CPU is assigned to some other process only when a process has either terminated or gone to the waiting state. While a process is in the running state, the CPU is never taken away from it: a running process will not be disturbed, and it is allowed to hold the CPU until its execution is complete. The CPU will only be taken away from it once it has terminated or gone to the waiting state; only in these two scenarios is the CPU taken away. That is why it is called non-preemptive or cooperative. But in situations 2 and 3, we say the scheduling is preemptive. Why is that?

That is because here the processes were still runnable and ready to be executed, but instead of giving the CPU back to that same process, the CPU can also be given to some other process. So we see that the CPU can be taken away from a process even while it is running. In circumstance number 2, the process was in the running state: it was not in the waiting state, nor had it completed, but it was running, and it went to the ready state because of some interrupt. The process was still running, then it went to the ready state, and at this point the CPU could be given to some other process. We call this preemptive scheduling because the CPU is taken away from the process before it has completed. Even in circumstance number 3, we see that the process was in the waiting state and then went to the ready state, which means it was ready to be executed; keep in mind that this was already a running process, which went to the waiting state and then to the ready state. Here also, we can give the CPU to some other process instead of giving it back to this same process. So the CPU is taken away from it before its execution is complete, and this too is preemptive scheduling. So in preemptive scheduling, the basic idea is that the CPU can be taken away from a process even before it has completed its execution or gone into the waiting state. In non-preemptive scheduling, the CPU will never be taken away from a process unless it has either completed its execution or gone to the waiting state. That is what we mean by non-preemptive or cooperative scheduling.
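The distinction above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not from the lecture): the names `Circumstance`, `must_schedule`, and `allows_preemption` are made up for the example. Circumstances 1 and 4 force a scheduling decision in every scheme, while circumstances 2 and 3 are the optional decision points that only a preemptive scheme acts on.

```python
# Hypothetical sketch: the four circumstances under which CPU scheduling
# decisions occur, and which of them force a decision vs. allow preemption.
from enum import Enum

class Circumstance(Enum):
    RUNNING_TO_WAITING = 1   # e.g. the process issues an I/O request
    RUNNING_TO_READY = 2     # e.g. an interrupt occurs
    WAITING_TO_READY = 3     # e.g. an I/O operation completes
    TERMINATES = 4           # the process finishes execution

def must_schedule(c: Circumstance) -> bool:
    """Circumstances 1 and 4: the running process gives up the CPU,
    so a new process *must* be picked from the ready queue."""
    return c in (Circumstance.RUNNING_TO_WAITING, Circumstance.TERMINATES)

def allows_preemption(c: Circumstance) -> bool:
    """Circumstances 2 and 3: the scheduler *may* hand the CPU to a
    different process; acting here is what makes a scheme preemptive."""
    return c in (Circumstance.RUNNING_TO_READY, Circumstance.WAITING_TO_READY)
```

A non-preemptive scheduler would only ever act when `must_schedule` is true; a preemptive one may also act when `allows_preemption` is true.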

If we compare these two types of scheduling, we find that we need both of them for different scenarios; we cannot say that one is better than the other. Different operating systems follow different kinds of scheduling, preemptive or non-preemptive. We may think non-preemptive is the good way, because a process will not be disturbed until it completes or goes to the waiting state. But there may be scenarios where, while one process is being executed, a very high priority process arrives that has to be executed. At that time we may need to stop the execution of the current process and give the CPU to the process of higher priority, so preemptive scheduling will be required.
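The priority scenario can be simulated in a few lines. This is an illustrative sketch (not from the lecture); the function `schedule` and its event format are assumptions made for the example. The ready queue is a min-heap, and a newly arrived process with a smaller priority number preempts whatever is currently running.

```python
# Illustrative sketch of priority-based preemption, simulated with a heap
# as the ready queue; a lower priority number means higher priority.
import heapq

def schedule(events):
    """events: list of (arrival_time, priority, name).
    Returns which process holds the CPU after each arrival."""
    ready = []          # min-heap of (priority, name)
    running = None      # (priority, name) currently on the CPU
    history = []
    for _, priority, name in sorted(events):
        heapq.heappush(ready, (priority, name))
        # Preemptive rule: put the running process back and always run
        # the highest-priority ready process, whichever that now is.
        if running is not None:
            heapq.heappush(ready, running)
        running = heapq.heappop(ready)
        history.append(running[1])
    return history
```

For example, `schedule([(0, 5, "A"), (1, 1, "B"), (2, 9, "C")])` returns `["A", "B", "B"]`: B preempts A on arrival because of its higher priority, while C's arrival changes nothing.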

But preemptive scheduling also comes with a cost. Why do I say that? Because there are different scenarios, and one example I will give is about shared memory. Let's say there is a shared region of memory to which two different processes have access, so that different processes can write to or read from that shared region. Suppose we are following preemptive scheduling, and one process was writing something into that shared region of memory. Then another process came along and the first process had to be preempted: the first process was halted, and the CPU was given to the second process. Now when the second process reads from that memory, into which the first process was writing without having finished, the second process is going to read some inconsistent data, because the first process doing the writing did not complete its job. Those are some of the problems we can face. But as I said, we cannot say that one is better than the other.
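The inconsistent-read problem can be made concrete with a small, deterministic sketch. This is a hypothetical example (not from the lecture): the shared record, the two write steps, and the read happening between them all stand in for a writer that is preempted mid-update.

```python
# Hypothetical sketch: a writer updates two fields of a shared record that
# must stay equal; if it is "preempted" between the two steps, a reader
# observes a mismatched, inconsistent pair.

shared = {"balance": 100, "balance_copy": 100}  # invariant: both equal

def write_step1(amount):
    shared["balance"] += amount                  # first half of the update

def write_step2():
    shared["balance_copy"] = shared["balance"]   # second half of the update

def read_is_consistent() -> bool:
    return shared["balance"] == shared["balance_copy"]

write_step1(50)                           # writer starts its update...
mid_update_view = read_is_consistent()    # ...reader runs here: sees False
write_step2()                             # writer resumes and finishes
final_view = read_is_consistent()         # now True again
```

With real concurrency the same effect is non-deterministic, which is exactly why shared data under preemptive scheduling needs synchronization (for example, locks) around the whole update.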

We may need both of these types of scheduling, depending on the situations we have. So with that, I hope the concepts of preemptive and non-preemptive scheduling are clear to you. These two concepts are very important when we study CPU scheduling algorithms or any other topic related to scheduling. I hope this was clear and helpful to you. Thank you for watching, and see you in the next one.



Related Tags
CPU Scheduling, Preemptive, Non-Preemptive, Operating System, Process States, Scheduling Algorithms, Process Management, System Efficiency, Interrupt Handling, Resource Allocation