Preemptive and Non-Preemptive Scheduling
Summary
TL;DR: This lecture covers preemptive and non-preemptive CPU scheduling, explaining that they are not algorithms but two ways in which CPU scheduling takes place. It clarifies the roles of the CPU scheduler and the dispatcher, introduces the concept of dispatch latency, and outlines four key circumstances under which CPU scheduling decisions occur. The lecture distinguishes non-preemptive (cooperative) scheduling, where the CPU is reassigned only when a process terminates or enters a waiting state, from preemptive scheduling, where the CPU can be taken away from a process even while it is running, and highlights why both concepts matter in operating systems.
Takeaways
- 😀 Pre-emptive and non-pre-emptive scheduling are two methods of CPU scheduling, not specific algorithms.
- 🔄 CPU scheduling involves decisions made by the CPU scheduler and dispatcher, where the scheduler selects a process and the dispatcher allocates the CPU to it.
- 🕒 Dispatch latency is the time taken by the dispatcher to switch the CPU from one process to another, and minimizing it is crucial for efficient computation.
- 🔄 CPU scheduling decisions can occur in four circumstances: a process switching to a waiting state, a process switching to a ready state due to an interrupt, a process switching from a waiting to a ready state after IO completion, and a process terminating.
- ✅ In situations 1 and 4, there is no choice in CPU scheduling; a new process from the ready queue must be selected for execution.
- ❓ In situations 2 and 3, there is a choice in CPU scheduling; the CPU can be given to the same process that was interrupted or waiting, or it can be given to a different process.
- 🚫 Non-preemptive scheduling occurs when the CPU is only reassigned under circumstances 1 and 4, meaning the CPU is not forcibly taken away from a running process.
- 🛑 Pre-emptive scheduling allows the CPU to be taken away from a running process even before it completes execution, under circumstances 2 and 3.
- 🔄 The choice between pre-emptive and non-preemptive scheduling depends on the scenario; both have their advantages and are used in different operating systems.
- 🔄 Pre-emptive scheduling can be necessary for high-priority processes, but it may lead to issues like inconsistent data when dealing with shared memory.
- 📚 Understanding pre-emptive and non-preemptive scheduling is fundamental to studying CPU scheduling algorithms and related topics in operating systems.
Q & A
What are the two main types of CPU scheduling discussed in the lecture?
-The two main types of CPU scheduling discussed are pre-emptive and non-pre-emptive scheduling.
What is the role of the CPU scheduler in the context of CPU scheduling?
-The CPU scheduler is responsible for selecting a process from the ready queue to be executed when the CPU becomes idle.
What is the dispatcher in the context of CPU scheduling?
-The dispatcher is the module that gives control of the CPU to the process selected by the CPU scheduler.
What is dispatch latency in CPU scheduling?
-Dispatch latency is the time taken by the dispatcher to stop one process and start another, which is the time it takes to switch the CPU between two processes.
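The dispatcher's hand-off can be sketched as a toy Python simulation (all names here are hypothetical, not from the lecture): each "process" is a generator that yields control back to the dispatcher, and we time each hand-off as a stand-in for dispatch latency. Note that the timing below also includes the step's own work, so it is only illustrative of the idea, not a real measurement of kernel dispatch latency.

```python
import time

def process(name, steps):
    """A toy 'process' as a generator; each yield returns control to the dispatcher."""
    for i in range(steps):
        yield f"{name}: step {i}"

def dispatch(procs):
    """A toy dispatcher: round-robin over processes, timing each hand-off."""
    latencies = []
    while procs:
        for p in list(procs):
            start = time.perf_counter()  # dispatcher begins the switch
            try:
                next(p)                  # hand the CPU to the process
            except StopIteration:
                procs.remove(p)          # process terminated
            latencies.append(time.perf_counter() - start)
    return latencies

lat = dispatch([process("P1", 3), process("P2", 3)])
print(f"hand-offs: {len(lat)}, max time: {max(lat):.6f}s")
```

Keeping each hand-off cheap is exactly the "minimal dispatch latency" requirement from the lecture.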
Under which circumstances can CPU scheduling decisions take place according to the lecture?
-CPU scheduling decisions can take place under four circumstances: 1) when a process switches from running to waiting state, 2) when a process switches from running to ready state due to an interrupt, 3) when a process switches from waiting to ready state after IO completion, and 4) when a process terminates.
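The four circumstances can be written down as a small lookup table. The sketch below (hypothetical names, not from the lecture) records, for each state transition, which circumstance it is and whether the scheduler has a choice:

```python
from enum import Enum

class State(Enum):
    RUNNING = 1
    READY = 2
    WAITING = 3
    TERMINATED = 4

# The four circumstances as (from_state, to_state) transitions.
# 'choice' is False for circumstances 1 and 4 (a new process must be
# picked from the ready queue) and True for circumstances 2 and 3.
DECISIONS = {
    (State.RUNNING, State.WAITING):    {"circumstance": 1, "choice": False},
    (State.RUNNING, State.READY):      {"circumstance": 2, "choice": True},
    (State.WAITING, State.READY):      {"circumstance": 3, "choice": True},
    (State.RUNNING, State.TERMINATED): {"circumstance": 4, "choice": False},
}

def scheduling_decision(old, new):
    """Return decision info if this transition triggers CPU scheduling, else None."""
    return DECISIONS.get((old, new))

print(scheduling_decision(State.RUNNING, State.READY))  # circumstance 2: a choice exists
print(scheduling_decision(State.READY, State.RUNNING))  # not a scheduling point: None
```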
What is the difference between pre-emptive and non-pre-emptive scheduling in terms of when the CPU can be taken away from a process?
-In non-pre-emptive scheduling, the CPU is not taken away from a process until it has either completed its execution or gone to a waiting state. In pre-emptive scheduling, the CPU can be taken away from a process even while it is running and given to another process.
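The difference can be seen in a toy simulation (hypothetical burst times, not from the lecture): under non-preemptive scheduling a short job stuck behind a long one must wait for it to finish, while under a preemptive round-robin scheme the CPU is taken away after each time quantum:

```python
from collections import deque

def non_preemptive(bursts):
    """Run-to-completion: each process keeps the CPU until it finishes."""
    t, completion = 0, {}
    for name, burst in bursts:
        t += burst
        completion[name] = t
    return completion

def preemptive_rr(bursts, quantum=2):
    """Round-robin: the CPU is taken away after each time quantum (preemption)."""
    t, completion = 0, {}
    q = deque(bursts)
    while q:
        name, rem = q.popleft()
        run = min(quantum, rem)
        t += run
        if rem - run:
            q.append((name, rem - run))  # preempted: back to the ready queue
        else:
            completion[name] = t         # terminated
    return completion

jobs = [("P1", 5), ("P2", 1)]
print(non_preemptive(jobs))  # P2 waits behind the long P1, finishing at t=6
print(preemptive_rr(jobs))   # P2 finishes at t=3 thanks to preemption
```

With bursts P1=5 and P2=1, P2 completes at time 6 non-preemptively but at time 3 under round-robin, at the cost of extra switches for P1.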
Why is it said that non-pre-emptive scheduling is also known as cooperative scheduling?
-Non-pre-emptive scheduling is called cooperative because a process is allowed to use the CPU until its execution is complete without being disturbed, indicating cooperation between processes.
What are the potential issues with pre-emptive scheduling when it comes to shared memory?
-One potential issue with pre-emptive scheduling in shared memory scenarios is that if a process is preempted while writing to shared memory, another process might read inconsistent data from that memory region.
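This hazard can be demonstrated with threads in Python (a sketch with hypothetical names; real kernels use other mechanisms, such as disabling preemption or kernel locks): an unprotected read-modify-write can be interrupted between the read and the write, losing updates, while a lock makes the critical section atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_add(n):
    """Read-modify-write with no protection: a thread switch between the
    read and the write can make the other thread's updates disappear."""
    global counter
    for _ in range(n):
        tmp = counter      # read shared memory
        counter = tmp + 1  # write back (may overwrite a concurrent update)

def safe_add(n):
    """The same update inside a lock: the critical section cannot be interleaved."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000):
    """Run two threads of the given worker and return the final counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_add))  # may be anywhere up to 200000
print("with lock:   ", run(safe_add))    # always 200000
```

The unprotected version can lose updates (the exact count varies from run to run); the locked version always reaches 200000. This is the "inconsistent data" problem the lecture warns about.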
Why can't we say that one type of scheduling is universally better than the other?
-We cannot say one type of scheduling is universally better than the other because different scenarios require different approaches. Pre-emptive scheduling may be necessary for high-priority processes, while non-pre-emptive scheduling can be beneficial when process interruption is not desirable.
What is the importance of understanding pre-emptive and non-pre-emptive scheduling in the study of operating systems?
-Understanding pre-emptive and non-pre-emptive scheduling is important as it helps in the study of CPU scheduling algorithms and other related topics in operating systems, allowing for better system design and process management.
How does the lecture define the running state and waiting state of a process in the context of CPU scheduling?
-The running state is when a process is currently using the CPU and undergoing execution. The waiting state is when a process is not making use of the CPU because it is waiting for IO operations or other events to complete.
Outlines
🤖 Introduction to CPU Scheduling Concepts
This paragraph introduces the topic of CPU scheduling, focusing on the concepts of preemptive and non-preemptive scheduling. It clarifies that these are not algorithms but rather methods by which CPU scheduling occurs. The paragraph also introduces two key terms: CPU scheduler, which selects a process from the ready queue for execution, and dispatcher, which hands over control of the CPU to the selected process. The importance of minimal dispatch latency for efficient computation is highlighted.
🔄 Understanding CPU Scheduling Decisions
The second paragraph delves into the four circumstances under which CPU scheduling decisions may occur: a process switching from running to waiting state, a process switching from running to ready state due to an interrupt, a process moving from waiting to ready state upon IO completion, and a process terminating. It emphasizes that there is no choice in scheduling decisions for the first and fourth circumstances, as the CPU must be assigned to another process, while there is a choice in the second and third circumstances, allowing for the CPU to be potentially given to a different process.
🛑 Exploring Preemptive and Non-Preemptive Scheduling
This paragraph explains the difference between preemptive and non-preemptive scheduling. In non-preemptive scheduling, the CPU is only reassigned when a process has terminated or moved to a waiting state, meaning a process holds the CPU until it completes execution or waits. Conversely, preemptive scheduling allows the CPU to be reassigned to another process even if the current process has not completed execution, as seen when a process moves from running to ready state due to an interrupt or IO completion. The paragraph concludes by stating that both scheduling methods have their uses depending on the scenario.
🔄 Preemptive Scheduling: Advantages and Drawbacks
The final paragraph discusses the implications of preemptive scheduling, such as the potential for higher priority processes to preempt lower priority ones, and the issues that can arise, such as inconsistent data access in shared memory scenarios. It concludes by emphasizing that neither preemptive nor non-preemptive scheduling is universally superior; the choice depends on the specific requirements of the operating system and the situation at hand.
Keywords
💡Pre-emptive Scheduling
💡Non-Preemptive Scheduling
💡CPU Scheduler
💡Dispatcher
💡Dispatch Latency
💡Running State
💡Waiting State
💡Ready State
💡Process Termination
💡CPU Scheduling Decisions
Highlights
Pre-emptive and non-pre-emptive scheduling are two different methods of CPU scheduling.
Pre-emptive and non-pre-emptive are not CPU scheduling algorithms but represent ways in which CPU scheduling occurs.
A CPU scheduler selects a process from the ready queue to be executed when the CPU is idle.
The dispatcher is responsible for giving control of the CPU to the process selected by the CPU scheduler.
Dispatch latency is the time taken by the dispatcher to switch the CPU between two processes.
Efficient computation requires minimal dispatch latency.
CPU scheduling decisions may occur under four circumstances: process switch to waiting state, process switch from running to ready state due to interrupt, process switch from waiting to ready state after IO completion, and process termination.
In circumstances 1 & 4, there is no choice in CPU scheduling as a new process must be selected from the ready queue.
Circumstances 2 & 3 allow for a choice in CPU scheduling as the CPU can be given to the same or a different process.
Non-preemptive or cooperative scheduling occurs when scheduling takes place only under circumstances 1 & 4.
Pre-emptive scheduling allows the CPU to be taken away from a process even before its execution is complete or it enters a waiting state.
Pre-emptive scheduling is necessary for scenarios where high-priority processes need immediate CPU access.
Pre-emptive scheduling can lead to issues with shared memory consistency if not managed properly.
The choice between pre-emptive and non-preemptive scheduling depends on the specific requirements and scenarios of the system.
Both pre-emptive and non-preemptive scheduling have their advantages and are used in different operating systems.
Understanding the concepts of pre-emptive and non-preemptive scheduling is crucial for studying CPU scheduling algorithms and related topics.
Transcripts
in this lecture we are going to study
about pre-emptive and non pre-emptive
scheduling so pre-emptive and non
pre-emptive scheduling are two different
ways in which scheduling can take place
now remember that pre-emptive and non
pre-emptive scheduling are not CPU
scheduling algorithms but they are two
ways in which CPU scheduling takes place
so in this lecture we will try to
understand what this means and we will
try to get a good idea about preemptive
and non pre-emptive scheduling but
before we move ahead into this topic
there are two other important terms that
we need to first know about so let us
see what they are so the first one is
CPU scheduler so we have been talking
about CPU scheduling from the previous
few lectures and we need to understand
what a CPU scheduler is so by now we
must already have a brief idea of what
scheduling actually is so what is a CPU
scheduler so it says here when the CPU
becomes idle the operating system must
select one of the processes in the ready
queue to be executed the selection
process is carried out by the short-term
scheduler or CPU scheduler the scheduler
selects a process from the processes in
the memory that are ready to execute and
allocates the CPU to that process so
from the name itself we can understand
that CPU scheduler is the one who is
going to select the process that will
get the CPU for its execution
so we know that scheduling means
assigning the CPU to different processes
for their execution so there may be many
processes that are ready to be executed
and are waiting to just get the CPU
among those ready processes the CPU
scheduler is the one who will select
which process will get the CPU in order
to do its execution so what it will do is
select a process from the
processes in the memory that are ready
to execute and it will allocate the CPU
to that process so that is a function of
the CPU scheduler so similarly there is
another thing that we need to know about
and let's see what that is so the second
thing that we need to know about is the
dispatcher now
what is this dispatcher and what does it
do the dispatcher is the module that
gives control of the CPU to the process
selected by the short-term scheduler so
what the dispatcher does is that it
will give the control of the CPU to the
process that was selected by the CPU
scheduler so we see that there are
a number of different processes waiting in
memory to be executed they are ready to
be executed that means they are ready to
get the CPU so what the cpu scheduler
does is it will select one of the processes
and say that yes you are the one that is
going to get the CPU to begin your
execution and what does the dispatcher do
dispatcher is a module that actually
gives control of the CPU to that process
that was selected by the CPU scheduler
so we need to understand that this
dispatcher has to be very quick because
there is a lot of process switching
taking place so we know that
there are different processes waiting
for the CPU and when one process is
executing and when it goes into a
waiting State we need to give the CPU to
another process so this happens very
frequently and the switching between
these processes must happen very quickly
so the dispatcher has to be very quick
and so the time it takes for the
dispatcher to stop one process and start
another running is known as the dispatch
latency so here is another term that we
need to know about dispatch latency
that is the time that the dispatcher
takes for stopping one process and
starting another one that means the time
it takes for the dispatcher to switch
the CPU between two processes that is
what we mean by dispatch latency so we
can know that in order to have an
efficient computation we need to have
minimal dispatch latency all right now
since we have learned about these two
terms CPU scheduler and dispatcher now
let us go ahead and look into
pre-emptive and non pre-emptive
scheduling so in order to understand
preemptive and non pre-emptive
scheduling we need to know that CPU
scheduling decisions may take place
under the following four circumstances
so we will see what are the
circumstances and we will see what do we
mean by this CPU scheduling decisions
that we are talking about so let us see
what are the four circumstances that we
have and let us see how the CPU
scheduling takes place in these four
circumstances and from that let us see
how we can understand the concept of
pre-emptive and non pre-emptive
scheduling so coming to the first
circumstance it is when a process
switches from the running state to the
waiting state so CPU scheduling decision
may take place when a process switches
from a running state to the waiting
State so what do we mean by running
state and waiting state so a process has
already begun its execution that means a
process is already in hold of the CPU at
that time we say that the process is in
a running state and then what is a
waiting State we have already discussed
in the previous lecture when the process
is waiting for some IO operations or
something like that to be complete at
that time we said that the process is in
waiting State so at that time it is not
making use of the CPU so you need to
have a good understanding about the
different states in which a process can
be so in this lecture series I have
already done a lecture about the
different process States where I have
explained in detail about the states in
which a process can be and how a process
switches from one state to another so if
you have not watched that lecture I
highly recommend you to watch that
lecture so that you will understand the
different states that we are talking
about in this lecture here so I will
leave a link in the description for that
lecture if you have not watched so that
you can watch it so coming back we see
that the first circumstance is when a
process switches from running state to
waiting state so at this time a CPU
scheduling decision may take place all
right let's go to the next point so the
next circumstance is when a process
switches from the running state to the
ready State for example when an
interrupt occurs so we already know what
a running State is when a process is
using the CPU and is undergoing
execution it is in running state and for
example when an interrupt occurs at that
time the process will have to halt its
execution
so it has not completed its execution
but due to some interrupt it has to
halt its execution now
when it is halting its execution
remember that the process execution has
not yet completed it still needs the CPU
to complete its execution so after the
halt is complete the process again goes
back to the ready state that means it is
again ready to get the CPU to continue
its execution so that is what we mean by
the ready state when a process is
switching from a running state to the
ready state at that time also CPU
scheduling decisions may take place so
moving ahead we are going to understand
what do we mean by these decisions for
now let us just focus on these four
circumstances so let's come to the third
one
so the third circumstance is when a
process switches from the waiting state
to the ready state for example at the
completion of a input-output request so
here we already know what a waiting
state is when does a process go into a
waiting State a process goes into a
waiting state when it had some I/O
request to be complete so we have
already discussed that when we studied
about the CPU burst and i/o burst in the
previous lecture so when a process is
waiting for some IO to be complete it is
in a waiting State and once the IO
request has been completed it has to
continue its execution so what will
happen the process will again go to the
ready state that means it is ready to
continue its execution and once the CPU
to continue its execution so this is
another circumstance and even at this
circumstance a CPU scheduling decision
may take place now let's go to the final
point so the final point is when a
process terminates so this is very
straightforward and easy to understand
when a process has terminated then the
CPU has to be assigned to some other
process so a CPU scheduling can also
take place at this circumstance now let
us see what is a CPU scheduling decision
that we were talking about in these four
circumstances so here it says for
situations one and four there is no
choice in terms of scheduling a new
process if one exists in the ready queue
must be selected for execution
however there is a choice for situations
2 & 3 so let us try to understand what
this means it says as for situations 1 &
4 there is no choice in terms of
scheduling and why is that let us see
let us closely observe situations 1 & 4
so in situation number 1 the process is
going from a running state to a waiting
State that means it was running and then
it came to the waiting state where it is
waiting for some IO operations or
something like that so in the previous
lectures when we started studying about
scheduling itself we said that why we
are having the scheduling is when a
process is waiting for some IO request
or something like that to be complete we
don't want the CPU to remain idle we
want the CPU to be utilized by some other
process at that time so that we can
maximize our CPU utilization so that is
why when a process is in waiting state
we have to make sure that another
process gets the CPU and can utilize the
CPU so in this point number one when a
process switches from running state to
the waiting state we have to assign the
CPU to some other process for sure so we
have no choice here we have to assign
the CPU to some other process because
the current process is waiting all right
so we see that in point number one there
is no choice and in point number four
also it is very easy to understand that
there is no choice because when a
process terminates then the CPU is freed
by that process and then the CPU has to
be assigned to some other process so
this is very easy to understand so in
point number four also there is no
choice
so in situations one and four there are
no choices in terms of scheduling so
what should we do a new process if one
exists in the ready queue must be
selected for execution so in the ready
queue if there are processes available
we have to select one and they must be
given for execution but however there is
a choice for situations two and three so
it says that there are choices for
situations two and three so let us
closely observe situations two and three
and try to understand what is the choice
that it is talking about
let's see situation number two when a
process switches from the running state
to the ready state for example when an
interrupt occurs so if you closely
observe at this point we see that the
process did not go to a waiting State
nor has it completed its execution that
means it is not in a waiting State and
also it has not yet terminated so when
does this happen this usually happens
when an interrupt occurs so a process
was running and it was in hold of the
CPU and then while it was running some
interrupt occurred and due to that it
had to halt its execution so it did not
go into a waiting State because it is
not waiting for any i/o operations for
itself but what will happen it will halt
its execution and go to the ready state
because it says that whenever the CPU is
available I am ready to execute so it
goes from the running State to the ready
State now here what is the choice that
we have so here the choice that we have
is that once a process goes from a
running state to the ready state then we
see that at this moment that process has
lost control over the CPU so in the next
phase who should get the CPU whether
this same process that was executing
will be given the CPU or some other
process will be given the CPU so this is
the choice that we are talking about
here even though the process has not
completed its execution since it went to
the ready state due to some interrupts
the moment the CPU is available again
will the CPU be given to this same
process or will it be given to some
other process so that is the choice that
we have in this situation number two now
let's look at situation number three so
here it says when a process switches
from the waiting state to the ready
State for example at the completion of
an i/o so here what happens the process
was in waiting state that means it was
already running and it had some IO
operations to complete so it went to the
waiting State now when the i/o operation
has finally completed then the process
comes back from the waiting state to the
ready state that means now it is ready
to be executed so even at this
time we have a choice that we can make
because the cpu now is available and
whether to give the cpu to this process
that we were talking about or should we
give the cpu to some other process so
this is also a choice that we can have
in this situation number three so with
that I hope you understood the choices
that we can make in terms of CPU
scheduling decisions according to the
circumstances that we have now since we
have understood all this we are in a
position to define and understand the
meaning of pre-emptive and non
pre-emptive scheduling so when
scheduling takes place only under
circumstance 1 & 4 we say that the
scheduling scheme is non pre-emptive or
cooperative otherwise it is pre-emptive
so let us try to break this sentence
down and try to understand what it means
so when scheduling takes place only
under circumstance 1 & 4 it is said to
be a non pre-emptive or cooperative
scheduling and why is that that is
because the CPU is assigned to some
other process only when a process has
either terminated or it has gone to a
waiting State so when a process is in
the running state the CPU is never taken
away from it so when a process is
running it will not be disturbed it will
be allowed to use the CPU until its
execution is complete it is allowed to
hold the CPU until and unless its
execution is complete so the CPU will
only be taken away from it once it has
terminated or if it has gone to a
waiting State only in these two
scenarios the CPU will be taken away
from it that is why it is called non
pre-emptive or cooperative but in
situations 2 & 3 we say that it is
preemptive and why is that
that is because we see that here the
processes were still running and were
ready to be executed but instead of
giving the CPU to that same process the
CPU can also be given to some other
process so we see that the CPU can be
taken away from a process even while it
is running so here we see that it was in
a running State in number 2 it was not
in a waiting State nor has it completed
but it was in a running state
and it went to the ready state and why
it was because of some interrupts so the
process was still running and then it
went to the ready state and at this
point the CPU could be given to some
other process so we call it a
pre-emptive scheduling because before it
has completed the CPU is taken away from
it and even in point number three we see
that it was in a waiting State and then
it went to the ready state that means it
was ready to be executed and keep in
mind that this was already a running
process which came to a waiting State
then to the ready state here also we can
give the CPU to some other process
instead of giving it to this same
process so we see that the CPU is taken
away from it before its execution was
completed so here also it is a
pre-emptive scheduling so we see that in
pre-emptive scheduling the basic idea is
that the CPU can be taken away from a
process even before it has completed its
execution or even before it is going
into a waiting State so that is what we
mean by pre-emptive scheduling and in
non pre-emptive scheduling the CPU will
never be taken away from a process until
and unless it has either completed its
execution or it has gone to a waiting
State so that is what we mean by non
pre-emptive or cooperative scheduling so
if we compare these two types of
scheduling we will need these two types
of scheduling for different scenarios we
cannot say one is better than the other
we may need both of them according to
the different scenarios that we have and
different operating systems follow
different kind of scheduling it may be
pre-emptive or non pre-emptive we may
think that non pre-emptive is a good way
because a process will not be disturbed
until and unless it is complete or until
and unless it goes to a waiting State
but there may be scenarios where when a
process is being executed there is a
very high priority process that has to
be executed so at that time we may need
to stop the execution of the current one
and give the CPU to that process of
higher priority so at that time
pre-emptive scheduling will be required
but also pre-emptive scheduling it comes
with a cost now why do I say that
that is because there are different
scenarios and one example that I will
give is about shared memory so let's say
that there is a shared region of memory
to which
different processes have access and
different processes can write or read
from that shared region of memory so
let's say that we are following this
pre-emptive scheduling and one process
was writing something into that shared
region of memory so another process came
and then the first process had to be
preempted that means the first process
was halted and then the CPU was given to
the second process now the second
process when it reads from that memory
into which the first process was just
writing and did not complete what it was
writing so that second process is going
to read some inconsistent data because
the first process that was doing the
writing job did not complete what it was
doing so those are some of the problems
that we can face but as I said we cannot
say that one is better than the other
we may need both of these types of
scheduling depending upon the situations
that we have so with that I hope the
concept of pre-emptive and non
pre-emptive scheduling is clear to you
so these two concepts are very important
when we study about CPU scheduling
algorithms or any other topic related to
scheduling so I hope this was clear and
helpful to you thank you for watching
and see you in the next one