Scheduling Algorithms - First Come First Served (FCFS)
Summary
TL;DR: This lecture introduces the first-come, first-served (FCFS) CPU scheduling algorithm, the simplest of its kind. FCFS operates on a queue basis, where processes are executed in the order they arrive, managed by a FIFO (first-in, first-out) queue. The algorithm is non-preemptive, meaning once a process starts, it holds the CPU until completion or I/O request. While easy to understand and implement, FCFS can lead to long average waiting times, especially with varying process burst times. The lecture also highlights its inefficiency in time-sharing systems, where equal CPU access is crucial, and concludes by comparing the waiting times of processes in different arrival orders.
Takeaways
- 📝 The script discusses the first come, first served (FCFS) CPU scheduling algorithm.
- 🧠 The FCFS algorithm is the simplest CPU scheduling method, where processes are allocated CPU time in the order they arrive.
- 📉 FCFS is implemented using a FIFO (First-In, First-Out) queue, ensuring a straightforward management system.
- 👥 The algorithm is non-preemptive, meaning once a process starts executing, it holds the CPU until it completes or waits for I/O.
- ⏱ The average waiting time for processes can vary greatly under FCFS, depending on the order of arrival and burst times.
- 📉 An example given in the script illustrates that changing the order of process arrival can significantly affect the average waiting time.
- 🚫 FCFS is not suitable for time-sharing systems where regular CPU access is required for each user or process.
- 🔍 The script emphasizes that the FCFS algorithm can lead to inefficiency, especially when large processes delay smaller ones in the queue.
- 🤔 The disadvantages of FCFS include long delays for short processes (the convoy effect), where smaller processes are kept waiting behind longer ones.
- 🔑 The script serves as an introduction to CPU scheduling algorithms, with more advanced algorithms to be discussed in subsequent lectures.
- 👋 The presenter concludes by hoping the explanation of FCFS was clear and invites viewers to the next lecture for further exploration of CPU scheduling.
Q & A
What is the First Come First Serve (FCFS) scheduling algorithm?
-The FCFS scheduling algorithm is the simplest CPU scheduling algorithm where processes are allocated the CPU in the order they arrive, ensuring that the first process to request the CPU is the first to be served.
How does the FCFS algorithm relate to the concept of a queue?
-The FCFS algorithm is implemented using a FIFO (First-In First-Out) queue, where the process that arrives first is the first to be executed, similar to how people wait in a queue for service.
What is the main disadvantage of the FCFS scheduling algorithm in terms of waiting time?
-The main disadvantage of the FCFS algorithm is that it can result in a long average waiting time for processes, especially if there are processes with significantly longer burst times.
How does the order of process arrival affect the average waiting time in the FCFS algorithm?
-The order of process arrival greatly affects the average waiting time in the FCFS algorithm. If shorter processes arrive first, the average waiting time is reduced, whereas if longer processes arrive first, it can significantly increase the waiting time for subsequent processes.
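The two orderings discussed here can be checked with a short sketch. This is an illustration, not code from the lecture; the helper name `fcfs_waiting_times` is my own.

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process when they run in list order (FCFS)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a process waits for every earlier burst
        elapsed += burst
    return waits

# Order P1, P2, P3 with bursts 24, 3, 3 ms: waits 0, 24, 27 -> average 17 ms
order_a = fcfs_waiting_times([24, 3, 3])
# Order P2, P3, P1 with bursts 3, 3, 24 ms: waits 0, 3, 6 -> average 3 ms
order_b = fcfs_waiting_times([3, 3, 24])
print(sum(order_a) / 3, sum(order_b) / 3)  # 17.0 3.0
```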
What is a Gantt chart in the context of the FCFS scheduling algorithm?
-A Gantt chart in the context of the FCFS scheduling algorithm is a graphical representation that shows the time each process spends waiting and the time it spends executing on the CPU.
Is the FCFS scheduling algorithm pre-emptive or non-pre-emptive?
-The FCFS scheduling algorithm is non-pre-emptive, meaning once a process has started executing, it cannot be interrupted by another process until it completes its execution or waits for I/O.
Why is the FCFS algorithm not suitable for time-sharing systems?
-The FCFS algorithm is not suitable for time-sharing systems because it does not guarantee that each user or process gets a share of the CPU at regular intervals, potentially leaving processes waiting for long stretches if a long-running process is ahead in the queue.
What is the significance of the burst time in the context of the FCFS scheduling algorithm?
-In the context of the FCFS scheduling algorithm, the burst time is the duration a process needs to hold the CPU to complete its execution. It significantly impacts the waiting time of other processes in the queue.
How is the process control block (PCB) used in the FCFS scheduling algorithm?
-In the FCFS scheduling algorithm, when a process enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process's PCB is then removed from the queue.
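The queue mechanics described here can be sketched with a double-ended queue. This is a minimal illustration; the tuples standing in for PCBs are an assumption, not real process control blocks.

```python
from collections import deque

ready_queue = deque()  # the FIFO ready queue

def admit(pcb):
    """A newly ready process has its PCB linked onto the tail."""
    ready_queue.append(pcb)

def dispatch():
    """When the CPU is free, the process at the head gets it and is
    removed from the queue."""
    return ready_queue.popleft()

admit(("P1", 24))   # (name, burst time in ms) stands in for a PCB
admit(("P2", 3))
running = dispatch()
print(running)         # ('P1', 24) -- the first arrival runs first
print(ready_queue[0])  # ('P2', 3) -- P2 is the new head
```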
What is the impact of having a large process with a long burst time in the FCFS queue?
-Having a large process with a long burst time in the FCFS queue can cause a bottleneck, as it delays the execution of all other processes waiting in the queue, potentially causing them to wait for an extended period.
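This bottleneck (often called the convoy effect) is easy to quantify with a sketch; the ten hypothetical processes below are my own example, not from the lecture.

```python
def avg_wait(bursts):
    """Average FCFS waiting time for processes running in list order."""
    total, elapsed = 0, 0
    for burst in bursts:
        total += elapsed      # this process waited for all earlier bursts
        elapsed += burst
    return total / len(bursts)

big_first = [100] + [1] * 9   # one 100 ms job ahead of nine 1 ms jobs
big_last = [1] * 9 + [100]    # the same jobs, long one at the tail
print(avg_wait(big_first), avg_wait(big_last))  # 93.6 4.5
```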
Outlines
📝 Introduction to First-Come, First-Served (FCFS) Scheduling
This paragraph introduces the concept of the first-come, first-served (FCFS) scheduling algorithm, which is the simplest CPU scheduling method. The FCFS algorithm operates on a first-in, first-out (FIFO) basis, meaning the process that requests the CPU first is the first to be served. The paragraph explains the straightforward nature of this scheduling method and its ease of implementation using a queue analogy, where individuals in a queue wait for their turn to perform an activity. It also discusses how a process control block (PCB) is linked to the queue and how the CPU is allocated to the process at the head of the queue once it becomes available.
🕒 Analyzing Efficiency of FCFS Scheduling with Examples
The second paragraph delves into the efficiency of the FCFS scheduling algorithm by examining the average waiting time for processes. It uses an example with three processes (P1, P2, and P3) to illustrate how the waiting times can be substantial, especially when processes have varying burst times. The paragraph presents a Gantt chart to show the execution and waiting times for each process under FCFS scheduling. It also contrasts the average waiting times when the same processes arrive in a different order, demonstrating how the order of arrival can significantly impact the efficiency of the scheduling.
🔄 Impact of Process Arrival Order on FCFS Scheduling Efficiency
This paragraph further explores how the order in which processes arrive affects the efficiency of the FCFS scheduling algorithm. It provides another example with the same processes but in a different arrival sequence (P2, P3, P1), showing a substantial reduction in average waiting time compared to the previous example. The paragraph emphasizes that the FCFS algorithm is non-preemptive, meaning once a process starts executing, it holds the CPU until it completes or waits for I/O. This characteristic can lead to inefficiencies in time-sharing systems where regular CPU access for each process is crucial.
🚫 Limitations and Disadvantages of FCFS Scheduling
The final paragraph highlights the limitations and disadvantages of the FCFS scheduling algorithm, particularly in time-sharing systems. It discusses the potential for longer waiting times and process starvation when large processes with long burst times are present in the queue. The paragraph explains that smaller processes may be delayed due to the non-preemptive nature of FCFS, which can be problematic for systems requiring regular CPU access. It concludes by stating that while FCFS is easy to understand and implement, it is not the most efficient algorithm, and upcoming lectures will cover more advanced CPU scheduling algorithms with potential advantages over FCFS.
Keywords
💡Scheduling Algorithms
💡First Come, First Serve (FCFS)
💡FIFO Queue
💡Process Control Block (PCB)
💡CPU Burst Time
💡Waiting Time
💡Gantt Chart
💡Average Waiting Time
💡Non-Preemptive
💡Time-Sharing Systems
Highlights
Introduction to CPU scheduling algorithms and the concept of First Come, First Serve (FCFS).
FCFS is the simplest CPU scheduling algorithm, allocating the CPU to the first process that requests it.
Explanation of how FCFS works, with a clear analogy to everyday queues.
FCFS is implemented using a FIFO (First-In, First-Out) queue, ensuring a fair and straightforward process order.
Description of the process control block (PCB) and its role in the ready queue for FCFS scheduling.
Analysis of the average waiting time in FCFS, which can be quite long depending on process burst times.
Example illustrating the waiting times for processes in FCFS, showing how it can vary with different arrival orders.
Demonstration of the impact of process order on average waiting time, highlighting the inefficiency of FCFS in certain scenarios.
FCFS is a non-preemptive scheduling algorithm, meaning once a process starts executing, it holds the CPU until completion or I/O request.
Discussion on the challenges FCFS presents for time-sharing systems where regular CPU access is crucial.
The potential for process starvation in FCFS when large burst time processes are present, delaying smaller ones.
FCFS's simplicity in understanding and implementation, despite its inefficiencies in certain conditions.
The importance of knowing FCFS as the foundational CPU scheduling algorithm before exploring more advanced ones.
The lecturer's intention to cover more CPU scheduling algorithms in subsequent lectures, promising a more comprehensive understanding.
Conclusion and summary of the key points regarding FCFS, emphasizing its limitations and the need for alternative scheduling methods.
Transcripts
from this lecture onwards we are going
to start studying about the different
scheduling algorithms so in this lecture
we are going to discuss the first
scheduling algorithm which is first come
first serve scheduling so we will see
what is this first come first serve
scheduling how does it work and if this
is a good scheduling algorithm
so this first come first served
scheduling algorithm is by far the
simplest CPU scheduling algorithm and
from the name itself it is easy to
understand that first come first served
scheduling means the process that
requests the CPU first is allocated the
CPU first that means whoever comes first
will be given the CPU first so the first
process that requests the CPU will be
given the CPU and the second process
will be given the second chance the
third process that comes will be given
the third chance and so on so it is
a fairly simple method that can be
followed and it is also a very simple
method that can be implemented and also
it is a simple method to understand so
the implementation of the FCFS policy or
the first come first serve policy is
easily managed with a FIFO queue so FIFO
stands for first-in first-out
so this is a normal method that we
follow in a normal queue so what we mean
by first-in first-out is the element
that comes in first will be the element
to go out first that means the element
that comes in first will be the element
to complete its work and go out first so
it is very similar to the normal queue
that we have in our day-to-day life so
think of this simple example where you
are standing in a queue in order to pay
your bills or something like that in
front of a counter so what happens is
the person who comes first will be the
one who will be given the chance to
perform the activity first so let's say
that suppose this queue is for paying
some kind of bills so the person who
comes first will get the first chance to
go into the counter and pay the bill and
the second person will be given the
second chance and so on so whenever a
new person comes he has to stand at the
end of the queue or at the tail of the queue
and then this pattern is continued so if
we are following a very fair kind of
queue then a person who comes at the end
will never be given a chance to go in
first or will never be given a chance to
come in between any of the other members
he has to stand at the end of the queue
and also if you are following a really
fair queue then whoever is standing in
queue has to follow that pattern nobody
will be allowed to take the chance until
his chance arrives so let's say that
there are some old people standing or
some children standing and if you are
having a fair queue they will not be
given a chance to go into the counter
until and unless their chance arrives you
cannot break this queue so I am taking
this example because in this first come
first served
scheduling this is what is going to be
exactly followed so keep this in mind and
as I told you FCFS policy is easily
managed by a FIFO queue so this is the
example of a FIFO queue where elements
are queued up like this and then the
first element is known as the head of
the queue and the last element is known
as the tail of the queue so when the head
of the queue which is this element
completes its execution and goes out
then this element over here will become
the head and if some other elements come
in then they will be added to the tail
of the queue that means they will be
kept over here and that element will
become the new tail so that is how a
FIFO queue is followed and by making use
of this FIFO queue this FCFS policy can
easily be implemented because it is
going to follow a
first-come-first-served basis so what
happens is when a process enters the
ready queue its PCB is linked onto the
tail of the queue so we have studied
about the different states that a process
can be in and we know that when a
process is ready to be executed it will
be in the ready queue and when it is in
the ready queue its PCB that is its
process control block will be linked to
the tail of the queue
if we are following our first come first
serve scheduling and then when the CPU
is free it is allocated to the process
at the head of the queue that means here
we have a queue where the processes are
queued up like this and then they are
all waiting for the CPU so when the CPU
is free the process which is at the head
of the queue that means this process
will be given the CPU for its execution
and then the running process is then
removed from the queue that means once
this process gets the CPU then this
process will be removed from this queue
because it has already got the CPU for
its execution so this process will be
removed and this second process over
here will be the new head and then if
some other processes come in again they
will be added to the tail and that will
be the new tail so that is how first
come first serve scheduling works now
let us see if this first come first
serve scheduling is an efficient
scheduling or not so here it says that
the average waiting time under the FCFS
policy however is often quite long so if
we are following a first-come
first-served scheduling policy or a
scheduling algorithm then the average
waiting time for each process is quite
long so let us see why that is by taking
an example so here let's consider the
following set of processes that arrive
at time zero so here we have three
processes p1 p2 and p3 that arrive at
time zero and want to use the CPU for their
execution and then the burst time
indicates the time that the process will
take to complete its execution that
means the time it needs to hold the cpu
for itself to complete its execution so
the process p1 has a burst time of 24
milliseconds p2 has a burst time of 3
milliseconds and P 3 also has a burst
time of 3 milliseconds
now if the processes arrive in the order
p1 p2 p3 and are served in FCFS order
then we will get the result shown in the
following Gantt chart so here let's say
that the processes arrive in this order
p1 p2 p3 that is
p1 is the first process to arrive
followed by P 2 and then P 3 so if they
arrive in this order and we are using the
first come first served scheduling
algorithm then we will see the results
shown in this Gantt chart in which it
shows the waiting time for each
process and the time each process
takes for executing in the cpu so if you
see here process p1 was the first one to
arrive so it did not have to wait any
amount of time so it is 0 and it took
the control of the CPU so p1 is
executing in the cpu and how long will
p1 execute it will execute for 24
milliseconds because that is the burst
time for process P 1 so it will execute
for 24 milliseconds and during the
execution of p1 p2 is already waiting in
queue because p2 was the second one to
arrive and p3 is also waiting behind p2
so how long did p2 have to wait p2 had
to wait 24 milliseconds because p1 took
24 milliseconds to complete its
execution and once p1 completes its
execution and frees the CPU then p2 will
get hold of the CPU and how long will p2
execute it will execute for 3
milliseconds so p2 execute for 3
milliseconds and meanwhile p3 is waiting
in the queue so how long did p3 wait
p3 had to wait for 27 milliseconds
why is that because p1 took 24
milliseconds then the next chance was
given to p2 which took 3 milliseconds so
24 plus 3 equal to 27 milliseconds so p3
had to wait twenty seven milliseconds
and once p3 gets the CPU at the 27th
millisecond it does its execution in the
cpu and it spends 3 milliseconds in the
cpu thus showing the last time in the
Gantt chart as 30 milliseconds so this is
what happens in a first come first serve
scheduling algorithm if the processes
arrive in the order p1 p2 p3 and these
are the burst times so as I said the
waiting time for p1 was 0 milliseconds
for p2 it was 24 milliseconds
and p3 was 27 milliseconds so
what is the average waiting time it is
zero plus 24 plus 27 divided by 3
because 0 is the waiting time for p1 24
is the waiting time for p2 27 is the
waiting time for p3 and total there are
three processes so we are adding up
these values and dividing it by 3 and
hence we get 17 milliseconds so the
average waiting time for this particular
scenario when we followed first come
first serve scheduling algorithm is 17
milliseconds all right now let's take
another example now let's say that the
same processes with the same burst time
arrive in a different order so let's say
if the processes arrive in the order P 2
P 3 P 1 however the result will be shown
in the following Gantt chart so here in
this chart we are having the time each
process takes and has to wait if they
come in the order P 2 P 3 and P 1 so
remember that the burst time is the same
like the previous example P 1 has a
burst time of 24 milliseconds and p2 and
p3 have burst times of 3 milliseconds so
here what happens is they arrive in
this order P 2 P 3 and P 1 so P 2 is the
first one to arrive so it did not have
to wait and it takes control of the CPU
and executes for 3 milliseconds because
the burst time of P 2 is 3 milliseconds
then the next one that arrived was P 3
which was waiting in queue so how long
did it have to wait it had to wait for 3
milliseconds because P 2 was executing
for 3 milliseconds so at 3 milliseconds
P 3 gets hold of the CPU and then P 3
also executes for 3 milliseconds because
the burst time of P 3 is 3 milliseconds
and meanwhile P 1 is waiting in queue
because P 1 was the next one to arrive
after P 3 how long did it have to wait
we see that P 3 completes its execution
at the sixth millisecond because the
execution time for P 3 is 3 milliseconds
so total how much time did P 1 have to
wait it had to wait 6 milliseconds
because P 2 took 3 milliseconds and P 3
also took 3 milliseconds so total of
six milliseconds p1 had to wait and then
at the sixth millisecond p1 got hold of
the CPU and it does its execution up to
the thirtieth millisecond because the
burst time of p1 is 24 milliseconds so
in this case let us see what is the
average waiting time so in this case as
I said the waiting time for p1 was 6
milliseconds as we see here and the
waiting time for p2 was 0 milliseconds
because it was the first process to
arrive and the waiting time for p3 was 3
milliseconds because it was the second
one to arrive and p2 took 3 milliseconds
so here if we calculate the average
waiting time it is 6 plus 0 plus 3
divided by 3 so if we calculate here
it comes out to 3 milliseconds so we see
that in this case the average waiting
time for the processes has reduced
substantially in the previous example we
saw that it was 17 milliseconds for the
same processes but in a different order
so we see that depending upon the order
in which they arrive and depending upon
their burst time the average waiting
time in case of a first-come
first-served scheduling algorithm varies
greatly so this reduction is substantial
thus the average waiting time under an
FCFS policy is generally not minimal and
may vary substantially if the processes'
CPU burst times vary greatly so this
reduction that we saw in these two
examples is a great reduction that means
there is a great difference between the
first and the second example the average
waiting times were really different 17
and 3 has a great difference between
them so the average waiting time in this
is not minimal generally and it may vary
substantially depending upon the burst
times of the processes and the order in
which they arrive and also the next
thing that we have to remember is the
FCFS scheduling algorithm is non
pre-emptive so we have already discussed
about pre-emptive and non pre-emptive
scheduling so when a process is
executing in the cpu if that process can
be stopped by some other process and if
some other process can get the cpu while
this process was still executing then
that is called a preemptive technique
but when one process has hold of the
CPU and the CPU cannot be taken away
from it until and unless that process
completes its execution or waits for an
i/o then that is known as a non
preemptive technique so the first come
first served
scheduling algorithm is non pre-emptive
that means if a process gets the CPU then
all the other processes have to wait
until and unless that process either
completes its execution or waits for
an i/o so that is why I took the example
of that queue in our day to day life so
until and unless the person who is at
the counter finishes his work no other
person can come and interfere in between
so only when he finishes his work in the
counter and goes out of the queue the
next person in queue will get the chance
so FCFS scheduling algorithm follows the
same policy so that is what is written
here once the CPU has been allocated to
a process that process keeps the CPU
until it releases the CPU either by
terminating or by requesting I/O so
this is what I just explained now the
problem with this FCFS scheduling is
that the FCFS algorithm is thus
particularly troublesome for time
sharing systems where it is important
that each user gets a share of the CPU
at regular intervals so if we are having
a time sharing system where each
process needs to get the CPU at
regular intervals of time it is not
going to be possible in this FCFS
algorithm because only when one process
completes its execution completely or goes
to a waiting-for-I/O state then the CPU
can be given to some other process so
until that no other process can take
control of the CPU so if a process is
having a very big burst time let's say
that there is a big process with a big
burst time that means it needs a huge
amount of time to complete its execution
in the CPU then what will happen until and
unless that process completes its
execution all the other processes in the
queue will be waiting and starving for
the CPU they will not get it until and
unless
that big process completes its execution
so it would be disastrous to allow one
process to keep the CPU for an extended
period so this will not be a very good
technique because let's say that we have
so many small processes which need very
little time to complete but at
the head of the queue there is a big
process that needs a big amount of time
to complete then until and unless that
big process completes its execution
all the other processes though they may
have a very small burst time will have
to wait so that is the bottleneck here
so if you are having some big processes
that means processes needing more time
then they are going to delay all the
processes that are waiting in the queue
so this is a disadvantage of this FCFS
scheduling algorithm so FCFS scheduling
algorithm is very easy to understand and
easy to implement but it is not an
efficient algorithm so this is the first
CPU scheduling algorithm that we need to
know about and moving on in the
following lectures we will be seeing
more CPU scheduling algorithms which may
be better than this and which may be
having more advantages as compared to
this FCFS scheduling algorithm so as a
basic this is the first algorithm that
you need to know about and I hope this
was clear to you thank you for watching
and see you in the next one