Scheduling Algorithms - First Come First Served (FCFS)

Neso Academy
5 Sept 2019 · 17:00

Summary

TL;DR: This lecture introduces the first-come, first-served (FCFS) CPU scheduling algorithm, the simplest of its kind. FCFS operates on a queue basis, where processes are executed in the order they arrive, managed by a FIFO (first-in, first-out) queue. The algorithm is non-preemptive, meaning once a process starts, it holds the CPU until completion or an I/O request. While easy to understand and implement, FCFS can lead to long average waiting times, especially with varying process burst times. The lecture also highlights its inefficiency in time-sharing systems, where regular CPU access is crucial, and concludes by comparing the waiting times of processes under different arrival orders.

Takeaways

  • 📝 The script discusses the first come, first served (FCFS) CPU scheduling algorithm.
  • 🧠 The FCFS algorithm is the simplest CPU scheduling method, where processes are allocated CPU time in the order they arrive.
  • 📉 FCFS is implemented using a FIFO (First-In, First-Out) queue, ensuring a straightforward management system.
  • 👥 The algorithm is non-preemptive, meaning once a process starts executing, it holds the CPU until it completes or waits for I/O.
  • ⏱ The average waiting time for processes can vary greatly under FCFS, depending on the order of arrival and burst times.
  • 📉 An example given in the script illustrates that changing the order of process arrival can significantly affect the average waiting time.
  • 🚫 FCFS is not suitable for time-sharing systems where regular CPU access is required for each user or process.
  • 🔍 The script emphasizes that the FCFS algorithm can lead to inefficiency, especially when large processes delay smaller ones in the queue.
  • 🤔 The disadvantages of FCFS include the potential for process starvation, where smaller processes are kept waiting by longer ones.
  • 🔑 The script serves as an introduction to CPU scheduling algorithms, with more advanced algorithms to be discussed in subsequent lectures.
  • 👋 The presenter concludes by hoping the explanation of FCFS was clear and invites viewers to the next lecture for further exploration of CPU scheduling.

Q & A

  • What is the First Come First Serve (FCFS) scheduling algorithm?

    -The FCFS scheduling algorithm is the simplest CPU scheduling algorithm where processes are allocated the CPU in the order they arrive, ensuring that the first process to request the CPU is the first to be served.

  • How does the FCFS algorithm relate to the concept of a queue?

    -The FCFS algorithm is implemented using a FIFO (First-In First-Out) queue, where the process that arrives first is the first to be executed, similar to how people wait in a queue for service.

  • What is the main disadvantage of the FCFS scheduling algorithm in terms of waiting time?

    -The main disadvantage of the FCFS algorithm is that it can result in a long average waiting time for processes, especially if there are processes with significantly longer burst times.

  • How does the order of process arrival affect the average waiting time in the FCFS algorithm?

    -The order of process arrival greatly affects the average waiting time in the FCFS algorithm. If shorter processes arrive first, the average waiting time is reduced, whereas if longer processes arrive first, it can significantly increase the waiting time for subsequent processes.

  • What is a Gantt chart in the context of the FCFS scheduling algorithm?

    -A Gantt chart in the context of the FCFS scheduling algorithm is a graphical representation that shows the time each process spends waiting and the time it spends executing on the CPU.

  • Is the FCFS scheduling algorithm pre-emptive or non-pre-emptive?

    -The FCFS scheduling algorithm is non-pre-emptive, meaning once a process has started executing, it cannot be interrupted by another process until it completes its execution or waits for I/O.

  • Why is the FCFS algorithm not suitable for time-sharing systems?

    -The FCFS algorithm is not suitable for time-sharing systems because it does not guarantee that each user or process gets a share of the CPU at regular intervals, potentially leading to some processes waiting indefinitely if a long-running process is ahead in the queue.

  • What is the significance of the burst time in the context of the FCFS scheduling algorithm?

    -In the context of the FCFS scheduling algorithm, the burst time is the duration a process needs to hold the CPU to complete its execution. It significantly impacts the waiting time of other processes in the queue.

  • How is the process control block (PCB) used in the FCFS scheduling algorithm?

    -In the FCFS scheduling algorithm, when a process enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process's PCB is then removed from the queue (a minimal code sketch of this queue handling follows this Q&A list).

  • What is the impact of having a large process with a long burst time in the FCFS queue?

    -Having a large process with a long burst time in the FCFS queue can cause a bottleneck, as it delays the execution of all other processes waiting in the queue, potentially causing them to wait for an extended period.
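
To make the queue handling described above concrete, here is a minimal sketch in Python. It is not from the lecture: the `PCB` record, `admit`, and `dispatch` names are illustrative, and it only models the FIFO behaviour (PCB linked to the tail when a process becomes ready, CPU given to whatever is at the head).

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    """Hypothetical process control block: just an id and a CPU burst time."""
    pid: str
    burst: int

ready_queue = deque()      # FIFO: PCBs are linked at the tail, served from the head

def admit(pcb):
    """A process entering the ready state has its PCB linked to the tail of the queue."""
    ready_queue.append(pcb)

def dispatch():
    """When the CPU is free, allocate it to the process at the head of the queue."""
    return ready_queue.popleft() if ready_queue else None

# Processes become ready in the order P1, P2, P3 (burst times from the lecture's example)
for pcb in (PCB("P1", 24), PCB("P2", 3), PCB("P3", 3)):
    admit(pcb)

running = dispatch()       # P1 gets the CPU first and its PCB leaves the ready queue
print(running, list(ready_queue))
```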

Outlines

00:00

📝 Introduction to First-Come, First-Served (FCFS) Scheduling

This paragraph introduces the concept of the first-come, first-served (FCFS) scheduling algorithm, which is the simplest CPU scheduling method. The FCFS algorithm operates on a first-in, first-out (FIFO) basis, meaning the process that requests the CPU first is the first to be served. The paragraph explains the straightforward nature of this scheduling method and its ease of implementation using a queue analogy, where individuals in a queue wait for their turn to perform an activity. It also discusses how a process control block (PCB) is linked to the queue and how the CPU is allocated to the process at the head of the queue once it becomes available.

05:02

🕒 Analyzing Efficiency of FCFS Scheduling with Examples

The second paragraph delves into the efficiency of the FCFS scheduling algorithm by examining the average waiting time for processes. It uses an example with three processes (P1, P2, and P3) to illustrate how the waiting times can be substantial, especially when processes have varying burst times. The paragraph presents a Gantt chart to show the execution and waiting times for each process under FCFS scheduling. It also contrasts the average waiting times when the same processes arrive in a different order, demonstrating how the order of arrival can significantly impact the efficiency of the scheduling.
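
As a cross-check on the numbers in this example, here is a small sketch (assuming, as in the lecture, that all processes arrive at time 0; the helper name `fcfs_waiting_times` is illustrative). Each process's waiting time is simply the sum of the burst times of the processes served before it.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process = total burst time of the processes served before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this process waited for everything already run
        elapsed += burst        # the CPU is then held for this burst
    return waits

# Arrival order P1, P2, P3 with burst times 24, 3, 3 ms
waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # 17.0 ms average waiting time
```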

10:03

🔄 Impact of Process Arrival Order on FCFS Scheduling Efficiency

This paragraph further explores how the order in which processes arrive affects the efficiency of the FCFS scheduling algorithm. It provides another example with the same processes but in a different arrival sequence (P2, P3, P1), showing a substantial reduction in average waiting time compared to the previous example. The paragraph emphasizes that the FCFS algorithm is non-preemptive, meaning once a process starts executing, it holds the CPU until it completes or waits for I/O. This characteristic can lead to inefficiencies in time-sharing systems where regular CPU access for each process is crucial.
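
Reusing the illustrative `fcfs_waiting_times` helper from the sketch above, the reordered arrival P2, P3, P1 reproduces the much smaller average quoted here:

```python
# Same burst times, but now P2 (3 ms) and P3 (3 ms) run before P1 (24 ms)
waits = fcfs_waiting_times([3, 3, 24])
print(waits)                    # [0, 3, 6]  -> P2 waits 0, P3 waits 3, P1 waits 6
print(sum(waits) / len(waits))  # 3.0 ms average waiting time
```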

15:04

🚫 Limitations and Disadvantages of FCFS Scheduling

The final paragraph highlights the limitations and disadvantages of the FCFS scheduling algorithm, particularly in time-sharing systems. It discusses the potential for longer waiting times and process starvation when large processes with long burst times are present in the queue. The paragraph explains that smaller processes may be delayed due to the non-preemptive nature of FCFS, which can be problematic for systems requiring regular CPU access. It concludes by stating that while FCFS is easy to understand and implement, it is not the most efficient algorithm, and upcoming lectures will cover more advanced CPU scheduling algorithms with potential advantages over FCFS.
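
The bottleneck described here (commonly called the convoy effect, a term not used in the video) is easy to see with the same illustrative helper; the burst times below are hypothetical, chosen only to show one long job dominating the average.

```python
# One 100 ms process at the head of the queue, followed by five 1 ms processes
waits = fcfs_waiting_times([100, 1, 1, 1, 1, 1])
print(sum(waits) / len(waits))  # 85.0 ms average, dominated by the long job

# The same work with the short jobs first
waits = fcfs_waiting_times([1, 1, 1, 1, 1, 100])
print(sum(waits) / len(waits))  # 2.5 ms average
```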

Keywords

💡 Scheduling Algorithms

Scheduling algorithms are methods used by operating systems to determine the order in which processes are given access to the CPU. In the context of this video, the focus is on CPU scheduling algorithms, which are crucial for efficient system performance. The script introduces the concept and discusses the first come, first served (FCFS) algorithm as the simplest among them.

💡 First Come, First Served (FCFS)

FCFS is a CPU scheduling algorithm that allocates the CPU to the process that requests it first. It is highlighted in the script as the simplest and most straightforward method, where processes are queued in the order of their arrival and served on a first-come-first-served basis, similar to a real-life queue at a counter.

💡 FIFO Queue

FIFO stands for 'First-In, First-Out,' a principle used in implementing the FCFS scheduling algorithm. The script explains that it is a queue where the process that arrives first is the first to be executed, and this principle is analogous to everyday queues, such as waiting in line to pay bills.

💡 Process Control Block (PCB)

A PCB is a data structure used by an operating system to store and manage the state of a process. In the script, it is mentioned that when a process enters the ready queue, its PCB is linked to the tail of the queue, which is part of the FCFS scheduling process.

💡 CPU Burst Time

CPU burst time refers to the amount of time a process needs to hold the CPU to complete its execution. The script uses this term to describe the execution time of processes in the FCFS scheduling algorithm, illustrating how different burst times can affect the waiting times of other processes.

💡 Waiting Time

Waiting time in the context of the script refers to the time a process spends waiting in the queue for its turn to use the CPU. The script provides examples to show how the FCFS algorithm can result in varying waiting times for different processes based on their order of arrival and burst times.

💡 Gantt Chart

A Gantt chart is a type of bar chart used to illustrate the duration and order of processes or tasks. In the script, a Gantt chart is used to visually represent the execution and waiting times of processes under the FCFS scheduling algorithm, providing a clear picture of how processes are managed over time.
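
For readers who want to reproduce the chart in text form, here is a small sketch (not from the video; the `print_gantt` name, scaling, and symbols are arbitrary choices) that prints a crude Gantt chart for an FCFS schedule where all processes arrive at time 0.

```python
def print_gantt(processes):
    """processes: list of (name, burst) pairs, all arriving at time 0, served FCFS."""
    time = 0
    cells, marks = [], ["0"]
    for name, burst in processes:
        cells.append(f" {name}:{'#' * burst} ")  # one '#' per millisecond of CPU burst
        time += burst
        marks.append(str(time))
    print("|" + "|".join(cells) + "|")
    print("  ".join(marks))                      # boundary times: 0, 24, 27, 30

print_gantt([("P1", 24), ("P2", 3), ("P3", 3)])
```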

💡 Average Waiting Time

The average waiting time is calculated by summing the waiting times of all processes and dividing by the number of processes. The script discusses how the average waiting time under the FCFS policy can vary greatly depending on the order of process arrival and their burst times, indicating the efficiency of the scheduling algorithm.
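
Written out in symbols (a notation not used in the video): if the processes are indexed in the order they are served, $W_i$ is the waiting time of the $i$-th process, $b_j$ are the burst times, and all processes are assumed to arrive at time 0, then

```latex
W_i = \sum_{j < i} b_j, \qquad
\bar{W} = \frac{1}{n} \sum_{i=1}^{n} W_i
```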

💡 Non-Preemptive

Non-preemptive refers to a scheduling technique where once a process has started executing, it cannot be interrupted by another process until it either completes or waits for I/O. The script explains that FCFS is a non-preemptive algorithm, which can lead to issues in time-sharing systems where fairness and regular CPU access are required.

💡 Time-Sharing Systems

Time-sharing systems are a type of computing system that allows multiple users to simultaneously share the CPU resources. The script points out that the FCFS algorithm can be problematic for time-sharing systems because it does not guarantee regular intervals of CPU access for each user, potentially leading to some users experiencing long waiting times.

Highlights

Introduction to CPU scheduling algorithms and the concept of First Come, First Serve (FCFS).

FCFS is the simplest CPU scheduling algorithm, allocating the CPU to the first process that requests it.

Explanation of how FCFS works, with a clear analogy to everyday queues.

FCFS is implemented using a FIFO (First-In, First-Out) queue, ensuring a fair and straightforward process order.

Description of the process control block (PCB) and its role in the ready queue for FCFS scheduling.

Analysis of the average waiting time in FCFS, which can be quite long depending on process burst times.

Example illustrating the waiting times for processes in FCFS, showing how it can vary with different arrival orders.

Demonstration of the impact of process order on average waiting time, highlighting the inefficiency of FCFS in certain scenarios.

FCFS is a non-preemptive scheduling algorithm, meaning once a process starts executing, it holds the CPU until completion or I/O request.

Discussion on the challenges FCFS presents for time-sharing systems where regular CPU access is crucial.

The potential for process starvation in FCFS when large burst time processes are present, delaying smaller ones.

FCFS's simplicity in understanding and implementation, despite its inefficiencies in certain conditions.

The importance of knowing FCFS as the foundational CPU scheduling algorithm before exploring more advanced ones.

The lecturer's intention to cover more CPU scheduling algorithms in subsequent lectures, promising a more comprehensive understanding.

Conclusion and summary of the key points regarding FCFS, emphasizing its limitations and the need for alternative scheduling methods.

Transcripts

00:00

From this lecture onwards we are going to start studying the different scheduling algorithms. In this lecture we are going to discuss the first scheduling algorithm, which is first come first serve scheduling. We will see what this first come first serve scheduling is, how it works, and whether it is a good scheduling algorithm. This first come first served scheduling algorithm is by far the simplest CPU scheduling algorithm, and from the name itself it is easy to understand that first come first served scheduling means the process that requests the CPU first is allocated the CPU first. That means whoever comes first will be given the CPU first: the first process that requests the CPU will be given the CPU, the second process will be given the second chance, the third process that comes will be given the third chance, and so on. So it is a fairly simple method to follow, a simple method to implement, and also a simple method to understand.

01:06

The implementation of the FCFS policy, or the first come first serve policy, is easily managed with a FIFO queue. FIFO stands for first-in first-out. This is the normal method that we follow in an ordinary queue. What we mean by first-in first-out is that the element that comes in first will be the element to go out first; that is, the element that comes in first will be the element to complete its work and go out first. So it is very similar to the normal queues that we have in our day-to-day life. Think of this simple example where you are standing in a queue in front of a counter in order to pay your bills or something like that. The person who comes first will be the one who is given the chance to perform the activity first. Let's say that this queue is for paying some kind of bill: the person who comes first will get the first chance to go to the counter and pay the bill, the second person will be given the second chance, and so on. Whenever a new person comes, he has to stand at the end of the queue, or at the tail of the queue, and this pattern is continued. If we are following a very fair kind of queue, then a person who comes at the end will never be given a chance to go in first, or to come in between any of the other members; he has to stand at the end of the queue. And also, if you are following a really fair queue, then whoever is standing in the queue has to follow that pattern; nobody will be allowed to take their turn until their chance arrives. Let's say there are some old people standing, or some children standing; if we are having a fair queue, they will not be given a chance to go to the counter until and unless their chance arrives. You cannot break this queue. I am taking this example because in first come first served scheduling, this is exactly what is going to be followed, so keep this in mind.

03:09

As I told you, the FCFS policy is easily managed by a FIFO queue. Here is the example of a FIFO queue, where elements are queued up one after the other: the first element is known as the head of the queue and the last element is known as the tail of the queue. When the head of the queue completes its execution and goes out, the next element becomes the head, and if some other elements come in, they are added to the tail of the queue; that means they are kept at the end, and that element becomes the new tail. That is how a FIFO queue works, and by making use of this FIFO queue the FCFS policy can easily be implemented, because it is going to follow a first-come-first-served basis. So what happens is, when a process enters the ready queue, its PCB is linked onto the tail of the queue. We have studied the different states that a process can be in, and we know that when a process is ready to be executed it will be in the ready queue, and when it is in the ready queue its PCB, that is, its process control block, will be linked to the tail of the queue if we are following first come first serve scheduling. Then, when the CPU is free, it is allocated to the process at the head of the queue. That means we have a queue where the processes are queued up and all waiting for the CPU; when the CPU is free, the process at the head of the queue is given the CPU for its execution, and the running process is then removed from the queue, because it has already got the CPU for its execution. So that process is removed, the second process becomes the new head, and if some other processes come in, they are again added to the tail and become the new tail. That is how first come first serve scheduling works.

05:16

Now let us see whether this first come first serve scheduling is an efficient scheduling or not. It says here that the average waiting time under the FCFS policy, however, is often quite long. If we are following a first-come first-served scheduling policy or scheduling algorithm, the average waiting time for each process is quite long. Let us see why that is by taking an example. Consider the following set of processes that arrive at time zero. Here we have three processes, P1, P2 and P3, that arrive at time zero and want to use the CPU for their execution, and the burst time indicates the time that a process will take to complete its execution, that is, the time it needs to hold the CPU for itself to complete its execution. The process P1 has a burst time of 24 milliseconds, P2 has a burst time of 3 milliseconds, and P3 also has a burst time of 3 milliseconds. Now if the processes arrive in the order P1, P2, P3 and are served in FCFS order, then we will get the result shown in the following Gantt chart. So let's say that the processes arrive in this order, P1, P2, P3; that is, P1 is the first process to arrive, followed by P2 and then P3. If they arrive in this order and we are using the first come first served scheduling algorithm, then we will see the results shown in this Gantt chart, which shows the waiting time for each process and the time each process takes executing on the CPU.

07:01

If you see here, process P1 was the first one to arrive, so it did not have to wait any amount of time; its waiting time is 0, and it took control of the CPU. P1 is executing on the CPU, and how long will P1 execute? It will execute for 24 milliseconds, because that is the burst time of process P1. During the execution of P1, P2 is already waiting in the queue, because P2 was the second one to arrive, and P3 is also waiting behind P2. So how long did P2 have to wait? P2 had to wait 24 milliseconds, because P1 took 24 milliseconds to complete its execution. Once P1 completes its execution and frees the CPU, P2 gets hold of the CPU, and how long will P2 execute? It will execute for 3 milliseconds, and meanwhile P3 is waiting in the queue. So how long did P3 wait? P3 had to wait for 27 milliseconds. Why is that? Because P1 took 24 milliseconds, then the next chance was given to P2, which took 3 milliseconds, so 24 plus 3 equals 27 milliseconds. So P3 had to wait twenty-seven milliseconds, and once P3 gets the CPU at the 27th millisecond, it does its execution on the CPU and spends 3 milliseconds there, so the last time shown in the Gantt chart is 30 milliseconds. This is what happens in a first come first serve scheduling algorithm if the processes arrive in the order P1, P2, P3 with these burst times. As I said, the waiting time for P1 was 0 milliseconds, for P2 it was 24 milliseconds, and for P3 it was 27 milliseconds. So what is the average waiting time? It is 0 plus 24 plus 27 divided by 3, because 0 is the waiting time for P1, 24 is the waiting time for P2, 27 is the waiting time for P3, and in total there are three processes. We add up these values and divide by 3, and hence we get 17 milliseconds. So the average waiting time for this particular scenario, when we followed the first come first serve scheduling algorithm, is 17 milliseconds.

09:25

All right, now let's take another example. Let's say that the same processes with the same burst times arrive in a different order, say the order P2, P3, P1; then the result will be as shown in the following Gantt chart. In this chart we have the time each process takes and has to wait if they come in the order P2, P3 and P1. Remember that the burst times are the same as in the previous example: P1 has a burst time of 24 milliseconds, and P2 and P3 have burst times of 3 milliseconds. Here they arrive in the order P2, P3 and P1. P2 is the first one to arrive, so it did not have to wait; it takes control of the CPU and executes for 3 milliseconds, because the burst time of P2 is 3 milliseconds. Then the next one that arrived was P3, which was waiting in the queue. How long did it have to wait? It had to wait for 3 milliseconds, because P2 was executing for 3 milliseconds. So at 3 milliseconds P3 gets hold of the CPU, and P3 also executes for 3 milliseconds, because the burst time of P3 is 3 milliseconds. Meanwhile P1 is waiting in the queue, because P1 was the next one to arrive after P3. How long did it have to wait? We see that P3 completes its execution at the sixth millisecond, because the execution time of P3 is 3 milliseconds. So in total, how much time did P1 have to wait? It had to wait 6 milliseconds, because P2 took 3 milliseconds and P3 also took 3 milliseconds, a total of six milliseconds. Then at the sixth millisecond P1 got hold of the CPU, and it does its execution up to the thirtieth millisecond, because the burst time of P1 is 24 milliseconds.

11:18

In this case let us see what the average waiting time is. As I said, the waiting time for P1 was 6 milliseconds, as we see here; the waiting time for P2 was 0 milliseconds, because it was the first process to arrive; and the waiting time for P3 was 3 milliseconds, because it was the second one to arrive and P2 took 3 milliseconds. So if we calculate the average waiting time, it is 6 plus 0 plus 3 divided by 3, which comes out to 3 milliseconds. We see that in this case the average waiting time for the processes has reduced substantially; in the previous example we saw that it was 17 milliseconds for the same processes, but in a different order. So we see that depending upon the order in which they arrive and depending upon their burst times, the average waiting time in the case of a first-come first-served scheduling algorithm varies greatly. This reduction is substantial; thus the average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly. The reduction that we saw between these two examples is a great reduction; there is a great difference between the first and the second example, and 17 and 3 are very different average waiting times. So the average waiting time under this policy is generally not minimal, and it may vary substantially depending upon the burst times of the processes and the order in which they arrive.

12:58

The next thing that we have to remember is that the FCFS scheduling algorithm is non-preemptive. We have already discussed preemptive and non-preemptive scheduling. When a process is executing on the CPU, if that process can be stopped by some other process, and some other process can get the CPU while this process is still executing, then that is called a preemptive technique. But when one process has hold of the CPU and the CPU cannot be taken away from it until and unless that process completes its execution or waits for an I/O, then that is known as a non-preemptive technique. The first come first served scheduling algorithm is non-preemptive; that means if a process gets the CPU, all the other processes have to wait until and unless that process either completes its execution or waits for an I/O. That is why I took the example of the queue in our day-to-day life: until and unless the person who is at the counter finishes his work, no other person can come and interfere in between. Only when he finishes his work at the counter and goes out of the queue will the next person in the queue get the chance. The FCFS scheduling algorithm follows the same policy, and that is what is written here: once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O. This is what I just explained.

14:28

Now the problem with this FCFS scheduling is that the FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user gets a share of the CPU at regular intervals. If we have a time-sharing system, where each process needs to get the CPU at regular intervals of time, that is not going to be possible with this FCFS algorithm, because only when one process completes its execution entirely, or goes into a waiting-for-I/O state, can the CPU be given to some other process; until then, no other process can take control of the CPU. So if a process has a very big burst time, let's say there is a big process with a big burst time, meaning it needs a huge amount of time to complete its execution on the CPU, then what will happen? Until and unless that process completes its execution, all the other processes in the queue will be waiting and starving for the CPU; they will not get it until and unless that big process completes its execution. So it would be disastrous to allow one process to keep the CPU for an extended period. This is not a very good technique, because let's say that we have many small processes which need very little time to complete, but at the head of the queue there is a big process that needs a large amount of time to complete; then until and unless that big process completes its execution, all the other processes, though they may have very small burst times, will have to wait. That is the bottleneck here: if you have some big processes, meaning processes needing more time, they are going to delay all the processes that are waiting in the queue. So this is a disadvantage of the FCFS scheduling algorithm.

16:14

The FCFS scheduling algorithm is very easy to understand and easy to implement, but it is not an efficient algorithm. This is the first CPU scheduling algorithm that we need to know about, and moving on, in the following lectures we will be seeing more CPU scheduling algorithms which may be better than this and which may have more advantages as compared to the FCFS scheduling algorithm. So, as the basic one, this is the first algorithm that you need to know about, and I hope this was clear to you. Thank you for watching, and see you in the next one.


Related Tags
CPU Scheduling, FCFS Algorithm, Scheduling Efficiency, Operating Systems, Queue Management, Process Waiting, Non-Preemptive, Time Sharing, Computing Lecture, Algorithm Analysis