Lamport's Mutual Exclusion Distributed Systems Synchronization

Simplified Computer Science Concepts - Prof. Rutuja
8 Feb 2022, 13:30

Summary

TLDR: In this informative video, Professor Rutuja introduces Lamport's mutual exclusion algorithm, a crucial concept in distributed systems where multiple processes must access a shared resource without conflict. The video explains that mutual exclusion ensures only one process can enter a critical section at a time. Since distributed systems lack shared memory and synchronized clocks, message passing is the solution. Two types of algorithms are discussed: non-token-based, which use timestamps for synchronization, and token-based, which use a 'token' that grants access to the critical section. The video outlines the three essential messages for mutual exclusion: request, reply, and release. It also details the algorithm's steps, including sending a request with a timestamp, receiving replies from other processes, and the conditions required to enter the critical section. The performance of Lamport's algorithm is highlighted as 3(n-1) messages per critical section invocation, where n is the number of processes. The video concludes with an invitation for viewers to engage with the content and seek clarification on any doubts.

Takeaways

  • 📘 Mutual exclusion is a concept where only one process can access a critical section or shared resource at a time.
  • 🌐 In distributed systems, mutual exclusion is more complex due to the lack of shared memory and synchronized clocks.
  • 🚫 Traditional synchronization techniques like semaphores are not suitable for distributed systems.
  • 💌 Message passing is the basis for achieving mutual exclusion in distributed systems.
  • 🔄 There are two types of algorithms for mutual exclusion in distributed systems: non-token based and token-based algorithms.
  • 🕒 Non-token based algorithms like Lamport's and Ricart-Agrawala use timestamps to manage message passing and synchronization.
  • 🎟️ Token-based algorithms, such as Raymond's tree and Suzuki-Kasami, use a 'token' to indicate which process has access to the critical section.
  • 🔢 Lamport's logical clock assigns timestamps to requests to determine the priority for entering the critical section.
  • 📝 Each process maintains a queue to store timestamps, assuming messages are delivered in FIFO order.
  • 🤝 Three types of messages are involved in the process: request, reply, and release.
  • ⏳ A process enters the critical section only after its timestamp is at the top of the queue and it has received replies from all other processes.
  • 📉 Lamport's mutual exclusion algorithm has a performance of 3(n-1) messages per critical section invocation, where n is the number of processes (a sketch of the per-process state described in these takeaways follows this list).
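
To make the takeaways concrete, here is a minimal Python sketch (not from the video) of the state each process might keep and the three message kinds. The names MsgKind, QueueEntry, Message, and ProcessState, and the exact field layout, are illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum


class MsgKind(Enum):
    REQUEST = "request"   # ask all other processes for the critical section
    REPLY = "reply"       # acknowledge a request, granting permission
    RELEASE = "release"   # announce that the critical section is free again


@dataclass(order=True)
class QueueEntry:
    timestamp: int        # Lamport timestamp of the request
    pid: int              # process ID, used to break timestamp ties


@dataclass
class Message:
    kind: MsgKind
    timestamp: int
    sender: int


@dataclass
class ProcessState:
    pid: int
    clock: int = 0                                            # Lamport logical clock
    queue: list[QueueEntry] = field(default_factory=list)     # pending requests, kept ordered
    replies: set[int] = field(default_factory=set)            # PIDs that replied to my request
```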

Q & A

  • What is mutual exclusion in the context of computer science?

    -Mutual exclusion is a concept where only one process can execute in a critical section or utilize a shared resource at any given point in time. It prevents simultaneous access by multiple processes, avoiding conflicts and ensuring data consistency.

  • Why does the concept of mutual exclusion become more complex in distributed systems?

    -In distributed systems, the lack of shared memory and synchronized clocks among different nodes makes traditional mutual exclusion techniques like semaphores infeasible. Thus, message passing becomes the basis for synchronization in such environments.

  • What are the two types of algorithms used to achieve mutual exclusion in distributed systems?

    -The two types of algorithms are non-token based algorithms, which use message passing and timestamps for synchronization, and token-based algorithms, which involve a token that is passed among processes to grant access to the critical section.

  • How does Lamport's mutual exclusion algorithm work?

    -Lamport's mutual exclusion algorithm uses message passing and timestamps to determine which process has the highest priority to enter the critical section. Processes send request messages with timestamps to all other processes, receive reply messages, and only enter the critical section if their timestamp is at the top of their queue and they have received replies from all other processes.
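
As a rough illustration of that flow, here is a simplified, single-process-view Python sketch of the request/reply/release handling. It assumes a send(dest_pid, message) transport callback and FIFO delivery; the class and method names are my own, not terminology from the video.

```python
import heapq


class LamportMutex:
    """One process's view of Lamport's mutual exclusion (simplified sketch)."""

    def __init__(self, pid, all_pids, send):
        self.pid = pid
        self.others = [p for p in all_pids if p != pid]
        self.send = send               # callable: send(dest_pid, message_dict)
        self.clock = 0                 # Lamport logical clock
        self.queue = []                # min-heap of (timestamp, pid) request entries
        self.replies = set()           # PIDs that replied to our outstanding request
        self.my_request = None         # our own (timestamp, pid) entry, if any

    def _tick(self, received_ts=0):
        # Lamport clock rule: advance past both the local clock and any received timestamp.
        self.clock = max(self.clock, received_ts) + 1
        return self.clock

    def request_cs(self):
        ts = self._tick()
        self.my_request = (ts, self.pid)
        heapq.heappush(self.queue, self.my_request)
        self.replies.clear()
        for p in self.others:
            self.send(p, {"kind": "request", "ts": ts, "from": self.pid})

    def on_request(self, ts, sender):
        heapq.heappush(self.queue, (ts, sender))
        # In the basic algorithm every request is acknowledged with a timestamped reply.
        self.send(sender, {"kind": "reply", "ts": self._tick(ts), "from": self.pid})

    def on_reply(self, ts, sender):
        self._tick(ts)
        self.replies.add(sender)

    def on_release(self, ts, sender):
        self._tick(ts)
        # The releasing process's request leaves every local queue.
        self.queue = [e for e in self.queue if e[1] != sender]
        heapq.heapify(self.queue)

    def can_enter_cs(self):
        # Condition 1: our request sits at the head of the queue.
        # Condition 2: every other process has replied to it.
        return (self.my_request is not None
                and self.queue and self.queue[0] == self.my_request
                and self.replies == set(self.others))

    def release_cs(self):
        self.queue = [e for e in self.queue if e[1] != self.pid]
        heapq.heapify(self.queue)
        self.my_request = None
        for p in self.others:
            self.send(p, {"kind": "release", "ts": self._tick(), "from": self.pid})
```

Ties between equal timestamps resolve by process ID because the (timestamp, pid) tuples compare lexicographically; wiring these handlers to a real transport and message loop is left out of the sketch.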

  • What are the three types of messages involved in Lamport's mutual exclusion algorithm?

    -The three types of messages are request, reply, and release. Request is sent by a process wanting to enter the critical section, reply is sent by other processes to grant permission, and release is sent by a process after it has finished using the critical section to allow other processes to enter.

  • What is the significance of timestamps in Lamport's algorithm?

    -Timestamps are used to determine the priority of critical section requests. The smaller the timestamp, the higher the priority. This ensures a consistent and fair order of process execution within the critical section.
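
In the standard formulation (not spelled out in this answer), ties between equal timestamps are broken by process ID. Comparing (timestamp, pid) tuples captures both rules, as this small illustrative sketch shows:

```python
# Requests as (timestamp, pid) pairs: smaller timestamp wins,
# and equal timestamps fall back to the smaller process ID.
requests = [(2, 1), (1, 3), (2, 0)]
requests.sort()
print(requests)   # [(1, 3), (2, 0), (2, 1)] -> process 3 has the highest priority
```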

  • How does the logical clock concept by Lamport contribute to the mutual exclusion problem?

    -Lamport's logical clock provides a way to assign timestamps to events in a distributed system. This helps in ordering the requests to the critical section and determining which process should be given access based on the earliest timestamp, thus preventing conflicts.
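
The usual clock update rules (covered in the instructor's earlier video on logical clocks) look roughly like this; the function names are illustrative, not from the video.

```python
def on_local_event(clock):
    # Rule 1: before any local event or message send, advance the clock.
    return clock + 1


def on_receive(clock, msg_timestamp):
    # Rule 2: on receiving a message, jump past both the local clock
    # and the sender's timestamp, then count the receive event itself.
    return max(clock, msg_timestamp) + 1
```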

  • What is the performance of Lamport's mutual exclusion algorithm in terms of message complexity?

    -The performance of Lamport's mutual exclusion algorithm is 3(n-1) messages per critical section invocation, where n is the number of processes: n-1 request, n-1 reply, and n-1 release messages.
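
A quick sanity check of that count, matching the three-process example used later in the video:

```python
def messages_per_cs_invocation(n):
    # (n - 1) requests + (n - 1) replies + (n - 1) releases
    return 3 * (n - 1)

assert messages_per_cs_invocation(3) == 6   # 2 requests + 2 replies + 2 releases
```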

  • How does the queue maintained by each process in Lamport's algorithm function?

    -Each process maintains a queue that stores timestamps and process IDs (PIDs) of requests to enter the critical section. This queue is used to keep track of pending requests and to determine which process has the highest priority to enter the critical section based on the timestamp.
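
One convenient way to realize such a queue, assuming requests are stored as (timestamp, pid) pairs, is a min-heap; this is an implementation choice on my part, not something the video prescribes.

```python
import heapq

request_queue = []                     # per-process queue of pending CS requests
heapq.heappush(request_queue, (2, 1))  # request from process 1 at timestamp 2
heapq.heappush(request_queue, (1, 3))  # request from process 3 at timestamp 1

head = request_queue[0]                # peek at the highest-priority (smallest) request
print(head)                            # (1, 3) -> process 3 is next in line
```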

  • What are the two requirements for a process to enter the critical section in Lamport's algorithm?

    -The two requirements are: (1) The process's timestamp must be at the top of its queue, indicating it has the highest priority, and (2) The process must have received reply messages from all other processes, indicating they are not currently using or have granted permission to use the critical section.
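
Expressed as a small standalone predicate (the names and arguments are my own), the two conditions might look like this:

```python
def can_enter_cs(my_request, queue_head, replies_received, other_pids):
    """my_request and queue_head are (timestamp, pid) pairs;
    replies_received is the set of PIDs that have replied so far."""
    at_head = queue_head == my_request                 # condition 1: own request is first in the queue
    all_replied = replies_received >= set(other_pids)  # condition 2: a reply from every other process
    return at_head and all_replied

print(can_enter_cs((1, 3), (1, 3), {1, 2}, [1, 2]))    # True -> process 3 may enter
```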

  • How does the release message function in Lamport's mutual exclusion algorithm?

    -After a process has completed its execution in the critical section, it sends a release message to inform all other processes that the critical section is now free. This allows other processes that have requested access to the critical section to proceed.
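
On the receiving side, a release typically just removes the sender's entry from the local queue, which may promote the next request to the head. A minimal sketch, assuming (timestamp, pid) entries:

```python
def on_release(queue, sender_pid):
    """Drop the releasing process's request; the remaining head is the next candidate."""
    return [entry for entry in queue if entry[1] != sender_pid]

queue = [(1, 3), (2, 1)]
queue = on_release(queue, sender_pid=3)
print(queue)          # [(2, 1)] -> process 1's request is now at the head
```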

  • What is the role of the 'first in, first out' (FIFO) order in the context of message delivery in distributed systems?

    -The FIFO order ensures that messages are delivered in the same sequence they were sent. This is crucial in distributed systems for maintaining the correct order of operations, especially when dealing with timestamps and determining the priority of processes wanting to enter the critical section.

Outlines

00:00

🤖 Introduction to Mutual Exclusion in Distributed Systems

This paragraph introduces the concept of mutual exclusion in the context of distributed systems. It explains that mutual exclusion ensures that only one process can access a shared resource or critical section at a time. The paragraph also discusses the challenges of implementing mutual exclusion in distributed systems due to the lack of shared memory and synchronized clocks. It outlines two types of algorithms to achieve mutual exclusion: non-token-based algorithms, which use message passing and timestamps, and token-based algorithms, which involve a token being passed among processes. The paragraph concludes with an introduction to the general system model and the three types of messages involved in mutual exclusion: request, reply, and release.

05:00

📨 Lamport's Mutual Exclusion Algorithm and Process Synchronization

This paragraph delves into the specifics of Lamport's mutual exclusion algorithm, which is a non-token-based algorithm that uses message passing and timestamps to achieve synchronization among processes. It describes the process of a process sending a request to enter the critical section along with its timestamp, other processes replying to the request, and the use of a queue to maintain timestamps. The algorithm uses the smallest timestamp to determine which process gets priority access to the critical section. The paragraph also illustrates how the algorithm works with three processes, showing how they communicate and synchronize to ensure mutual exclusion. It concludes by explaining the requirements for a process to enter the critical section and the importance of the release message after the critical section is used.

10:02

🔄 Execution and Release of the Critical Section in Lamport's Algorithm

This paragraph continues the discussion on Lamport's mutual exclusion algorithm, focusing on the execution and release of the critical section. It explains the steps a process must take to enter the critical section, including making a request with a timestamp, receiving replies from other processes, and ensuring its timestamp is at the top of the queue. The paragraph also details the process of releasing the critical section, which involves a process removing its timestamp from the queue and sending a release message to all other processes. The performance of Lamport's algorithm is quantified as 3(n-1) messages per critical section invocation, where n is the number of processes. The paragraph concludes with a summary of the algorithm's steps and a reminder of its efficiency and process synchronization capabilities.


Keywords

💡Mutual Exclusion

Mutual exclusion is a fundamental concept in concurrent and distributed systems that ensures only one process can access a shared resource or critical section at a time. It is crucial for preventing data corruption and race conditions. In the video, mutual exclusion is the central theme, as it discusses how to achieve this in distributed systems where shared memory and synchronized clocks are not available.

💡Distributed Systems

Distributed systems are a collection of autonomous computers that work together to achieve a common goal. They are connected through a network and operate independently. The video emphasizes the challenges of implementing mutual exclusion in such systems due to the lack of shared memory and synchronized clocks.

💡Message Passing

Message passing is a method of communication between processes in distributed systems. It is the basis for inter-process communication and is essential for implementing mutual exclusion algorithms. In the context of the video, message passing is the mechanism used to coordinate processes' access to critical sections.

💡Lamport's Logical Clock

Lamport's logical clock is a concept used to provide a total ordering of events in a distributed system without a centralized clock. It assigns timestamps to events that can be used to determine the order in which they occurred. In the video, it is mentioned as a method to timestamp critical section requests, which helps in determining the priority of processes.

💡Timestamps

Timestamps are used to record the time at which events occur. In the context of the video, timestamps are attached to messages to ensure that the order of requests to enter the critical section is maintained. They are crucial for Lamport's mutual exclusion algorithm to function correctly.

💡Critical Section

A critical section is a part of a program where shared resources are accessed and is protected by mutual exclusion mechanisms to prevent simultaneous access by multiple processes. The video discusses how mutual exclusion algorithms, like Lamport's, ensure that only one process enters the critical section at any given time.

💡Non-Token Based Algorithms

Non-token based algorithms are a class of mutual exclusion algorithms that do not use a token to control access to the critical section. Instead, they rely on message passing and timestamps. Lamport's mutual exclusion algorithm is an example of such an algorithm, as discussed in the video.

💡Token Based Algorithms

Token-based algorithms use a 'token', a special message or signal, to control access to the critical section. A process must possess the token to enter the critical section. The video contrasts these with non-token based algorithms, mentioning Raymond's tree algorithm and the Suzuki-Kasami algorithm as examples.

💡Request, Reply, and Release Messages

These are the three types of messages used in Lamport's mutual exclusion algorithm. A process sends a 'request' message when it wants to enter the critical section. Other processes respond with 'reply' messages to acknowledge the request and grant access. Once done, a 'release' message is sent to indicate the process is done with the critical section. The video explains how these messages facilitate mutual exclusion.

💡Process ID (PID)

Process ID is a unique identifier assigned to each process in a system. In the context of the video, PID is used in conjunction with timestamps to identify the process requesting access to the critical section. It helps in maintaining the order and priority of requests.

💡First-In-First-Out (FIFO)

FIFO is a principle used in queuing systems where the first item to be stored is the first to be retrieved. In the video, it is assumed that messages are delivered in FIFO order, which is essential for the proper functioning of Lamport's mutual exclusion algorithm as it relies on the order of timestamps.

Highlights

Mutual exclusion is a concept that ensures only one process can access a critical section or shared resource at a time.

In distributed systems, the lack of shared memory and synchronized clocks requires alternative approaches to mutual exclusion, such as message passing.

Lamport's mutual exclusion algorithm is based on message passing and timestamps to achieve synchronization.

There are two types of algorithms for mutual exclusion in distributed systems: non-token based and token-based algorithms.

Lamport's algorithm and the Ricart-Agrawala algorithm are examples of non-token based algorithms that use timestamps.

Token-based algorithms, such as Raymond's tree algorithm and the Suzuki-Kasami algorithm, use a 'token' to determine which process can enter the critical section.

The token is analogous to a ticket, granting access to the critical section to the process that holds it.

Three types of messages are involved in achieving mutual exclusion: request, reply, and release.

Each process maintains a queue to store timestamps, assuming messages are delivered in FIFO order.

Lamport's logical clock is used to assign timestamps to critical section requests, determining the priority based on the smallest timestamp.

The execution of critical section requests is in the order of their timestamps, with the smallest timestamp being executed first.

A process sends a request message with its timestamp to all other processes when it wants to enter the critical section.

Reply messages are sent by processes not executing the critical section, indicating permission to enter.

A process can enter the critical section if its timestamp is at the top of the queue and it has received replies from all other processes.

After executing the critical section, a process sends a release message to allow other processes to use the critical section.

The performance of Lamport's mutual exclusion is 3(n-1) messages per critical section invocation, where n is the number of processes.

The algorithm ensures that mutual exclusion is achieved in a distributed system without relying on shared memory or synchronized clocks.

Transcripts

[00:00] Hello everyone, welcome to Simplified Computer Science Concepts by Professor Rutuja. Today we will be learning Lamport's mutual exclusion algorithm, so let us start.

[00:12] So what is this mutual exclusion in distributed systems? Prior to that, let us see what mutual exclusion is. Mutual exclusion is a concept wherein a critical section, or what we call a shared resource, is shared among various processes. Now suppose that shared resource is already being used by a particular process, and at the same time some other process, let us say P1, wants to enter that critical section. If the shared resource or critical section is being utilized by some other process, P1 cannot enter the critical section. So the concept of mutual exclusion says that, at any given point of time, only one process can execute in the critical section, or only one process can utilize the critical section.

[01:00] Now how about mutual exclusion in distributed systems? Consider a single system which has its own clock and its own shared memory. In distributed systems we do not have shared memory and a common clock, because the systems are geographically apart from each other; that is why the concepts of shared memory and a local clock do not work. So obviously mutual exclusion techniques such as semaphores will not work in distributed systems, as they require the shared memory concept. Hence an approach based on message passing is used to solve the mutual exclusion problem, because the inter-process communication used in distributed systems is message passing.

[01:57] So what solutions are there to achieve mutual exclusion in distributed systems? Obviously the solution will be based on message passing. There are basically two types of algorithms to achieve mutual exclusion. The first is non-token-based algorithms, wherein synchronization takes place with the help of timestamps, and these timestamps are passed using messages; the algorithms here are Lamport's mutual exclusion algorithm and the Ricart-Agrawala algorithm. Among token-based algorithms there are Raymond's tree algorithm and the Suzuki-Kasami algorithm, wherein a token is nothing but a holder; in a real-time example it is a ticket given to the process that is currently in the critical section. Once that process exits the critical section, the token is transferred to some other process that is requesting the critical section, so the process that holds the token is the one that will enter the critical section. This concept maps to the real-time example of movie tickets: only the person who has the movie ticket is allowed to enter the cinema hall.

[03:20] So let us see the general system model before starting Lamport's mutual exclusion algorithm. Basically, there are three messages involved in achieving mutual exclusion: request, reply and release. A request is sent by the process that wants to enter the critical section; that process will request all other processes in the network, saying "I want to enter the critical section", along with its own timestamp. A reply is sent by the other sites, or other processes in the network, that are not executing the critical section, saying "we have no problem with you entering the critical section". The last one is the release message: once the critical section has been used, it should be released so that some other process can execute in that critical section. These are the three messages required for synchronization via mutual exclusion.

[04:30] Each process maintains a queue and stores timestamps in it. The assumption is that messages are delivered in FIFO (first in, first out) order; that is why a queue is maintained. A timestamp is given to each critical section request, and it is assigned with the help of Lamport's logical clock, which we have already discussed in an earlier class. The timestamp is used to determine the priority of critical section requests: the smaller the timestamp, the higher the priority. So if one process's timestamp is one and another process's timestamp is two, the smaller timestamp, one, gets the higher priority and that process will enter the critical section. The execution of critical section requests is always in the order of their timestamps, so whichever request has the smaller timestamp is executed first.

[05:26] Now let us look at this illustration and then map the algorithm onto it. There are three processes in communication with each other, and we need to achieve synchronization for the three processes. Initially, process P3 wants to enter the critical section, so first it sends a request message, with its own timestamp, to all the other sites, that is, processes P1 and P2. This timestamp is maintained in a queue; each process has a queue with two fields, the timestamp (TS) and the process ID (PID). So what is maintained in P3's queue? It says that at timestamp 1, process 3 is requesting the critical section. The same queue contents are passed to process P2 and process P1, saying that the current request for the critical section was made at timestamp 1 by process P3.

[06:32] Now, after some time, process P1 also wants to enter the critical section, so P1 also sends a request message with its timestamp, and this timestamp is appended to the queues. Initially, in its own queue, P1 has timestamp 2 and process ID 1, because the critical section is requested by P1, and this entry is sent to P2 and P3. When it reaches P2, the queue already has timestamp 1 with PID 3, so P1's entry is appended to P2's queue, and at the same time timestamp 2 with process ID 1 also gets appended to P3's queue.

[07:28] Now what happens is that, after sending a request, a reply message is necessary; a reply message is sent by the other processes. P2, which is not involved in accessing the critical section, immediately sends a reply message: "yes, you can use the critical section as far as I am concerned, because I am not going to use it in the near future." The same happens with P1; however, P1 has already requested the critical section itself, so it checks its own queue: "I also want to enter the critical section, so why should I send a reply?" But it sees in the queue that process P3 has timestamp 1, which is smaller, meaning P3 has the higher priority, so it must send a reply. Hence process P1 sends a reply message to process P3. Similarly, process P1 also gets replies from P3 and P2: P2 says the same thing, that it is not involved in the critical section so P1 can use it, and at the same time P3 also sends its reply.

[08:46] But now, if both processes that are requesting the critical section are getting replies from the other processes, which one will enter? For this there are two requirements to enter the critical section. The first requirement is that the timestamp of the process that wants to enter the critical section should be at the top of the queue. The second requirement is that it should receive reply messages from all the other processes. Here, which are all the other processes? If we consider P3, they are processes P1 and P2. P3 is getting replies from P2 and P1, and its timestamp is at the top of the queue; that is why the critical section will be allocated to process P3.

[09:44] Once the critical section is allocated to process P3, it executes it, and after execution it sends the release message. Why a release message? So that some other process which has already requested the critical section can use it. After process P3 completes its execution, it removes its timestamp from the queue, and the updated queue is left with only process ID 1, which means the critical section should next be allocated to process 1. Again we check the two conditions: the timestamp of process 1 is at the top of the queue, and it has received replies from the other processes; it has received a reply from P3 and a reply from P2. Hence P1 enters the critical section, and after P1 enters and utilizes it, it releases the critical section. Now the queue contents are empty: Q1, Q2 and Q3 are now empty, which means no process is requesting the critical section.

[11:14] So this is the algorithm. To enter the critical section, a process first has to make a request along with its timestamp and process ID, and that entry has to be maintained in a queue. Then, after receiving a request message, a reply message needs to be sent to that particular process, and again the timestamp is maintained in the queue. To execute the critical section, two requirements are necessary: the process should receive reply messages from all the other sites, and its timestamp should be at the top of the queue. And to release the critical section, whenever a process exits the critical section, it has to remove its own timestamp from the queue and send release messages to all the other sites.

[12:08] So this was the algorithm, and the performance of Lamport's mutual exclusion is 3(n-1) messages per critical section invocation, that is, n-1 request, n-1 reply and n-1 release messages. Why so? Here n is the number of processes. How many processes are involved in our illustration? Three. So 3 - 1 = 2: for each critical section invocation there are two requests, two replies coming in, and two release messages going out. That is why 3(n-1) is the performance of Lamport's mutual exclusion algorithm.

[13:09] So that's it about Lamport's mutual exclusion. Thank you for watching this video. Please like, share and subscribe to the channel Simplified Computer Science Concepts by Professor Rutuja. For any doubts, you can post them in the comments. Thank you for watching.


Related Tags
Mutual Exclusion, Distributed Systems, Lamport's Algorithm, Message Passing, Critical Section, Process Synchronization, Timestamps, Queue Management, Interprocess Communication, System Model, Computer Science