Lab Exp07: Prevention of Race Condition Using Mutex in C Programming in a Multithreaded Process

Pushpendra Sir Classes
16 Apr 2023, 08:36

Summary

TL;DR: The video script discusses the issue of race conditions in multi-threaded programming, demonstrating how a shared variable can be incorrectly incremented due to improper context switching. It then shows how to resolve this using a mutex, a synchronization mechanism that ensures mutual exclusion, thus preventing race conditions by allowing only one thread to access a shared resource at a time. The script concludes with a demonstration that, with the mutex in place, the final value of the shared variable consistently reaches the expected two million, regardless of thread execution order.

Takeaways

  • 🔒 The script discusses the problem of race conditions in multi-threaded programming.
  • 📝 The issue arises when two threads try to increment a shared variable, expecting a final value of two million but getting inconsistent results.
  • 👷‍♂️ The script demonstrates how to resolve race conditions using mutexes, which provide mutual exclusion for shared resources.
  • 🔨 Mutexes are declared and initialized using specific pthread functions to ensure safe access to shared data.
  • 🔒 Before accessing the shared variable, threads must lock the mutex with `pthread_mutex_lock`.
  • 🔓 After updating the shared variable, threads must unlock the mutex with `pthread_mutex_unlock`.
  • 🛠️ The script shows the implementation of mutex locking within the threads' increment loop to prevent race conditions.
  • 🔄 The use of a mutex ensures that only one thread can update the shared variable at a time, blocking others until the lock is released.
  • 💡 The script illustrates that with mutex locking, the final value of the shared variable consistently reaches the expected two million.
  • 🔄 The script mentions that the order of thread execution may vary, but the use of a mutex ensures a consistent outcome.
  • 🚀 The script concludes by stating that mutexes and semaphores can effectively address race conditions, with a follow-up session planned to discuss semaphores.

Q & A

  • What is the primary issue discussed in the video script?

    -The primary issue discussed in the video script is the race condition in multi-threaded programming, where two or more threads access a shared variable simultaneously and lead to unpredictable results.

  • What was the expected outcome if the shared variable was incremented correctly by two threads one million times each?

    -The expected outcome would be that the final value of the shared variable should be two million, as each thread increments the value one million times.

  • What is the actual result observed when the race condition occurs?

    -When the race condition occurs, the final value of the shared variable is not consistent and varies with each execution, deviating from the expected two million.

  • How is a mutex used to solve the race condition problem?

    -A mutex (mutual exclusion) is used to ensure that only one thread can access a shared resource at a time. It is used to lock and unlock access to the shared variable, preventing simultaneous access and thus avoiding the race condition.

  • What type is used to declare a mutex variable in the script?

    -The type used to declare a mutex variable in the script is pthread_mutex_t; it is a data type provided by the pthread library, not a function.

  • What function is used to initialize a mutex variable?

    -The function used to initialize a mutex variable is pthread_mutex_init, which takes the address of the mutex variable and an optional attributes argument.

  • What are the two functions used to control access to the shared variable using a mutex?

    -The two functions used to control access to the shared variable using a mutex are pthread_mutex_lock and pthread_mutex_unlock.

  • What happens when a thread calls pthread_mutex_lock?

    -When a thread calls pthread_mutex_lock, it attempts to acquire the lock on the mutex. If the lock is already held by another thread, the calling thread is blocked until the lock becomes available, as illustrated in the sketch at the end of this Q&A section.

  • How does using a mutex ensure the correct final value of the shared variable?

    -Using a mutex ensures that only one thread can update the shared variable at a time, preventing simultaneous updates that could lead to incorrect values. This sequential execution guarantees that the final value will be the expected two million.

  • What is the next topic the speaker mentions will be discussed in the next session?

    -The speaker mentions that the next session will cover semaphores, another synchronization mechanism that addresses similar issues to mutexes.
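
As a concrete illustration of the blocking behaviour described above, here is a minimal sketch (the thread roles, names, and delays are assumptions for demonstration, not code from the video). Thread A takes the mutex and holds it for two seconds; thread B's call to pthread_mutex_lock cannot return until A unlocks. For brevity the mutex is statically initialized with PTHREAD_MUTEX_INITIALIZER instead of pthread_mutex_init.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Statically initialized mutex (the video uses pthread_mutex_init instead). */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *holder(void *arg)           /* thread A: holds the lock for a while */
{
    pthread_mutex_lock(&mutex);
    printf("A: acquired the lock, working...\n");
    sleep(2);                            /* simulate work in the critical section */
    pthread_mutex_unlock(&mutex);
    printf("A: released the lock\n");
    return NULL;
}

static void *waiter(void *arg)           /* thread B: blocks until A unlocks */
{
    sleep(1);                            /* let A lock the mutex first */
    printf("B: calling pthread_mutex_lock...\n");
    pthread_mutex_lock(&mutex);          /* blocks here while A holds the mutex */
    printf("B: acquired the lock\n");
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, holder, NULL);
    pthread_create(&b, NULL, waiter, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

B can only print "acquired the lock" after A has unlocked the mutex, which is exactly the blocking behaviour pthread_mutex_lock provides.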

Outlines

00:00

🔒 Introduction to Mutex for Solving Race Conditions

This paragraph introduces the concept of race conditions in concurrent programming, where two or more threads access shared data simultaneously, leading to unpredictable results. The speaker demonstrates the issue using a code example where two threads are incrementing a shared variable one million times each, expecting a total of two million but getting inconsistent results due to race conditions. To address this, the speaker suggests using a mutex (mutual exclusion) to ensure that only one thread can access the shared resource at a time. The mutex is declared globally and initialized within the main function using specific functions. The speaker explains the process of locking and unlocking the mutex around the critical section of code to prevent race conditions.
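
For reference, a minimal sketch of the kind of unsynchronized program this paragraph describes (the variable and function names are assumptions, not the video's exact code):

```c
#include <pthread.h>
#include <stdio.h>

long shared = 0;                          /* shared variable incremented by both threads */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        shared++;                         /* unsynchronized read-modify-write: the race */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final value of shared = %ld (expected 2000000)\n", shared);
    return 0;
}
```

Run it a few times (compiled without optimization) and the printed total typically changes from run to run and falls short of 2,000,000.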

05:01

🛠️ Implementing Mutex to Prevent Race Conditions

In this paragraph, the speaker continues the discussion on mutexes, explaining how to apply them in a multi-threaded environment to prevent race conditions. The speaker provides a step-by-step guide on using mutex functions such as 'pthread_mutex_lock' and 'pthread_mutex_unlock' to lock and unlock the mutex, respectively, ensuring that threads access the shared variable in a mutually exclusive manner. The speaker also discusses the importance of the order of operations: locking the mutex before updating the shared variable and unlocking it afterward. The paragraph concludes with the speaker executing the modified code with mutex implementation, which successfully produces the expected result of two million, demonstrating that mutexes effectively solve the race condition issue.
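
Putting the pieces from this paragraph together, a hedged sketch of the mutex-protected version looks like this (again with assumed names; the per-thread message simply mirrors the output described in the video):

```c
#include <pthread.h>
#include <stdio.h>

long shared = 0;                         /* shared variable, as in the previous sketch */
pthread_mutex_t mutex;                   /* global mutex, as described in the video    */

void *increment(void *arg)
{
    int id = *(int *)arg;                /* thread number, used only in the message */
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&mutex);      /* lock before updating the shared variable */
        shared++;
        pthread_mutex_unlock(&mutex);    /* unlock afterwards so the other thread can proceed */
    }
    pthread_mutex_lock(&mutex);
    printf("thread %d finished, shared is now %ld\n", id, shared);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_mutex_init(&mutex, NULL);    /* NULL = default attributes */
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, increment, &id0);
    pthread_create(&t1, NULL, increment, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("final value of shared = %ld\n", shared);
    pthread_mutex_destroy(&mutex);       /* cleanup (not shown in the video) */
    return 0;
}
```

Compile and run roughly as in the video, e.g. `gcc mutex_demo.c -lpthread` followed by `./a.out` (the source file name is an assumption); every run ends with the shared variable at 2,000,000, regardless of which thread finishes first.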

Keywords

💡Race Condition

A race condition is a type of software bug that occurs when a system's behavior is dependent on the sequence or timing of uncontrollable events. It is a critical issue in concurrent computing where multiple processes or threads access shared data and they try to change it at the same time. In the script, the race condition is demonstrated through a shared variable being incremented by two threads, leading to unexpected results instead of the expected 2 million final value due to improper context switching.
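
To make the mechanism concrete: a single `shared++` is really three machine-level steps, so a context switch in the middle can lose an update. A hedged sketch of what the statement expands to (the helper name is hypothetical):

```c
extern long shared;          /* the shared counter from the example */

void unsafe_increment(void)  /* hypothetical helper showing the three steps */
{
    long tmp = shared;       /* 1. read the current value   */
    tmp = tmp + 1;           /* 2. add one                  */
    shared = tmp;            /* 3. write the result back    */
    /* If both threads execute step 1 before either reaches step 3, they
       write back the same value and one of the two increments is lost. */
}
```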

💡Shared Variable

A shared variable is a variable that can be accessed by multiple threads in a concurrent system. It is a common source of race conditions when not properly managed. In the script, the shared variable is the central point of contention between two threads, where each thread attempts to increment its value one million times, but due to race conditions, the final result varies with each execution.

💡Thread

A thread is the smallest unit of processing that can be scheduled by an operating system. It allows for concurrent execution of code, which can lead to increased efficiency but also introduces complexity in managing shared resources. The script describes two threads that are incrementing a shared variable, which is a typical scenario where race conditions can occur.

💡Mutex

A mutex, short for 'mutual exclusion', is a synchronization primitive used to protect code sections from being executed by multiple threads simultaneously. It is a key mechanism for avoiding race conditions. In the script, a mutex is introduced to ensure that only one thread can access and modify the shared variable at a time, thus preventing the race condition.

💡Context Switching

Context switching is the process where a computer's operating system switches the CPU's attention from one process or thread to another. Inappropriate context switching can lead to race conditions, as demonstrated in the script where the shared variable's value is not consistently 2 million due to the threads being interrupted and resumed at unpredictable times.

💡pthread_mutex_t

pthread_mutex_t is the data type used in the POSIX thread library to represent a mutex. It is declared in the script to create a global mutex variable that synchronizes access to the shared variable, preventing multiple threads from modifying it at the same time.
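
A declaration sketch matching this description; the variable name mutex is the one used in the video, and the statically initialized alternative shown in the comment is an addition not covered there:

```c
#include <pthread.h>

pthread_mutex_t mutex;   /* global mutex object, visible to all threads */

/* Alternative: static initialization, so no pthread_mutex_init call is needed. */
/* pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;                           */
```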

💡pthread_mutex_init

pthread_mutex_init is the function used to initialize a mutex. In the script, it is called with the address of the mutex variable and NULL for the attributes argument, setting up the mutex with default attributes before use, which is essential for the mutex to correctly manage access to the shared resource.
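
A minimal usage sketch of the initialization call (NULL selects the default attributes; the matching pthread_mutex_destroy is an addition not mentioned in the video):

```c
#include <pthread.h>

pthread_mutex_t mutex;                   /* declared globally, as in the video */

int main(void)
{
    pthread_mutex_init(&mutex, NULL);    /* second argument NULL = default attributes */

    /* ... create threads that lock/unlock the mutex around the shared variable ... */

    pthread_mutex_destroy(&mutex);       /* release mutex resources when done */
    return 0;
}
```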

💡pthread_mutex_lock

pthread_mutex_lock is the function a thread must call before accessing a shared resource, in order to acquire the mutex lock. In the script, it is used to ensure that while one thread is incrementing the shared variable, no other thread can access it until the lock is released, thus avoiding the race condition.

💡pthread_mutex_unlock

pthread_mutex_unlock is the function that releases the mutex lock, allowing other threads to acquire the lock and access the shared resource. In the script, it is called after the shared variable has been updated so that the mutex is not held indefinitely and other threads are not blocked unnecessarily.
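
Taken together, the two calls bracket the critical section. A minimal sketch of the bracketed increment loop the script describes (the variable and function names are assumptions):

```c
#include <pthread.h>

extern long shared;                      /* shared counter (name assumed)          */
extern pthread_mutex_t mutex;            /* global mutex from the example          */

void *increment(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&mutex);      /* blocks while another thread holds it   */
        shared++;                        /* critical section: one thread at a time */
        pthread_mutex_unlock(&mutex);    /* release promptly so waiters can proceed */
    }
    return NULL;
}
```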

💡Sequential Execution

Sequential execution refers to the process where tasks are performed one after another in a specific order. In the context of the script, sequential execution of threads on shared resources is achieved by using a mutex, which ensures that only one thread can execute the critical section of code at a time, thus preventing race conditions.

💡Semaphore

A semaphore is a signaling mechanism used for controlling access to common resources by multiple processes in a concurrent system. Although not elaborated on in the script, it is mentioned as an alternative to mutex for dealing with race conditions. Semaphores can be used to manage access to shared resources, similar to mutexes, but offer more flexibility in certain scenarios.

Highlights

Demonstration of the issue of race condition in multi-threaded programming.

Explanation of shared variable and its manipulation by two threads to increment its value one million times each.

Observation of inconsistent final values due to inappropriate context switching, leading to race conditions.

Introduction of mutex as a solution to handle race conditions in shared resources.

Declaration of a global mutex variable to provide mutually exclusive access.

Initialization of the mutex variable using pthread_mutex_init function.

Use of pthread_mutex_lock to acquire the mutex lock before accessing shared data.

Implementation of mutex locking within the thread function to prevent simultaneous access to the shared variable.

Unlocking the mutex with pthread_mutex_unlock after updating the shared variable, so that other threads can acquire it.

Compilation and execution of the modified code with the pthread library.

Observation of consistent final values of the shared variable after implementing mutex locks.

Demonstration of sequential execution on shared resources due to mutex locking.

Explanation of how a mutex prevents race conditions by making one thread complete its critical section before another can enter.

Execution of the program multiple times to show the fixed final value of the shared variable.

Introduction of the next session's topic: dealing with race conditions using semaphores.

Conclusion of the session with a summary of how mutex and semaphores can address race conditions.

Thank you message and closing of the session with music.

Transcripts

play00:02

foreign

play00:05

[Music]

play00:33

I have demonstrated the issue or problem

play00:37

of race condition

play00:39

so in this session I am going to

play00:41

demonstrate you how to deal with the

play00:44

race condition kind of situation

play00:47

so let's look at the previous code which

play00:50

we have already seen

play00:53

in this code actually we have

play00:56

defined one shared variable named as

play00:58

shared

play01:00

and then I have created two threads of

play01:03

the same process

play01:05

and these two threads were trying to

play01:08

increment the value of shared variable

play01:11

and it was trying to increment the value

play01:14

of the shared variable one million times

play01:17

okay

play01:19

and we got to know that because of the

play01:22

context switching, inappropriate context

play01:24

switching there was a problem of race

play01:26

condition

play01:27

so in this situation we were expecting

play01:30

if there are two threads and both are

play01:32

incrementing the value of shared

play01:33

variable one million times if both will

play01:36

execute successfully the final value of

play01:38

this shared variable need to be two

play01:41

million right

play01:42

but if we execute this code as we have

play01:45

already discussed in the previous

play01:48

session

play01:50

so let me compile this and let's say we

play01:52

executed

play01:54

it is giving not 2 million it is giving

play01:56

something else so if I again

play01:58

compile it the final value of the shared

play02:01

variable it is displaying different

play02:04

different values right

play02:06

so it is displaying different value then

play02:08

again different value then again

play02:10

different value and so on so that means

play02:12

there is a race condition the final

play02:14

value which we are expecting here it

play02:16

is not same every time right

play02:19

so to deal with this situation what we

play02:22

can do is

play02:24

let's modify the previous code

play02:27

okay

play02:28

so we can

play02:30

deal with such kind of situation with

play02:32

the help of either mutex or semaphore

play02:36

so till now I haven't elaborated on

play02:39

semaphores so let's deal with the race

play02:41

condition with the help of mutex

play02:43

so I need to declare a global kind of

play02:46

mutex variable so

play02:49

we know that how we can declare

play02:53

the mutex kind of variable you can use P

play02:55

thread

play02:57

underscore mutex

play02:59

underscore

play03:01

T type so P thread

play03:05

underscore mutex underscore T and let's

play03:07

say I am giving this name as mutex okay

play03:12

so this is the mutex

play03:13

which has been defined as the global

play03:16

mutex variable

play03:18

now this mutex let's say inside the main

play03:20

function

play03:21

I would initialize so let's say I

play03:24

initialize the mutex so P thread

play03:27

underscore mutex

play03:30

underscore init function we can use to

play03:32

initialize the mutex

play03:34

and this function takes two arguments

play03:36

one is the pointer of mutex types and

play03:39

another

play03:39

some default argument if you want to set

play03:42

so the very first thing is the address

play03:44

of mutex variable which we want to

play03:46

initialize

play03:47

and the second variable we can set as

play03:49

null

play03:51

okay

play03:51

so it will initialize the mutex variable

play03:53

with default

play03:55

attributes right

play03:57

now how we are going to use this

play04:00

actually mutex

play04:02

is used to provide mutually exclusive

play04:05

access of shared resources like critical

play04:09

sections so if there are

play04:11

multiple threads or n number of threads

play04:15

are there they are operating on the same

play04:16

shared data

play04:18

before accessing the shared data we can

play04:20

call

play04:21

mutex

play04:23

underscore you can say lock okay so P

play04:26

thread underscore mutex underscore lock

play04:28

is one of the function which every

play04:30

thread need to call before acquiring or

play04:33

before operating on the shared resource

play04:35

here the shared resources actually the

play04:37

shared variable

play04:38

so in this Loop which I have written in

play04:41

this thread underscore function this

play04:44

Loop is actually incrementing the value

play04:46

of shared variable

play04:47

so let us look at here we need to apply

play04:52

P thread underscore mutex underscore lock

play04:56

before

play04:58

updating the shared variable

play05:00

the mutex need to be acquired by every

play05:03

function or you can say every thread

play05:05

right

play05:06

so here we need to pass the address of

play05:08

mutex variable and the name of the mutex

play05:10

variable is this mutex only

play05:13

right so I am passing ampersand mutex

play05:16

and once we finish the update of the

play05:19

shared variable we can unlock the mutex so

play05:22

P thread underscore mutex

play05:25

underscore unlock

play05:28

okay

play05:29

and then

play05:30

address of mutex variable I have passed

play05:34

okay so I hope you are already familiar

play05:36

with these two functions P thread

play05:38

underscore mutex underscore lock and P

play05:40

thread underscore mutex underscore unlock

play05:44

now once we do this

play05:45

well a function calls this particular

play05:48

when a thread calls this particular

play05:50

function

play05:51

it will first acquire the lock on the

play05:53

mutex so in the same duration when let's

play05:56

say thread one is executing and updating

play05:58

the shared variable

play06:00

one million times if another thread also

play06:03

want to acquire the lock on the same

play06:07

mutex it will be blocked okay

play06:09

so this will solve the problem of

play06:12

race condition

play06:14

because

play06:15

whichever thread will acquire the

play06:19

lock first the second thread will

play06:22

be able to acquire the lock after the

play06:24

execution of first one so because of

play06:27

that sequential execution will be done

play06:29

on the shared resources

play06:30

and it will not lead to any of the

play06:33

race condition so let's execute it

play06:37

now

play06:39

for compiling this we can use GCC and we

play06:43

need to link the library so we can use

play06:44

-lpthread

play06:47

Library

play06:48

it is executed fine no issue and then

play06:51

dot slash a.out to run this

play06:54

particular program when we are running

play06:56

you can check it

play06:57

thread 0 has updated the shared variable till

play07:01

1 million and the thread one updated the

play07:04

shared variable up to 2 million right so

play07:07

if you execute it

play07:11

the sequence the values and everything

play07:13

is fine now in these two runs the thread

play07:17

one will execute it first then thread

play07:19

one thread zero executed first then

play07:21

thread one but in this third time when I

play07:24

have executed thread one executed first

play07:26

then thread zero executed second time

play07:28

but the final value of the shared

play07:31

variable is fixed

play07:32

so in whatsoever

play07:35

the situation is whether thread 0 is

play07:38

executing first or thread one final value

play07:41

of the shared variable will be fixed

play07:42

right so I am executing it multiple

play07:44

times

play07:45

and the final value of shared variable it

play07:47

is 2 million

play07:49

so that means with the help of mutex and

play07:52

semaphores

play07:54

this situation which we consider a race

play07:57

condition can be

play08:00

dealt with right

play08:02

so this is sufficient for this

play08:04

particular session right in next session

play08:06

I will talk about this semaphore variable

play08:08

and the similar kind of situations

play08:10

we will deal with the help of semaphore

play08:12

as well right

play08:16

[Music]

play08:24

[Music]

play08:28

thank you

play08:31

[Music]

play08:33

foreign


Related Tags
Multi-threading, Race Condition, Mutex Locks, Programming, Concurrency, Thread Safety, Code Execution, Shared Resources, Critical Sections, Software Development