Multithreading Models & Hyperthreading

Neso Academy
30 May 2019 · 17:57

Summary

TL;DR: This lecture delves into multi-threading, contrasting single-threaded with multi-threaded processes and highlighting the advantages of the latter. It introduces user and kernel threads, and the critical relationship between them in operating systems. The three primary multi-threading models (many-to-one, one-to-one, and many-to-many) are explored, each with its benefits and limitations. The many-to-one model is efficient but can block the entire process with a single blocking call. The one-to-one model allows for more concurrency but can be costly in terms of performance due to kernel thread creation. The many-to-many model offers flexibility and better utilization of multiprocessor systems. Additionally, the lecture covers hyper-threading, or simultaneous multi-threading, a technique by Intel that allows a single physical core to handle multiple threads, enhancing performance. Practical methods to check for hyper-threading on a system are also provided, offering viewers a deeper understanding of modern computing technology.

Takeaways

  • 🧵 Multi-threading involves the use of multiple threads within a single process to improve performance and resource utilization.
  • 📚 There are two types of threads: user-level threads, managed by the application, and kernel-level threads, directly managed by the operating system.
  • 🔗 The relationship between user and kernel threads can be established in three models: many-to-one, one-to-one, and many-to-many.
  • 🚫 In the many-to-one model, multiple user threads are mapped to a single kernel thread, which can lead to blocking issues and limit parallel execution on multiprocessors.
  • 🔄 The one-to-one model maps each user thread to a unique kernel thread, allowing for better concurrency but can be costly due to the overhead of creating kernel threads.
  • ➡️ The many-to-many model offers flexibility by allowing many user threads to be mapped to a smaller or equal number of kernel threads, improving concurrency and parallel execution.
  • 💡 Hyper-threading, also known as simultaneous multi-threading, is a technology that allows a single physical processor core to handle two or more threads simultaneously, improving performance.
  • 🔧 Hyper-threading can be identified by comparing the number of logical processors with the number of physical cores; more logical processors than physical cores indicates that hyper-threading is enabled.
  • 🛠️ The many-to-many model is often considered the best for establishing the relationship between user and kernel threads, as it combines the advantages of the other models while minimizing their drawbacks.
  • 🔎 To determine if a system supports hyper-threading, one can run the 'wmic cpu get NumberOfCores,NumberOfLogicalProcessors' command in the command prompt on Windows.
  • 🌟 Hyper-threading is a proprietary term by Intel, but the concept of simultaneous multi-threading can be found in various processor technologies to enhance performance.

Q & A

  • What are the two types of threads discussed in the lecture?

    -The two types of threads discussed are user threads and kernel threads. User threads operate at the user level and are managed without direct kernel support, while kernel threads are managed directly by the operating system.

  • What is the definition of a many-to-one threading model?

    -In a many-to-one threading model, many user-level threads are mapped to a single kernel thread. This model allows for efficient thread management in user space but has limitations such as blocking the entire process if a single thread makes a blocking system call.

  • What are the limitations of the many-to-one threading model?

    -The limitations of the many-to-one model include blocking the entire process if one thread makes a blocking system call, and the inability to run multiple threads in parallel on a multiprocessor system, because all user threads share a single kernel thread and only one of them can access the kernel at a time.

  • How does the one-to-one threading model differ from the many-to-one model?

    -The one-to-one threading model maps each user thread to a separate kernel thread, providing more concurrency than the many-to-one model. It allows other threads to continue running even if one thread makes a blocking system call and enables the use of multiprocessor systems.

  • What is the main disadvantage of the one-to-one threading model?

    -The main disadvantage of the one-to-one model is the overhead associated with creating kernel threads. This can be costly and may burden the performance of an application, leading to a restriction on the number of threads supported by the system.

  • Can you explain the many-to-many threading model?

    -The many-to-many threading model multiplexes many user-level threads to a smaller or equal number of kernel threads. It allows developers to create as many user threads as necessary, which can then run in parallel on a multiprocessor system and continue execution even when one thread performs a blocking system call.

  • What is hyper-threading, and how does it relate to multi-threading?

    -Hyper-threading, also known as simultaneous multi-threading, is a technology that allows a single processor core's resources to be virtually divided into multiple logical processors, enabling the execution of multiple threads at the same time. It is a proprietary name given by Intel for this technology.

  • How does hyper-threading improve system performance?

    -Hyper-threading improves system performance by allowing multiple logical processors to execute threads in parallel. This can make efficient use of a processor's resources and increase throughput, as it is almost like having multiple separate processors working together.

  • How can you determine if your system supports hyper-threading?

    -You can determine if your system supports hyper-threading by checking the number of logical processors compared to the number of physical cores. If the number of logical processors is greater than the physical cores, hyper-threading is enabled.

  • What command can be used in Windows to check the number of cores and logical processors?

    -In Windows, you can run 'wmic cpu get NumberOfCores' to check the number of physical cores, and 'wmic cpu get NumberOfCores,NumberOfLogicalProcessors' to list both counts side by side; if the number of logical processors exceeds the number of cores, hyper-threading is enabled (a scripted version of this check is sketched below).
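
The check from the answer above can also be scripted. The following is a minimal sketch, not taken from the lecture, that runs the same WMIC query from Python on a Windows machine where the wmic tool is still available (it is deprecated on recent Windows releases); the property names NumberOfCores and NumberOfLogicalProcessors come from the Win32_Processor WMI class, and the parsing is illustrative only.

```python
# Hedged sketch (not from the lecture): run the WMIC query discussed in the
# Q&A above and compare cores to logical processors. Windows only, and only
# where the deprecated wmic tool is still present.
import subprocess

out = subprocess.run(
    ["wmic", "cpu", "get", "NumberOfCores,NumberOfLogicalProcessors", "/format:list"],
    capture_output=True, text=True, check=True,
).stdout

# /format:list prints lines such as "NumberOfCores=2"; build a small dict from them.
values = dict(token.split("=", 1) for token in out.split() if "=" in token)
cores = int(values["NumberOfCores"])
logical = int(values["NumberOfLogicalProcessors"])

print(f"{cores} physical cores, {logical} logical processors")
print("Hyper-threading enabled" if logical > cores else "No hyper-threading detected")
```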

Outlines

00:00

🧵 Introduction to Multi-threading Models

This paragraph introduces the concept of multi-threading and the types of threads: user-level and kernel threads. It explains that user threads operate at the user level and are managed by developers, while kernel threads are directly managed by the operating system. The paragraph sets the stage for a deeper dive into multi-threading models, which are essentially the relationships between user and kernel threads. It outlines three common models: many-to-one, one-to-one, and many-to-many, and promises a detailed exploration of each model's characteristics, benefits, and limitations.

05:02

🔗 Many-to-One Model and Its Limitations

The many-to-one model is explored in this paragraph, where multiple user threads are mapped to a single kernel thread. This model is efficient because thread management is done at the user level, but it has significant limitations. If one thread makes a blocking system call, the entire process is blocked, and the model does not support parallel execution on a multiprocessor system, because all user threads are funneled through a single kernel thread that can run on only one processor at a time. These constraints make it less suitable for systems requiring high concurrency and parallelism.
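
For readers who want to see the many-to-one behaviour in code, here is a minimal sketch, not from the lecture, using Python's third-party greenlet package (pip install greenlet). Greenlets are user-level threads scheduled by a library entirely in user space and all share one kernel thread, so a blocking call made by any of them stalls the rest, which is exactly the limitation described above.

```python
# Illustration only: greenlets are user-level threads multiplexed onto a
# single kernel thread, which mirrors the many-to-one model. A blocking
# call in one greenlet blocks every other greenlet as well.
import threading
import time
from greenlet import greenlet  # third-party package: pip install greenlet

def task_a():
    print("A runs on kernel thread", threading.get_ident())
    time.sleep(1)   # blocking call: the one shared kernel thread is stuck,
                    # so task_b cannot run until the sleep finishes
    gb.switch()     # cooperatively hand control to B only after blocking

def task_b():
    print("B runs on kernel thread", threading.get_ident())

ga = greenlet(task_a)
gb = greenlet(task_b)
print("main runs on kernel thread", threading.get_ident())
ga.switch()         # all three lines print the SAME kernel-thread id
```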

10:04

🔄 Advantages and Disadvantages of the One-to-One Model

The one-to-one model maps each user thread to a dedicated kernel thread, providing more concurrency than the many-to-one model. It allows for continued execution of other threads even if one makes a blocking system call and supports parallel execution on multiprocessor systems. However, creating a user thread requires the creation of a corresponding kernel thread, which can be costly and impact application performance. Most implementations restrict the number of threads due to the overhead of creating kernel threads, which limits the scalability of this model.
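
As a rough code counterpart, here is a minimal sketch, not from the lecture, using Python's standard threading module. On mainstream platforms each threading.Thread is backed by its own native kernel thread, a one-to-one style mapping, so a thread stuck in a blocking call does not stop its siblings (CPython's GIL limits CPU-bound parallelism, but blocking calls such as sleep and I/O release it).

```python
# Illustration only: each Thread object is backed by its own kernel thread
# (one-to-one), so one thread blocking does not block the whole process.
import threading
import time

def blocker():
    print("blocker: entering a blocking sleep")
    time.sleep(2)            # only this kernel thread is blocked
    print("blocker: done")

def worker(n):
    print(f"worker {n}: still making progress on its own kernel thread")

threads = [threading.Thread(target=blocker)]
threads += [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()                # each start() creates a new kernel thread
for t in threads:
    t.join()
```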

15:07

🤹 Many-to-Many Model: The Optimal Multi-threading Approach

The many-to-many model is presented as an improvement over the previous models, allowing for a flexible mapping of many user threads to a smaller or equal number of kernel threads. This model supports the creation of as many user threads as needed, with corresponding kernel threads capable of running in parallel on a multiprocessor system. It addresses the limitations of the many-to-one and one-to-one models by preventing the entire process from being blocked by a single thread's blocking system call and enabling better utilization of multiprocessor systems. This model is widely implemented and considered optimal for multi-threading environments.
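
Standard Python has no true many-to-many user-thread scheduler, but the multiplexing idea can be sketched, as a loose analogy only, with a thread pool: many independent units of work are mapped onto a smaller, fixed number of kernel threads, and a blocked unit occupies only one of them.

```python
# Loose analogy only (not a real M:N scheduler): twelve units of work are
# multiplexed onto just three kernel threads, and a blocking call in one
# unit ties up only the pool thread that is running it.
from concurrent.futures import ThreadPoolExecutor
import threading
import time

def job(i):
    time.sleep(0.1)          # blocking call: occupies one pool thread only
    return f"job {i} ran on kernel thread {threading.get_ident()}"

with ThreadPoolExecutor(max_workers=3) as pool:   # 3 kernel threads
    for line in pool.map(job, range(12)):         # 12 units of work
        print(line)
```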

🚀 Hyper-Threading: Simultaneous Multi-Threading in Practice

This paragraph delves into hyper-threading, also known as simultaneous multi-threading, a technology that allows a single processor core to handle multiple threads by virtually dividing it into multiple logical processors. Hyper-threading enhances performance by enabling the execution of multiple threads in parallel, akin to having multiple processors. The speaker demonstrates how to check for hyper-threading support using the Windows Management Instrumentation command-line tool (WMIC), revealing that their Intel Core i3-2370M processor supports hyper-threading, with two physical cores and four logical processors.
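
A portable version of the check demonstrated in the lecture can be sketched as follows; this is an illustration under stated assumptions rather than the lecturer's method. os.cpu_count() from the standard library reports logical processors, and the third-party psutil package (pip install psutil) is assumed here for the physical-core count.

```python
# Minimal sketch: compare logical processors to physical cores, the same
# comparison the lecture performs with WMIC. psutil is a third-party
# package assumed to be installed.
import os
import psutil

logical = os.cpu_count()                     # logical processors
physical = psutil.cpu_count(logical=False)   # physical cores

print(f"physical cores     : {physical}")
print(f"logical processors : {logical}")
if physical and logical and logical > physical:
    print("More logical processors than cores: hyper-threading/SMT is enabled.")
else:
    print("Logical count equals physical cores: no hyper-threading detected.")
```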

Keywords

💡Multi-threading

Multi-threading refers to the concurrent execution of multiple threads within a single process, potentially improving the utilization of CPU resources and enhancing the performance of applications. In the video, multi-threading is the central theme, with an in-depth discussion of how it allows for concurrent processing and of the different models that define the relationship between user threads and kernel threads.

💡User Threads

User threads are the threads that operate at the user level and are managed by the application or user-level libraries. They are not directly managed by the operating system's kernel. In the context of the video, user threads are contrasted with kernel threads, and the script discusses the various models that determine how user threads interact with kernel threads in a multi-threaded environment.

💡Kernel Threads

Kernel threads are threads that are managed directly by the operating system's kernel. They have a direct association with the system's resources and are more privileged compared to user threads. The video explains that kernel threads are a critical component in the multi-threading models, as they form the basis for the relationship with user threads.

💡Many-to-One Model

The many-to-one model is one of the multi-threading models where multiple user threads are mapped to a single kernel thread. This model is efficient in terms of thread management at the user level but has limitations, such as the entire process blocking if a single thread makes a blocking system call, as mentioned in the script.

💡One-to-One Model

The one-to-one model is another multi-threading model where each user thread is mapped to a dedicated kernel thread. This model allows for more concurrency and prevents blocking issues that occur in the many-to-one model. However, it can be costly in terms of performance due to the overhead of creating and managing kernel threads, as discussed in the video.

💡Many-to-Many Model

The many-to-many model is a flexible multi-threading model where multiple user threads can be mapped to a varying number of kernel threads, which can be less than or equal to the number of user threads. This model offers advantages such as the ability to run threads in parallel on a multiprocessor system and not blocking the entire process during a blocking system call, as highlighted in the script.

💡Hyper-Threading

Hyper-Threading, also known as simultaneous multi-threading, is a technology developed by Intel that allows a single physical processor core to handle two threads at the same time, effectively doubling the number of logical processors. The video explains how this technology can enhance performance by allowing more tasks to be executed in parallel.

💡Logical Processors

Logical processors are the virtual processors created by the hyper-threading technology. They allow for more threads to run concurrently than the number of physical cores present in the CPU. In the video, the concept of logical processors is used to illustrate how hyper-threading can provide additional processing capabilities.

💡Blocking System Call

A blocking system call is a type of system call that suspends the execution of a thread until a requested service is completed. In the context of the video, it is explained that making a blocking system call can have different impacts on the process depending on the multi-threading model in use, potentially causing the entire process to block in some models.

💡Concurrency

Concurrency in computing refers to the ability of a system to handle multiple tasks or threads at the same time. The video discusses how different multi-threading models, such as the one-to-one model, provide varying levels of concurrency, allowing for more efficient use of system resources and improved application performance.

Highlights

Introduction to multi-threading models and hyper-threading.

Difference between user-level threads and kernel-level threads.

User-level threads operate without direct kernel support.

Kernel threads are directly managed by the operating system.

Necessity of a relationship between user and kernel threads for system functionality.

Three common multi-threading models: many-to-one, one-to-one, and many-to-many.

Many-to-one model maps many user threads to one kernel thread.

Efficiency of thread management in user space in the many-to-one model.

Limitations of the many-to-one model, such as process blocking on a blocking system call.

Inability to utilize multiprocessor systems fully in the many-to-one model.

One-to-one model maps each user thread to a single kernel thread.

Increased concurrency in the one-to-one model, allowing for parallel execution on multiprocessors.

Disadvantages of the one-to-one model include the overhead of creating kernel threads.

Many-to-many model allows for multiplexing of user threads to a smaller or equal number of kernel threads.

Advantages of the many-to-many model include better utilization of multiprocessor systems and handling of blocking system calls.

Hyper-threading, also known as simultaneous multi-threading, allows for multiple logical processors from a single physical core.

Hyper-threading enables processors to execute two threads simultaneously, enhancing performance.

Practical method to check for hyper-threading support using the Windows Management Instrumentation command-line tool (WMIC).

Demonstration of checking for hyper-threading on an Intel Core i3 processor.

Conclusion summarizing the importance of understanding multi-threading models and hyper-threading in modern computing.

Transcripts

play00:00

In the previous lecture we started studying threads: we saw the difference between single-threaded and multi-threaded processes, and we also saw the benefits of multithreading. In this lecture we will be studying multithreading models and hyper-threading. Before we go into the models, let us understand the types of threads we have. Basically there are two types of threads: user threads and kernel threads. User threads are supported above the kernel and are managed without kernel support; they operate at the user level and are created by the users or the developers. Kernel threads are supported and managed directly by the operating system, not by the user. When we started studying operating systems, we saw that users are constantly interacting with the system, and the operating system is what allows this to happen. Since we have these two types of threads, for them to be able to function together there must exist a relationship between the user threads and the kernel threads; ultimately, for the system to function, that relationship has to be established. How we can establish it is exactly what we study in multithreading models: a multithreading model is nothing but the type of relationship that can exist between the user threads and the kernel threads. There are three common ways of establishing this relationship: the many-to-one model, the one-to-one model, and the many-to-many model. We will look at each of these models one by one and see how they function, what their limitations are, and which among them is the best.

play02:13

Coming to the first model, we have the many-to-one model. From the name itself we can understand that a many-to-one relationship is established between the user threads and the kernel threads. In the diagram, the threads on top represent the user threads and the circle below represents the kernel thread: many user threads are associated with, or access, one kernel thread. So this model maps many user-level threads to one kernel thread, and thread management is done by the thread library in user space, which makes it efficient, because we are able to manage the threads at the user level rather than at the kernel level. Now let us see the limitations of this model. First, the entire process will block if a thread makes a blocking system call. Say all of these user threads belong to one single process doing a certain task, and all of them are mapped to this one kernel thread in the operating system; if one of the threads makes a blocking system call, this kernel thread will be blocked, and because all of the user threads are mapped to that same kernel thread, all of them will be blocked. The second limitation is that, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on a multiprocessor. Even if we have a multiprocessor system, we are not going to be able to make use of it: one kernel thread will run on only one of the processors at a time, and since all the user threads are mapped to that single kernel thread, this entire thing will run on only one of the processors even though we have many. Those are the limitations of the many-to-one model. Now let us go to the next model and see how it is better than this one, and whether it has any limitations as well.

play05:05

The second model we have is the one-to-one model. Here again, from the name itself we can understand that one user thread is mapped to exactly one kernel thread, unlike the many-to-one model: this user thread is mapped to this kernel thread, that one to that one, and so on. In this model, each user thread is mapped to one kernel thread, and it provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. Even if one user thread makes a blocking system call, the entire process will not be affected: say these four threads belong to one process and one of the user threads makes a blocking system call; only the kernel thread associated with that user thread will be affected and only that part will be blocked, while the other three threads, associated with their own kernel threads, can still run. That is one advantage compared to the many-to-one model. It also allows multiple threads to run in parallel on a multiprocessor. In the previous model we saw that we cannot make use of a multiprocessor system, but here we can, because each user thread is associated with its own kernel thread, so each pair can run on one of the processors we have: suppose we have four processors in our system, then each of these threads can run on one of the four processors. So far we have seen that the one-to-one model has some advantages compared to the many-to-one model; now let us see whether it has disadvantages, and what they are. Creating a user thread requires creating the corresponding kernel thread, and that may become costly: the overhead of creating kernel threads can burden the performance of an application, so most implementations of this model restrict the number of threads supported by the system. In any one system there is a limit to how many threads can actually run at a time: if your processor has four cores, only four threads can execute simultaneously, one on each core, so an application cannot simply keep creating kernel threads and may have to restrict the number of threads it supports. That is another disadvantage of the one-to-one model. Now let us go to the next model and see if it is better than the two we have discussed till now.

play08:36

Here we come to the last model, which is the many-to-many model. Again, from the name we can understand that many user threads are associated with, or mapped to, many kernel threads, and from the diagram it is very clear: here we have the user threads, here we have the kernel threads, and the user threads are mapped onto the kernel threads in a many-to-many relationship. This model multiplexes many user-level threads onto a smaller or equal number of kernel threads: here we have four user threads, and they may be mapped to four or fewer kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine, because, as I told you, there is a limit to the number of threads a system can support. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor; we have already talked about how threads function in a multiprocessor system, and we can clearly see that here they can run in parallel because we have multiple kernel threads. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution, so the entire process will not be blocked and the remaining threads can continue. These were exactly the limitations we had in the earlier models: in the many-to-one model, when a blocking system call was performed the entire process was blocked, and that problem is solved in the many-to-many model. It was also solved in the one-to-one model, but in the one-to-one model we faced other problems, such as each user thread being mapped to only one kernel thread, whereas here there is a many-to-many relationship. In that way the many-to-many model is better than the one-to-one model, and it is far better than the many-to-one model. This is the model implemented in most systems, and it is the best model we can have in a multithreading system to establish the relationship between the user threads and kernel threads.

play11:31

Now we will discuss another topic, hyper-threading, which is also known as simultaneous multithreading. We have been studying multithreading: we saw how it is much better than a single-threaded process, we saw its benefits, and we also saw the models in which the relationship between user threads and kernel threads is established in a multithreading system. What we mean by simultaneous multithreading is that we have more than one multithreading going on in the same system: multithreading means multiple threads at the same time, and simultaneous multithreading means many of these multithreadings going on at the same time. Hyper-threading is the same thing; it is just the proprietary name given by Intel, so the Intel company calls it hyper-threading. Let us see the advantage of hyper-threading, or simultaneous multithreading, and how it actually works. In a hyper-threaded system, a processor core's resources become multiple logical processors for performance. We have a processor in our system where all the processing happens, and in our processors we have different cores: we have heard of single-core systems, dual-core systems, quad-core systems, and so on. If it is a single-core system, there is only one core where only one unit of processing can take place at a time, which means only one thread can run at a time; if we have a dual-core processor, the processor has two cores where two units of processing can happen at the same time, so it supports two threads at the same time; in the same way, quad-core means four cores, so four threads can be supported at the same time. Physically, depending on the number of cores you have, that is how many threads the processor supports at one time. In hyper-threading, or simultaneous multithreading, the physical cores of your processor are virtually, or logically, divided into multiple processors. If you have one core, it may be logically divided into two: physically it is only one, but logically it is two, so two threads may be supported at the same time. Similarly, if you have a dual-core system, those two cores may be logically divided into two each, so you will have a total of four logical cores where four different threads can run at the same time. That is what we mean by simultaneous multithreading, or hyper-threading: it enables the processor to execute two threads, or two sets of instructions, at the same time, and since hyper-threading allows two streams to be executed in parallel, it is almost like having two separate processors working together. Now let us see how we can find out whether the system you are using supports simultaneous multithreading, or hyper-threading, and whether hyper-threading is running on it, because it is always interesting to practically see what is happening instead of just learning the theory. So let us try to find out how this works.

play14:39

Here I just want to show you the properties of the system that I am using right now: the processor is an Intel Core i3, and it belongs to the 2370M model. In order to find out whether this system is hyper-threaded, you have to open your command prompt and type wmic. WMIC stands for Windows Management Instrumentation Command-line, a management infrastructure that provides you access to and control over a system. If you type wmic and press Enter, you enter the WMIC command-line interface. Here there is a command that will help us know how many cores we have in the processor: cpu get NumberOfCores. If you type this command, it will show you the number of cores in your processor; we know there are different types of processors with different numbers of cores, and as I told you, having multiple cores means it is a multiprocessor system. Here you see that I have two cores in my system, which means two threads can be supported at the same time; it is just like having two processors. Now we will find out how many logical processors I have. If the number of logical processors is equal to the number of physical cores, there is no hyper-threading happening in my system, because I only have as many logical cores as physical cores; but if I have more logical cores than the physical cores we saw here, then hyper-threading is happening in the system. To check that, we extend the command: cpu get NumberOfCores,NumberOfLogicalProcessors. If I press Enter, I see that the number of cores is 2, which we saw before, and the number of logical processors is 4. So physically we had only two cores, but each of them is divided into two, and in total I have four logical processors. Here we clearly see that there is hyper-threading: I have four logical cores, which means I can run four threads at the same time in my system, so my system is clearly hyper-threaded. That is how we can find out how many cores we have, and also whether our system is hyper-threaded or not. With that, I hope you understood the concept of hyper-threading. This topic may not be present in your syllabus or in your textbooks, but it is something good to know, because this is the kind of technology we are using today. So we have seen the models of multithreading, and we also saw hyper-threading and how it works in our system. Thank you for watching, and see you in the next one.


Related Tags
Multi-threading, Hyper-threading, Operating Systems, Thread Management, Concurrency, User Threads, Kernel Threads, Many-to-One, One-to-One, Many-to-Many, Simultaneous Multi-threading, Intel Technology