CONCURRENCY IS NOT WHAT YOU THINK

Core Dumped
5 Apr 2024 · 16:59

Summary

TL;DR: This video explores concurrency in computing, the technique that lets multiple processes appear to run simultaneously on a single CPU. It traces the evolution from mainframes to personal computers, explaining how early operating systems enabled multitasking. It then looks at how the CPU executes instructions and how processes are managed through queues and scheduling, contrasts cooperative and preemptive scheduling, and highlights the hardware support needed for effective multitasking. The video concludes by discussing how multi-core systems add true parallelism and previews more advanced topics such as scheduling, threads, and race conditions.

Takeaways

  • πŸ’‘ Multitasking has been possible since the era of single-core CPUs, challenging the misconception that it's solely a feature of multicore systems.
  • πŸ›οΈ Early mainframe computers were massive, expensive, and limited in number, leading to the development of multitasking to maximize their usage.
  • πŸ”„ The process of running a program in early computers involved multiple steps, including loading compilers and assemblers, which was time-consuming and inefficient.
  • πŸ‘₯ The advent of time-sharing operating systems like Multics allowed multiple users to connect to a single computer, improving resource utilization.
  • πŸ”„ The concept of concurrency was pivotal in allowing the execution of multiple processes by interleaving them, giving the illusion of simultaneous execution.
  • πŸ–₯️ Personal computers initially could only run one program at a time, but with the influence of systems like Unix, they evolved to support multitasking.
  • πŸ€– CPUs handle instructions in a cycle of fetch, decode, and execute, with jump instructions allowing for non-linear execution paths.
  • πŸ•’ Operating systems manage process execution through a scheduler, which assigns CPU time to processes in a queue, ensuring fairness in resource allocation.
  • ⏰ Preemptive scheduling, enabled by hardware timers, prevents any single process from monopolizing CPU resources, enhancing system security and responsiveness.
  • πŸ”© Multi-core systems enhance processing power by allowing true parallelism, but concurrency remains essential for efficient resource management when there are more processes than cores.

Q & A

  • What is the main focus of the video script?

    -The main focus of the video script is to explain the fundamentals of concurrency, a technique that allows computers to run multiple processes simultaneously, even with a single CPU.

  • How did early computers manage to run multiple programs despite having a single processor?

    -Early computers managed to run multiple programs by breaking them into smaller segments and interleaving those segments, so the CPU executed them in alternating order fast enough to create the illusion of simultaneous execution.

  • What is the significance of the invention of transistors in the context of the script?

    -The invention of transistors allowed mainframes to become smaller and more affordable, which eventually led to the development of personal computers and the need for multitasking capabilities.

  • Why were early computers designed to be used by one user at a time?

    -Early computers were designed for single-user use due to their high cost and limited availability. They were massive and expensive, making it impractical for multiple users to access them simultaneously.

  • What is the role of an operating system in managing multiple processes on a single CPU?

    -An operating system manages multiple processes by using a scheduler to allocate CPU time to each process, ensuring that each process gets a fair share of processing time and that the CPU is never idle.

  • What is the difference between cooperative scheduling and preemptive scheduling as explained in the script?

    -Cooperative scheduling relies on processes voluntarily giving up control of the CPU, whereas preemptive scheduling uses hardware timers to forcibly take control of the CPU from processes that do not relinquish it in a timely manner.

  • Why was the development of multi-core systems significant for multitasking?

    -Multi-core systems allow for true parallelism and simultaneous execution of multiple processes, as each core can handle a separate process, reducing the need for context switching and improving overall performance.

  • How does the CPU handle instructions as described in the script?

    -The CPU handles instructions through a cycle of fetch, decode, and execute. It uses special registers like the instruction register and the address register to manage the flow of instructions.

  • What is the purpose of a hardware timer in the context of process scheduling?

    -A hardware timer is used to implement preemptive scheduling by setting a time limit for a process's CPU usage. Once the timer expires, it triggers an interruption, forcing the process to relinquish control of the CPU.

  • What is the role of interruptions in the interaction between user programs and the operating system?

    -Interruptions serve as signals to the CPU. They let user programs request services from the operating system, such as file operations or memory allocation, and they are how the operating system regains control of the CPU when necessary.

  • Why did personal computers initially struggle with multitasking?

    -Personal computers initially struggled with multitasking because they were designed for single-tasking and lacked the hardware and software mechanisms to efficiently manage multiple processes simultaneously.

Outlines

00:00

πŸ’» The Evolution of Computer Multitasking

This paragraph introduces the concept of multitasking in computers, challenging the common belief that multicore systems are the only way to run multiple programs simultaneously. It highlights that single-core computers have been capable of multitasking since the era of machines like the mid-80s Commodore Amiga and Apple Macintosh. The paragraph sets the stage for an exploration of concurrency, a technique that allows multiple processes to run at the same time on a single CPU. It also touches on the history of mainframe computers, their evolution, and the challenges faced in the early days of computing, such as the complexity of programming and the high costs associated with computer operation and maintenance.

05:00

πŸ”„ The Mechanism of Task Switching

This section delves into how task switching is achieved in operating systems. It explains that while computers execute instructions sequentially, they can be broken down into smaller segments and interleaved to create the illusion of simultaneous execution. The paragraph introduces the concept of time-sharing operating systems, like Multics, which was one of the first to implement this technique. It also discusses the transition of computers from mainframes to personal computers and how the latter initially could not multitask but eventually adopted concurrency techniques to allow a single user to run multiple programs at once. The explanation includes a brief overview of CPU registers and the process of fetching, decoding, and executing instructions, which is fundamental to understanding how task switching is managed by the operating system.

10:01

πŸ•’ CPU Scheduling and Process Management

This paragraph focuses on the inner workings of CPU scheduling and process management within an operating system. It describes the process queue managed by the scheduler and the dispatcher, which selects processes for execution when the CPU is available. The paragraph explains how the operating system regains control of the CPU through interruptions, which are signals that pause the current task and allow the operating system to handle essential tasks, such as I/O operations. It also introduces the concept of cooperative scheduling, where processes voluntarily relinquish control, and preemptive scheduling, which uses a hardware timer to ensure the operating system can regain control even if a process does not release the CPU voluntarily. The paragraph emphasizes the importance of hardware support for implementing preemptive scheduling and touches on the historical use of these scheduling methods in operating systems like Windows and Multics.
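
To make the queue-and-dispatcher idea concrete, here is a minimal C sketch of a round-robin dispatcher loop. It is only an illustration of the mechanism described above: the Process struct, its fields, and the fixed time slice are invented for this example, and a real operating system tracks far more state per process.

    #include <stdio.h>

    /* Illustrative process record; a real OS stores much more
       (saved registers, memory map, open files, ...). */
    typedef struct {
        int pid;
        int saved_pc;   /* where this process will resume */
    } Process;

    #define NPROC 3

    int main(void) {
        Process queue[NPROC] = {{1, 0}, {2, 0}, {3, 0}};
        int head = 0;

        /* Dispatcher loop: pick the process at the head of the queue,
           "run" it for one time slice, save its state, move on. */
        for (int slice = 0; slice < 9; slice++) {
            Process *p = &queue[head];
            printf("slice %d: running pid %d from pc=%d\n",
                   slice, p->pid, p->saved_pc);
            p->saved_pc += 10;            /* pretend 10 instructions ran */
            head = (head + 1) % NPROC;    /* round robin: next in line */
        }
        return 0;
    }

Each pass through the loop corresponds to the dispatcher restoring one process's saved state into the CPU and letting it run until the next interruption.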

15:03

πŸš€ Multi-core Systems and True Parallelism

The final paragraph discusses the advent of multi-core systems and their impact on multitasking and concurrency. It explains that while rapid process switching can create the illusion of simultaneous execution, the increasing number of processes can lead to lag. To address this, both software and hardware solutions have been developed. On the software side, more complex schedulers can manage processes more efficiently, while hardware solutions, such as increasing CPU speed or adding multiple processing units to a chip, enable true parallelism and simultaneous execution. The paragraph concludes with a distinction between concurrency and parallelism, emphasizing that concurrency deals with managing many tasks at once, whereas parallelism is about executing many tasks at once. It also encourages viewers to explore more complex concepts like scheduling, threads, and race conditions in future episodes.

Keywords

πŸ’‘Concurrency

Concurrency refers to the ability of a system to handle multiple tasks or processes simultaneously. In the context of the video, concurrency is a fundamental concept that allows computers to run multiple processes at the same time, even with a single CPU. It's crucial for efficient multitasking and is exemplified by early operating systems like Multics, which managed multiple user processes.

πŸ’‘CPU (Central Processing Unit)

The CPU is the primary component of a computer that performs most of the processing inside the computer. The video explains how CPUs handle instructions through a cycle of fetch, decode, and execute, which is fundamental to understanding how concurrency is managed. The CPU's role in executing instructions is central to the discussion of how multitasking is achieved.
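
As a rough illustration of the fetch-decode-execute cycle, the toy interpreter below uses a plain integer as the "address register" and shows how a jump simply overwrites it. The four-opcode instruction set is made up for this sketch and does not correspond to any real CPU.

    #include <stdio.h>

    enum { OP_PRINT, OP_ADD, OP_JUMP, OP_HALT };   /* toy instruction set */

    int main(void) {
        /* A tiny "program" in memory: each word is {opcode, operand}. */
        int memory[][2] = { {OP_PRINT, 0}, {OP_ADD, 5}, {OP_JUMP, 0}, {OP_HALT, 0} };
        int pc  = 0;   /* address register: next instruction to fetch */
        int acc = 0;   /* a single data register */
        int jumped = 0;

        for (;;) {
            int opcode  = memory[pc][0];   /* fetch */
            int operand = memory[pc][1];
            pc++;                          /* normally advance to the next instruction */
            switch (opcode) {              /* decode and execute */
            case OP_PRINT: printf("acc = %d\n", acc); break;
            case OP_ADD:   acc += operand;            break;
            case OP_JUMP:  if (!jumped) { jumped = 1; pc = operand; } break;
            case OP_HALT:  return 0;
            }
        }
    }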

πŸ’‘Multicore Systems

Multicore systems are computer processors that have two or more independent processing units called cores. The video discusses how multicore systems enhance performance by allowing true parallelism and simultaneous execution of multiple processes, which is an advancement over single-core systems that rely on rapid context switching to create the illusion of multitasking.
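
A minimal POSIX-threads sketch shows what true parallelism looks like from user code: two threads do independent work, and on a multi-core machine the operating system is free to run them on different cores at the same time (whether it actually does is up to the scheduler). The count() worker is just a stand-in for any CPU-bound work; compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread does its own chunk of work, independent of the other. */
    static void *count(void *arg) {
        long id = (long)arg, sum = 0;
        for (long i = 0; i < 100000000L; i++)
            sum += i;
        printf("thread %ld done (sum=%ld)\n", id, sum);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, count, (void *)1L);
        pthread_create(&t2, NULL, count, (void *)2L);
        pthread_join(t1, NULL);   /* wait for both workers to finish */
        pthread_join(t2, NULL);
        return 0;
    }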

πŸ’‘Operating System

An operating system is the software that manages computer hardware and software resources and provides common services for computer programs. The video highlights the role of operating systems in managing access to the CPU and other hardware, enabling multitasking and concurrency by scheduling processes and handling interruptions.

πŸ’‘Multitasking

Multitasking is the ability of a system to switch between multiple tasks or processes, giving the appearance that they are running simultaneously. The video explains how early computers managed multitasking through techniques like time-sharing, where the CPU rapidly switched between tasks to create the illusion of simultaneous execution.

πŸ’‘Compiler

A compiler is a program that translates code written in a high-level programming language into machine code that a computer's processor can execute. The video script mentions compilers in the context of early computing, where programmers had to load compilers onto the computer to convert source code into executable machine code.

πŸ’‘Assemble

To assemble in the context of computing refers to the process of converting assembly language into machine code. The video script describes how, after the compiler translated source code into assembly language, an assembler was needed to further convert this into machine code that the CPU could execute.

πŸ’‘Interruptions

Interruptions in computing are signals sent to the CPU to temporarily pause its current task and switch to a different task. The video explains how interruptions are used by programs to request services from the operating system, such as IO operations, and how they allow the operating system to regain control of the CPU for scheduling and other tasks.
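
Ordinary programs trigger this all the time without spelling it out. In the C snippet below, the write() call is a thin wrapper around a system call: the library issues a trap, the CPU jumps to the kernel's handler, the kernel performs the I/O on the program's behalf, and control then returns to the line after the call.

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello by way of the kernel\n";
        /* write() requests an I/O service from the operating system;
           the actual hardware access happens in kernel code. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }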

πŸ’‘Process

A process is an instance of a program in execution. The video discusses how the operating system manages processes, including how it schedules them for execution on the CPU and how processes interact with the operating system through interruptions and other mechanisms.

πŸ’‘Scheduling

Scheduling in the context of operating systems refers to the method by which the operating system allocates CPU time to processes. The video touches on different scheduling methods, including cooperative and preemptive scheduling, and how they affect the ability of the operating system to manage multiple processes efficiently.
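
The cooperative variant can be sketched in a few lines of C: each task runs a small step and then returns, voluntarily handing control back to the loop that plays the role of the scheduler. The task functions and step counts are invented for the illustration; the important point is that nothing can stop a task that refuses to return, which is exactly the weakness preemptive scheduling fixes.

    #include <stdbool.h>
    #include <stdio.h>

    typedef bool (*task_fn)(void);   /* a task returns false when finished */

    static int a_count = 0, b_count = 0;

    static bool task_a(void) { printf("A step %d\n", ++a_count); return a_count < 3; }
    static bool task_b(void) { printf("B step %d\n", ++b_count); return b_count < 3; }

    int main(void) {
        task_fn tasks[] = { task_a, task_b };
        bool alive[]    = { true, true };
        bool any = true;

        /* Cooperative "scheduler": it only gets the CPU back because
           each task politely returns after doing a little work. */
        while (any) {
            any = false;
            for (int i = 0; i < 2; i++) {
                if (alive[i]) {
                    alive[i] = tasks[i]();
                    any = any || alive[i];
                }
            }
        }
        return 0;
    }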

πŸ’‘Preemptive Scheduling

Preemptive scheduling is a method where the operating system can forcibly take control of the CPU from a running process to allocate it to another process. The video explains how preemptive scheduling, supported by hardware timers, prevents any single process from monopolizing the CPU and enhances the security and efficiency of multitasking.
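
There is a user-space analogy of the hardware timer that is easy to try: a POSIX interval timer delivers SIGALRM after a fixed slice, interrupting a loop that never yields on its own. The real mechanism lives in the CPU and the kernel rather than in a signal handler, and the 100 ms slice below is an arbitrary choice for the demo.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>

    static volatile sig_atomic_t expired = 0;

    static void on_alarm(int sig) { (void)sig; expired = 1; }

    int main(void) {
        /* One-shot timer: fire SIGALRM after 100 ms of wall-clock time. */
        struct itimerval slice = { {0, 0}, {0, 100000} };
        signal(SIGALRM, on_alarm);
        setitimer(ITIMER_REAL, &slice, NULL);

        unsigned long work = 0;
        while (!expired)      /* a loop that never gives up the CPU voluntarily */
            work++;

        printf("preempted after %lu iterations\n", work);
        return 0;
    }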

Highlights

Multicore systems are not the only way computers can run multiple programs simultaneously; concurrency allows it even with a single CPU.

Multitasking has been possible since the days of single-CPU computers, as seen in the Commodore Amiga and Apple Macintosh from the mid-80s.

Early mainframe computers were massive, expensive, and limited in access, leading to the development of time-sharing systems.

The evolution of computers from mainframes to personal computers saw a shift from multi-user to single-user systems.

Operating systems like Multics and Unix enabled multitasking by interleaving program segments for execution.

Modern personal computers have adopted multitasking capabilities, allowing a single user to run multiple programs at once.

CPUs handle instructions through a cycle of fetch, decode, and execute, which is fundamental to understanding multitasking.

The CPU's address register is crucial for the operating system to switch between different processes.

Interruptions are a hardware-level signal that allows the operating system to regain control of the CPU for essential tasks.

Cooperative scheduling, where processes voluntarily relinquish control, can lead to security risks if a process enters an infinite loop.

Preemptive scheduling, which relies on hardware timers, ensures that the operating system can regain control even if a process is uncooperative.

Multi-core systems allow for true parallelism by distributing processes across multiple processing units.

Concurrency is about managing many tasks at once, while parallelism is about executing many tasks simultaneously.

The video promises future episodes exploring more complex concepts like scheduling, threads, and race conditions.

The sponsor, Brilliant, offers interactive learning experiences to enhance problem-solving skills in computer science.

The video concludes by emphasizing the importance of understanding concurrency and parallelism for grasping more advanced computer science topics.

Transcripts

00:00
Have you ever wondered how your computer is able to run thousands of programs at the same time? You are probably thinking about multicore systems, which is not unexpected given the recent push by CPU manufacturers suggesting that more cores means better performance. What some people ignore is that multitasking has been around since the days when computers had a single CPU. Today we are going to learn the fundamentals of concurrency, a technique that lets computers run multiple processes simultaneously even if they're equipped with just one CPU. Hi friends, my name is George and this is Core Dumped. This video is sponsored by Brilliant, more about them later.

00:40
Back in the mid 80s, computers like the Commodore Amiga and Apple Macintosh already could run multiple programs at once despite having only one processor. So how did they do it? To understand, we need to go back further in computer history. As is well known, early computers were very primitive. These monsters, also known as mainframes, were massive machines that filled entire rooms. They were so expensive that only governments, big companies and some universities could afford them, and considering that these things were top technology in the entire world, not even these wealthy entities could afford many of them, not to mention the operation and maintenance costs. I mean, just imagine having one of these things for each accountant in your company or each researcher in a university. Access to computers was very limited back then, often requiring appointments scheduled weeks in advance.

01:33
Mostly due to the invention of transistors, mainframes eventually became smaller, though still pricey, but efforts were not only focused on shrinking the size but also on boosting processing speed. Although computers were becoming faster, operating them was still complicated. If you have no idea what I'm talking about, let me give you an example.

01:54
To run a program written in some programming language, Fortran for instance, the computer couldn't just take the source code and execute it as we do with languages like Python in modern days. First, the compiler needed to be loaded onto the computer. At that time it was common to store programs on magnetic tapes, so it was the programmer's responsibility to mount this tape and wait for the compiler to be loaded into main memory. Only when the compiler was ready to run would the programmer input the source code to be translated to assembly language, which still couldn't be executed by the computer because it needed to be compiled again down to machine code by another program called an assembler. So the assembly output had to be stored temporarily, whether on tapes, punched cards or other storage methods of the time. The procedure required mounting another tape with the assembler, requiring the programmer to wait once again. Once loaded, the assembler would take the compiler output and translate it into machine code that the computer can run. Only then could the program be executed.

03:01
As you can see, a significant amount of setup time could be involved in the running of a single program, and if something went wrong it might have meant starting all over again. This setup time was a real problem: while tapes were being mounted or the programmer was typing commands, the CPU sat idle. But even though it was doing nothing, other programmers still couldn't use it, because computers were designed to be used by one user at a time. And remember, in the early days few computers were available, as they cost millions. Thus computer time was extremely valuable, and owners wanted their computers to be used as much as possible to get as much as they could from their investments. This is when early operating systems were born. The rest of this story deserves its own video; what's pertinent for us right now is that researchers eventually conceived the notion of enabling multiple users to connect to a single computer simultaneously, but with the operating system arbitrating access to the hardware.

03:56
Under this approach, each user accessed the computer through some type of input and output device, such as a teletypewriter or a dumb terminal. By the late '70s, dumb terminals were the preferred I/O device. Don't let this thing confuse you: it might seem like a vintage computer, but it is just a screen with a keyboard used to send commands to the computer and display the output. When these things were disconnected from the computer they were completely useless, hence the interesting name. The story of terminals is kind of fascinating; as an interesting fact, this is why that little program you use to do things on your computer by writing commands is called a terminal.

04:34
This arrangement enabled the computer to execute one user's programs while others were either loading theirs or awaiting user input. But what if multiple users were ready to run their programs simultaneously? Well, in that situation things get more complicated: as the computer has only one processing unit at its disposal, users would have to share it somehow.

05:00
Remember that a program is just a sequence of instructions for the computer's CPU to execute one by one. If multiple programs are ready for execution, the straightforward approach would be to execute them sequentially. However, even though the CPU would always be utilized here, users would have to wait a long time to start seeing results from their programs. To address this problem, programs can be broken down into smaller segments and interleaved so that the CPU can execute them in alternating order. Now you might think that this makes no sense, since everything is still being executed sequentially, but keep in mind that computers, even early ones, were thousands of times faster than humans at performing calculations. So these tiny fractions of a program would ideally be executed very fast, so fast that we as persons would be under the illusion that all programs are being executed at the same time.

06:01
This kind of operating system received a special name. Multics was one of the first time-sharing operating systems of all time. Later, in the late '70s and early '80s, computers started arriving on the home market. As they were intended to be used by one user at a time, they got the name personal computers, which somehow remains until today. However, when personal computers first came into play they were designed to run one program at a time and were incapable of multitasking. But Multics, which inspired the creation of Unix, had been around for quite a while at that point. Since concurrency was already a known technique, it didn't take too much time until PC manufacturers started shipping personal computers capable of multitasking. Instead of using concurrency to allow multiple users to use the same computer at the same time, it was used to allow a single user to run multiple programs at the same time.

06:56
If you know a bit about how computers run instructions, you probably understand that we can't just divide programs and mix them together. In this animation I'm not saying programs are actually divided and loaded into memory like that; instead it demonstrates the sequence of processes being carried out. How this is achieved is exactly what we'll discuss next, but before getting into it we need to understand how CPUs handle instructions, which is exactly what I'm going to show you after a quick message from our sponsor, Brilliant.

07:27
If you're following this channel, it's probably because here we don't just talk: we explain concepts with intuitive animations, which make the learning process enjoyable. Traditional studying sometimes can feel like endless reading with no real engagement, but with Brilliant, learning isn't just about reading text, it's about diving in and interacting with the courses. This can help you a lot if you want to become a better problem solver, which is what distinguishes good from average developers. You can find all kinds of computer science related courses, like the latest course on how large language models work, where you can get hands-on with real language models, learn the fundamentals about this technology that is currently changing the world, learn why the training data is important, and even learn how to tune an LLM by yourself to generate different kinds of output. You can get 30 days of Brilliant premium for free and a lifetime 20% discount when subscribing by accessing brilliant.org/CoreDumped or using the link in the description below.

08:25
And now back to the video. Inside the CPU there are special registers, like the instruction register and the address register. The address register holds the memory location of the next instruction the CPU will execute. When the CPU is ready for the next instruction, it fetches this value and copies it to the instruction register. The CPU then decodes this to know what to do next: an addition, a subtraction, a copy operation, whatever. After the instruction is executed, the address register value increases, pointing to the next instruction. This is basically how CPUs go through instructions step by step, repeating this cycle: fetch, decode and execute.

09:13
There are also jump instructions. They change the address register value, making the CPU jump to a specific instruction instead of the next one in line. This is key for dealing with conditions and loops in programs. Because of this, programs don't have to be literally split and mixed; instead they're loaded normally, and the operating system makes the CPU switch between them by changing the address register value. Our concern right now is: when is the operating system itself executed? Because remember, the operating system is also software; it needs the CPU to do its job.

09:53
When a program starts running, it's labeled as a process. The operating system employs a specific data type to store the details of each running process. These processes are lined up in a queue, which is overseen by a special part of the operating system known as the scheduler. When the CPU becomes available, another component called the dispatcher steps in: it selects the topmost element from the queue, reads the process information and configures the address register so that the CPU can access that process in memory. Keep in mind that I'm omitting a lot of information here; this is actually way more complicated, and I'm only telling you what's necessary for this video. But don't worry, because a dedicated video about CPU scheduling is already planned. Okay, but we still haven't answered the question: how does the operating system manage to run itself to do all of this work?

10:43
When we write programs, we don't explicitly code instructions to give the CPU back to the operating system, right? Well, that's what we might think. In practice, programs depend on the operating system to perform essential tasks. When we use functions to open a file, read and write to it, or things like requesting memory, we are interacting with the operating system. These interactions occur through interruptions. At the hardware level, interruptions act as signals to the CPU. When an interruption occurs, the CPU pauses its current task, saves its state by taking a snapshot and storing it in memory, and immediately jumps to a predefined location in memory where the interrupt service routine associated with that specific interruption resides. This routine is somewhere in the memory region allocated to the operating system itself. Programs use interruptions extensively, especially for I/O operations, as only the operating system kernel can handle interactions with hardware.

11:41
This is how the operating system regains control of the CPU. Now that the address register is pointing to the operating system code, the operating system can use the CPU not only to handle the interruption but to attend to other tasks, including scheduling processes. Since I/O operations often take time, the process that initiated the interruption is temporarily placed back in the queue while waiting for the hardware response; but before that happens, the process's captured state is stored within the process information. Then the dispatcher selects another process from the queue and sets the CPU to execute it. Now this process can utilize the CPU until at some point it needs some sort of I/O operation, requiring it to give control back to the operating system. This cycle repeats continuously until it's the turn of our process again. When the process is taken from the queue, its state is read and restored in the CPU, jumping to the correct location in the user program to resume exactly where it left off. And this is the process that allows the operating system to alternate CPU usage among multiple processes.

12:55
But there is a huge problem with this approach: once the CPU is allocated to a process, it retains control until the process voluntarily releases it, either by terminating or by entering a waiting state by invoking the operating system. Consider an infinite loop scenario: if there are no interruptions inside the loop, the CPU will endlessly execute those instructions. If a process intentionally or unintentionally avoids making interruptions, the operating system will never regain control. In modern times this poses a serious security risk, as malicious programs can exploit this vulnerability to monopolize CPU resources, preventing other programs from accessing them. This scheduling method, reliant on process cooperation, is known as cooperative scheduling or non-preemptive scheduling. Unfortunately, there's no software fix for this issue, so hardware intervention is necessary.

13:53
To prevent any user program from completely taking over the CPU, a hardware timer is deployed. Its function is straightforward: a time limit is set and it begins counting down; once the time expires, the timer triggers an interruption. The timer is typically implemented within the CPU itself, so before allocating the CPU to any process, the operating system dispatcher uses a privileged instruction to set and start the timer. While processes can still relinquish the CPU by triggering interruptions, if a process takes too long to do so, the timer ensures that the operating system will eventually regain control. This mechanism, known as preemptive scheduling, offers increased security. However, an operating system can only implement it if the hardware supports it.

14:48
If you look at the history of home computing, you'll find out that Windows used cooperative scheduling up until version 3; from Windows 95 onward, Windows has used preemptive scheduling. Interestingly enough, Multics was designed to support preemptive scheduling right from the start, which is particularly fascinating given that we're discussing systems developed back in the 1960s.

15:12
And finally, what's the deal with multi-core systems? Well, it's simple: rapidly switching between processes can create the illusion of simultaneous execution, but as the number of running processes increases, the time it takes for each process to regain CPU access also increases, leading to noticeable lag. To address this, there are both software and hardware solutions. On the software side, a more complex scheduler can help manage processes more efficiently; we will discuss this in a future episode. As for hardware solutions, one option is to increase the CPU speed, allowing processes to regain CPU time faster. However, this approach has its limits due to physical constraints. The breakthrough came with the addition of multiple processing units to the same chip, creating multi-core systems. With multiple cores, the scheduler can allocate different processors to different programs, enabling true parallelism and simultaneous execution. Despite this, there can still be more processes than cores, requiring concurrency to efficiently distribute processing resources among numerous processes in multi-core systems.

16:21
And just for reaching this part of the video, I'll leave you with this beautiful phrase: concurrency is about dealing with lots of things at once, but parallelism is about doing lots of things at once. And this is all you need to understand this topic. Now you are ready for more complex concepts like scheduling, threads and race conditions, which we will explore in future episodes. This video has been quite lengthy, but I hope you found it enjoyable. If you did, please hit the like button, that would help me a lot. And if you want to learn more, don't let the AI voice scare you; I really try to produce quality content, so consider subscribing and following me on other platforms.


Related Tags
Multitasking, Concurrency, Computer History, CPU, Operating Systems, Multicore Systems, Software Development, Hardware Limitations, Scheduling Algorithms, Preemptive Scheduling