Concurrency in Go

Jake Wright
4 Mar 2018 · 18:39

Summary

TL;DR: This video delves into concurrency in Go, explaining how it differs from parallelism and why it's a key feature of the language. The speaker demonstrates goroutines, channels, wait groups, and worker pools to illustrate how Go handles concurrent tasks. Key concepts include blocking operations, synchronization, and efficient CPU usage across multiple cores. The video also introduces advanced constructs like buffered channels and select statements, showing how to manage concurrent tasks effectively. It's a practical guide for Go programmers writing efficient, concurrent code.

Takeaways

  • 🖥️ Go supports concurrency, which allows programs to run tasks independently, but it’s not the same as parallelism.
  • 🔄 Concurrency involves breaking a program into tasks that can potentially run simultaneously, but parallelism specifically refers to running multiple tasks at the exact same time on different CPU cores.
  • 🐑 By adding 'go' before a function in Go, you create a goroutine, which runs concurrently with the main program without waiting for the function to finish.
  • ⏳ When the main goroutine finishes, the entire program terminates, even if other goroutines are still running, so you need to handle this to avoid premature termination.
  • ⌛ A WaitGroup can be used to wait for multiple goroutines to finish before the main program continues, making it a practical way to synchronize concurrent processes.
  • 🛠️ Channels in Go provide a way for goroutines to communicate with each other, acting as a pipeline for sending and receiving messages between them.
  • 🚫 Sending and receiving on channels are blocking operations; the sender waits until a receiver is ready and vice versa, which can cause deadlocks if not handled properly.
  • 🔀 The Select statement in Go lets you work with multiple channels at once, ensuring the program doesn’t block while waiting for slower processes.
  • 👷 Worker pools, a common concurrency pattern in Go, allow multiple workers to handle tasks from a job queue concurrently, improving program efficiency.
  • ⚙️ Buffered channels can hold a set number of messages without blocking, enabling asynchronous sends, but a send to a full buffer blocks and can deadlock if nothing ever receives.

Q & A

  • What is the difference between concurrency and parallelism in Go?

    - Concurrency is the ability to break a program into independently executing tasks that can run simultaneously, but it doesn't guarantee that they will. Parallelism, on the other hand, means running multiple tasks at exactly the same time on multiple CPU cores. Go focuses on concurrency by letting you write programs that could run in parallel if the system supports it.

  • What is a goroutine, and how do you create one?

    - A goroutine is a lightweight thread of execution managed by the Go runtime. It allows functions to run concurrently. To create a goroutine, you simply place the `go` keyword before a function call, which runs the function concurrently in the background.
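For illustration, a minimal sketch of the counting example the video describes (reconstructed, not the video's verbatim code; the loop is bounded here so the sketch terminates, where the video counts forever):

```go
package main

import (
	"fmt"
	"time"
)

// count prints a label with an increasing number.
// (The video's version loops forever; bounded here so the sketch terminates.)
func count(thing string) {
	for i := 1; i <= 5; i++ {
		fmt.Println(i, thing)
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	go count("sheep") // runs concurrently in its own goroutine
	count("fish")     // runs in the main goroutine; keeps the program alive
}
```

Without the `go` keyword, the two calls would run one after the other instead of interleaving.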

  • What happens if the main goroutine finishes before other goroutines?

    - If the main goroutine finishes, the program terminates, regardless of whether other goroutines are still running. To prevent this, you can use a synchronization mechanism like a `WaitGroup` to wait for all goroutines to finish before exiting the program.

  • How does a `WaitGroup` work in Go?

    - A `WaitGroup` in Go is used to wait for a collection of goroutines to finish executing. You increment the counter with `wg.Add()` before starting a goroutine and decrement it with `wg.Done()` when the goroutine finishes. The `wg.Wait()` method blocks the main goroutine until the counter reaches zero.

  • What is a channel in Go, and how is it used?

    - A channel in Go is a communication mechanism that allows goroutines to send and receive messages. Channels have a type, and you can send or receive values of that type using the arrow syntax (`<-`). Channels are blocking by default, meaning the sender waits until the receiver is ready and vice versa.
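A sketch of a send and receive over an unbuffered channel, loosely following the video's `count` example (assumed shape, not verbatim):

```go
package main

import "fmt"

// count sends thing over the channel five times.
func count(thing string, c chan string) {
	for i := 1; i <= 5; i++ {
		c <- thing // send: blocks until a receiver is ready
	}
}

func main() {
	c := make(chan string)
	go count("sheep", c)
	msg := <-c // receive: blocks until a value arrives
	fmt.Println(msg)
}
```

Here `main` receives only once and then exits, so the remaining sends never complete, exactly the behavior the video describes for its one-receive version.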

  • What does closing a channel do in Go?

    - Closing a channel signals that no more values will be sent on it. This is useful for preventing deadlocks when receivers are waiting for values. Receivers can check whether a channel is closed by receiving a second value (a boolean) from the channel.
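A sketch (reconstructed, not verbatim) where the sender closes and the receiver uses the two-value form:

```go
package main

import "fmt"

// count sends thing three times, then closes the channel
// to signal that no more values are coming.
func count(thing string, c chan string) {
	for i := 1; i <= 3; i++ {
		c <- thing
	}
	close(c) // only the sender should close
}

func main() {
	c := make(chan string)
	go count("sheep", c)

	for {
		msg, open := <-c // second value reports whether the channel is still open
		if !open {
			break
		}
		fmt.Println(msg)
	}
}
```

The receiving loop can be written more concisely as `for msg := range c { ... }`, which ends automatically when the channel is closed.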

  • What causes a deadlock in Go, and how can it be avoided?

    - A deadlock occurs when a goroutine is waiting for something that will never happen, like trying to receive from an empty channel with no sender. Deadlocks can be avoided by closing channels when they are no longer needed, or by ensuring that every send operation has a corresponding receive.

  • What is the role of buffered channels in Go?

    - Buffered channels allow sending a specified number of values without blocking, even if no receiver is ready. The send operation only blocks when the buffer is full. This is useful when a producer needs to send multiple values before a consumer is ready.
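A sketch of the video's hello example with a buffer of two (reconstructed from the description):

```go
package main

import "fmt"

func main() {
	c := make(chan string, 2) // buffered channel with capacity 2

	c <- "hello" // does not block: buffer has room
	c <- "world" // does not block: buffer is now full
	// c <- "again" // a third send would block here and deadlock,
	//              // since no other goroutine ever receives

	fmt.Println(<-c)
	fmt.Println(<-c)
}
```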

  • What is a select statement in Go, and how is it used?

    - A `select` statement in Go lets you wait on multiple channel operations. It blocks until one of its cases can proceed. This allows you to receive from whichever channel is ready first, which is useful when handling multiple concurrent goroutines communicating via channels.
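A sketch of the video's two-channel example (reconstructed; the receive loop is bounded here so the sketch terminates, where the video loops forever):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c1 := make(chan string)
	c2 := make(chan string)

	go func() {
		for {
			c1 <- "Every 500ms"
			time.Sleep(500 * time.Millisecond)
		}
	}()
	go func() {
		for {
			c2 <- "Every two seconds"
			time.Sleep(2 * time.Second)
		}
	}()

	for i := 0; i < 5; i++ {
		select { // receive from whichever channel is ready first
		case msg1 := <-c1:
			fmt.Println(msg1)
		case msg2 := <-c2:
			fmt.Println(msg2)
		}
	}
}
```

Because `select` takes whichever case is ready, the fast channel is no longer held up waiting on the slow one.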

  • What is a worker pool in Go, and how does it function?

    - A worker pool in Go is a design pattern where multiple goroutines (workers) perform tasks from a job queue concurrently. Each worker pulls a job from the queue, processes it, and sends the result to a results channel. This improves efficiency by distributing work across multiple CPU cores.
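A sketch of that pattern (reconstructed; four workers and 20 jobs are assumptions of this sketch, since naive recursive Fibonacci gets slow well before the video's 0–99 range):

```go
package main

import "fmt"

// fib naively computes the nth Fibonacci number, as in the video.
func fib(n int) int {
	if n <= 1 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

// worker consumes jobs and sends each result back.
// The directional types (<-chan, chan<-) make sending on jobs
// a compile-time error.
func worker(jobs <-chan int, results chan<- int) {
	for n := range jobs {
		results <- fib(n)
	}
}

func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Several workers pull from the same queue, so the work can
	// spread across CPU cores. Results arrive in no particular order.
	for w := 0; w < 4; w++ {
		go worker(jobs, results)
	}

	for i := 0; i < 20; i++ {
		jobs <- i
	}
	close(jobs) // lets each worker's range loop finish

	for i := 0; i < 20; i++ {
		fmt.Println(<-results)
	}
}
```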

Outlines

00:00

🔍 Introduction to Concurrency in Go

The first paragraph introduces concurrency in the Go programming language, highlighting its importance and how it's often a major reason developers choose Go. It explains the distinction between concurrency and parallelism, noting that while parallelism involves tasks running simultaneously, concurrency focuses on breaking tasks into independent units that can potentially run at the same time. The paragraph also introduces goroutines, which allow concurrent execution in Go programs, and demonstrates a simple goroutine example with the `count` function.

05:01

⚙️ Handling Goroutine Termination

This paragraph discusses the behavior of goroutines, particularly how the main goroutine's termination causes the entire program to exit, even if other goroutines are still running. It explains the need to manage goroutine termination properly using techniques like the `WaitGroup`. The example provided demonstrates how to use a wait group to block the main function until all goroutines finish executing. The paragraph also introduces the use of anonymous functions to wrap goroutine calls for cleaner code.

10:04

📡 Communication Between Goroutines Using Channels

The third paragraph delves into channels in Go, a mechanism for communication between goroutines. It explains how channels allow goroutines to send and receive data, and how sending and receiving are blocking operations, which ensures synchronization. The paragraph covers how to implement channels, send and receive messages, and avoid common issues like deadlocks by properly managing when a channel is closed. The `range` feature is introduced as a way to iterate over a channel until it's closed, providing cleaner syntax.

15:06

⚖️ Buffered Channels and Select Statement

This paragraph explains buffered channels, which allow sending messages without immediately blocking until the buffer is full. It demonstrates how to create and use buffered channels to prevent deadlock when there's no immediate receiver. Additionally, it introduces the `select` statement, which enables goroutines to handle multiple channels simultaneously, picking whichever channel is ready first. This section also discusses how select helps optimize concurrent processes by preventing one slow channel from blocking faster ones.

🔄 Worker Pools and Efficient Parallelism

The final paragraph introduces the concept of worker pools, where multiple goroutines (workers) concurrently pull tasks off a job queue. An example is provided using a Fibonacci calculation, showing how adding multiple workers can help utilize multi-core processors efficiently. It also discusses how Go can take advantage of multiple CPU cores, leading to significant performance improvements in concurrent tasks, though it does not guarantee ordered results due to the concurrent nature of the process.

Keywords

💡Concurrency

Concurrency in programming refers to the structuring of a program to execute multiple tasks simultaneously or in overlapping time periods, rather than one after another. In the video, it is emphasized as a key feature of Go, allowing multiple operations to run concurrently while still ensuring correct outcomes. The example given is breaking up a program into independently executing tasks that run concurrently, such as counting 'sheep' and 'fish'.

💡Parallelism

Parallelism is the process of running multiple tasks exactly at the same time, typically on a multi-core processor. The video distinguishes it from concurrency, noting that while concurrency deals with structuring tasks to run independently, parallelism involves physically running these tasks on separate cores. Go abstracts this complexity, letting the runtime manage parallel execution, while developers focus on writing concurrent code.

💡Goroutines

Goroutines are lightweight, concurrent functions in Go that are created using the 'go' keyword. The video explains how adding 'go' before a function call allows it to run concurrently with the main program without blocking it. Goroutines are highly efficient, and Go can handle thousands of them simultaneously. The example in the video shows how goroutines can count 'sheep' and 'fish' concurrently, improving program efficiency.

💡WaitGroup

WaitGroup is a synchronization primitive in Go used to ensure that all goroutines finish before the main program exits. The video introduces this concept as a way to prevent premature program termination, by blocking the main function until all goroutines have completed. The script uses a WaitGroup to control the flow and prevent the program from ending before concurrent tasks have completed.

💡Channels

Channels are Go's method for communication between goroutines. They are like pipes through which data can be sent and received. The video explains how channels are typed, and messages sent between goroutines must match the channel's type. Channels are a powerful tool for synchronizing tasks, as demonstrated in the example where the counting function sends data over a string channel.

💡Blocking

Blocking occurs when a program's execution halts until a particular condition is met. In Go, sending and receiving data via channels are blocking operations, meaning the program waits until both the sender and receiver are ready. The video uses this concept to explain how goroutines can be synchronized, as each routine waits for the other to send or receive before proceeding.

💡Buffered Channels

Buffered channels in Go allow for sending multiple messages without an immediate corresponding receiver. Unlike unbuffered channels, where each send must wait for a receive, buffered channels store a fixed number of messages. The video uses a simple example where two messages are sent into a buffered channel with a capacity of two, preventing deadlock and allowing the program to continue running.

💡Select Statement

The Select statement in Go is used to handle multiple channel operations at once, allowing the program to choose whichever channel is ready to communicate. This construct avoids blocking on a specific channel and improves efficiency. In the video, the Select statement is demonstrated with two channels sending messages at different intervals, allowing the program to handle them dynamically.

💡Deadlock

Deadlock occurs when two or more tasks in a concurrent system wait indefinitely for each other to complete, causing the program to hang. The video demonstrates deadlock in Go when a goroutine attempts to send a message on a channel, but no corresponding receive operation exists, causing the program to block. Go's runtime can detect and alert the user about such issues during execution.

💡Worker Pool

A worker pool is a design pattern in Go where multiple worker goroutines pull tasks from a shared queue and process them concurrently. The video explains how a worker pool improves performance by distributing tasks among multiple goroutines, as seen in the example of calculating Fibonacci numbers. The pattern takes advantage of Go's concurrency model, utilizing multiple CPU cores efficiently.

Highlights

Concurrency is a big part of Go and a key reason why many choose to use the language.

Concurrency is about breaking up a program into independently executing tasks that could potentially run at the same time.

Goroutines are lightweight and efficient; you can create tens, hundreds, or even thousands of them.

When the main goroutine finishes, the program exits, regardless of any other goroutines still running.

The `sync.WaitGroup` type is a useful Go feature that lets you wait for goroutines to complete before terminating the program.

Goroutines communicate using channels, which allow messages to be passed between them in a thread-safe way.

Blocking operations: Sending and receiving on channels are blocking operations, meaning the sender will wait until the receiver is ready, and vice versa.

Deadlocks can occur if no other goroutine is available to receive a message or perform a task.

Buffered channels can store a limited number of messages, allowing senders to proceed without immediate receivers.

The 'select' statement allows a program to listen on multiple channels at once and process whichever one is ready first.

Worker pools in Go allow multiple goroutines (workers) to pull tasks from a job queue concurrently, improving efficiency.

Worker pools maximize CPU utilization by distributing work across multiple cores.

Go provides syntactic sugar, like iterating over channels with `range`, which automatically ends the loop when the channel is closed.

The video emphasizes that understanding underlying computer science topics, like memory management and CPU architecture, can greatly improve code efficiency.

Brilliant.org is recommended as a platform to dive deeper into computer science concepts, offering courses from fundamentals to advanced topics like neural networks.

Transcripts

play00:00

in the first video learn go in 12

play00:02

minutes we had a very quick look at the

play00:04

main features of the language so you

play00:05

could get going

play00:06

straight away one thing I didn't talk

play00:08

about though was the support for

play00:10

concurrency that NGO has this is a

play00:12

really big part of go it's a big selling

play00:14

point and a reason why a lot of people

play00:16

choose the language so I thought it

play00:18

deserves its own video

play00:20

it's worth understanding that

play00:21

concurrency is not quite the same as

play00:23

parallelism to run things in parallel

play00:25

means to run two things at exactly the

play00:28

same time this is what happens on a

play00:29

multi-core processor you have one core

play00:31

doing one thing and another core doing a

play00:34

different thing both simultaneously

play00:35

typically though the lines of code that

play00:38

make up a program have to run in the

play00:40

right order which makes it hard to

play00:42

parallelize and execute two lines at the

play00:44

same time so concurrency is about

play00:46

breaking up a program into independently

play00:49

executing tasks that could potentially

play00:51

run at the same time and still getting

play00:53

the right result at the end so a

play00:55

concurrent program is one that can be

play00:57

parallelized we're not going to concern

play00:59

ourselves with what is happening at the

play01:01

CPU level and whether something is

play01:03

running on multiple cores or not the

play01:05

goal runtime and the operating system

play01:07

will take care of it for us we can

play01:10

concentrate on the structure of our

play01:11

program and using the tools that go

play01:13

gives us to make our code concurrent so

play01:16

I've got a text editor I'm gonna write a

play01:19

really simple function called count it's

play01:23

gonna take a string as an argument and

play01:25

in an infinite for loop that starts at

play01:27

one and just loops forever counting up

play01:29

I'm gonna output the the number that

play01:31

we're at and the string that we passed

play01:33

in and then I'm just going to sleep for

play01:36

half a second and I can count sheep and

play01:42

then I'll make a call to count to fish

play01:44

afterwards so this is a synchronous

play01:46

program there's no concurrency here it's

play01:48

going to execute the the count function

play01:50

and it's gonna wait for it to finish

play01:52

before moving on to the next line but

play01:54

the count function never finishes so

play01:57

it's just gonna count sheep forever and

play01:58

never get to the fish so just do that

play02:06

until I kill it if however we call the

play02:09

function with the word go in front of it

play02:10

it won't wait for it to finish before

play02:12

moving on to the neck

play02:13

line it'll say go and run this function

play02:15

in the background and then continue to

play02:18

the next line immediately and this

play02:19

creates what is called a go routine and

play02:21

that go routine will run concurrently so

play02:24

we now actually have to go routines the

play02:26

main function with the main execution

play02:28

path of the program is a go routine and

play02:30

now this new one that we've created

play02:32

explicitly so these will both run side

play02:35

by side and we see now it counts fish

play02:37

and it counts sheep go routines are very

play02:42

efficient it's okay to make tens

play02:43

hundreds even thousands of go routines

play02:46

but bear in mind that you can't make a

play02:48

program infinitely fast by adding more

play02:50

and more concurrent go routines because

play02:51

ultimately you are constrained by how

play02:53

many calls that your CPU has I'm gonna

play02:56

make one tiny change to this program and

play02:58

run both count functions as go routines

play03:00

now you might expect this to do exactly

play03:03

the same as before and you'd be almost

play03:05

right but we get a very different result

play03:08

we don't get anything so what has

play03:10

happened well in go when the main goal

play03:12

routine finishes the program exits

play03:15

regardless of what any other go routines

play03:17

might be doing previously the main go

play03:19

routine never finished because it would

play03:21

have this infinite for loop in it but

play03:24

now we've pushed that loop into its own

play03:26

goal routine so the main function will

play03:28

continue immediately to the next line

play03:30

but there are no more lines of code so

play03:32

it's done and the program terminates and

play03:33

the go routines that we've created

play03:35

ourselves haven't had time to do

play03:37

anything if we were to sleep for two

play03:41

seconds here you'll see it now outputs

play03:44

for two seconds and then it terminates

play03:46

you'll often see people add a call to F

play03:48

Mt

play03:49

scan line at the end of the main

play03:50

function to fix this problem

play03:52

and this will stop the main function

play03:54

from immediately terminating because

play03:56

it'll wait this is a blocking call it'll

play03:59

wait for user input so this gives our go

play04:02

routines time to execute and it's gonna

play04:04

keep doing this until I press ENTER and

play04:06

at that point it'll move on on the main

play04:09

functional finish and the program will

play04:10

exit again in reality though this is not

play04:12

a very useful solution because it does

play04:14

require manual user input what we can do

play04:16

instead is use a wait group I'm gonna

play04:20

import the sync package from the

play04:22

standard library LS it alter our program

play04:24

so we have one call to count

play04:27

and let's just count up to five to use a

play04:32

weight group I'm first going to create

play04:34

one and there's nothing scary it's just

play04:38

a counter and I'm gonna increment it by

play04:40

one to say that I have one goal routine

play04:43

to wait for and it doesn't do any magic

play04:45

here it's up to me to increment it the

play04:47

next step is to decrement the counter

play04:49

when the goal routine finishes so after

play04:52

this for loop I want to decrement the

play04:53

counter now I could pass a pointer to

play04:56

the weight group to the count function

play04:59

but I don't think it's really the

play05:00

responsibility of count to deal with

play05:03

this so instead I'm gonna wrap the cult

play05:05

account in an anonymous function this

play05:08

syntax creates a function and then

play05:09

immediately invokes it so this will

play05:13

still run as a go routine and inside

play05:14

here I'm gonna call count again and then

play05:17

afterwards I'm gonna call WG dot done

play05:21

since we've created this function in

play05:22

line we have access to that WG variable

play05:25

which is convenient and done literally

play05:27

just decrement the counter by one so all

play05:30

we have so far as a counter of how many

play05:32

go routines are running the useful bit

play05:33

now is to call wait at the end of the

play05:37

main function this will block until the

play05:40

counter is zero so of any go routines

play05:42

haven't finished it'll wait so now it's

play05:46

gonna count to five count will return

play05:48

will call done which will decrement the

play05:50

counter and then this weight will be

play05:53

like oh the counters at zero and allow

play05:55

the code to continue and the program

play05:58

terminates

play05:58

really easy to use so that's how to

play06:00

create a go routine really simple but

play06:02

not massively useful so far what we need

play06:04

next are channels a channel is a way for

play06:06

go routines to communicate with each

play06:08

other so far the count function has just

play06:10

been outputting directly to the terminal

play06:13

but what if we wanted to communicate

play06:14

back to the main goal routine well we

play06:16

can accept a channel as an argument and

play06:19

it's like a pipe through which you can

play06:21

send a message or receive a message

play06:23

channels have a type as well so this one

play06:26

will be a string channel and we'll only

play06:28

be able to pass messages that are

play06:30

strings any type works though you can

play06:32

even send channels through channels so

play06:35

instead of outputting thing to the

play06:37

terminal I'm gonna use this arrow

play06:40

in tax to send the value of thing over

play06:42

the channel so an arrow pointing in to

play06:44

the channel name will send a message

play06:49

gotta get rid of the waste group stuff

play06:52

now so a nice simple concurrent call to

play06:57

account and now we need to pass the

play06:59

channel in so first we can make one

play07:01

using the make function pass that to

play07:05

count and then we can use an arrow

play07:07

coming out of the channel name to

play07:10

receive a message from the channel so

play07:15

this is going to receive one message

play07:16

output sheep wants and then terminate

play07:19

and it's important to understand that

play07:21

sending and receiving are blocking

play07:23

operations when you try to receive

play07:25

something you have to wait for that to

play07:27

be a value there to receive similarly

play07:29

when you're sending a message it'll

play07:31

waste until a receiver is ready to

play07:33

receive so you can see it does what we

play07:34

expect at output strip and this blocking

play07:37

nature of channels allows us to use them

play07:40

to synchronize go routines imagine you

play07:43

have two independent go routines each

play07:45

line here is a line of code we don't

play07:47

really care what it is except down here

play07:49

we receive on a channel and over in this

play07:52

score routine we send on the channel

play07:54

when they both execute sometimes one

play07:56

will stop and sauce and the other will

play07:57

stop and start there might be executing

play07:59

different code they won't stay

play08:01

synchronized at all but when this go

play08:03

routine on the left tries to receive on

play08:05

the channel it'll stop and wait until

play08:07

something is sent and at some point the

play08:09

other go routine will reach the line

play08:11

where it tries to send and then they'll

play08:13

be able to communicate through this

play08:15

channel so this precise moment they both

play08:17

at this communication point so a

play08:19

communicating and was synchronizing

play08:21

which is an important concept back to

play08:23

the code this just receives one message

play08:26

if we wanted to receive all of them then

play08:28

we could wrap this in a for loop so this

play08:34

is what we expect but then it gets a

play08:36

fatal error we get deadlock this is

play08:38

because the count function is finished

play08:40

but the main function is still waiting

play08:42

to receive on the channel but nothing

play08:45

else is ever gonna send a message on the

play08:46

channel so we'll be waiting forever the

play08:48

program will never terminate go was able

play08:50

to detect this problem at runtime not a

play08:52

compile time at

play08:53

doesn't solve the halting problem but

play08:55

when it actually happens it can see that

play08:57

go routines aren't making any progress

play08:59

to solve this we can close the channel

play09:03

as a sender if we're finished sending

play09:05

and we don't need the channel anymore we

play09:07

can close it if you are the receiver you

play09:09

shouldn't ever close the channel because

play09:11

you don't know whether the sender is

play09:12

finished or not if you close the channel

play09:14

prematurely and then the sender tries to

play09:17

send on that closed channel it will

play09:19

cause an error it'll panic but it's okay

play09:21

for the count function here to close the

play09:23

channel because it knows that it's done

play09:24

and it's not gonna use it anymore when

play09:26

we receive on the channel we can

play09:27

actually receive a second value which

play09:31

tells us whether the channel is still

play09:32

open if it's not open if it's been

play09:35

closed then we can break out of this for

play09:37

loop so now we don't get the the

play09:42

deadlock anymore and there's actually a

play09:44

slightly nicer way we can do this in go

play09:45

by iterating over the range of a channel

play09:49

so this will keep receiving messages and

play09:52

putting the value in to this message

play09:55

variable here until the channel is

play09:57

closed so then we don't need to manually

play09:59

check that it's closed anymore exactly

play10:04

the same result just a bit of a

play10:05

syntactic sugar so we've seen so far

play10:08

that sending to a channel is a blocking

play10:10

operation to demonstrate the constraints

play10:12

of this I'm gonna do something really is

play10:14

simple I'm gonna make a channel of

play10:17

strings

play10:18

I'm gonna send hello across the channel

play10:20

and then I'm going to try to receive

play10:22

from the channel and output it to the

play10:24

terminal naively we might expect this to

play10:30

work and just output the word hello but

play10:33

we're actually going to get deadlock

play10:34

again this is because the send will

play10:36

block until something is ready to

play10:38

receive but the cold never progresses to

play10:40

the receive line because we're blocked

play10:42

at sent to make this work we'd need to

play10:44

receive in a separate go routing

play10:46

alternatively we can make a buffered

play10:49

channel by giving a capacity when we

play10:52

make the channel you can fill up a

play10:54

buffered channel without a corresponding

play10:55

receiver and it won't block until the

play10:58

channel is full so with a capacity of

play11:00

two this will work in the lab port hello

play11:03

we can even put two things into the

play11:05

channel before having to read anything

play11:06

back

play11:07

out so we put two things in and then we

play11:15

read them and nothing box here if we try

play11:17

to send a third time though the channels

play11:19

gonna be full so that call will actually

play11:23

block and we'll get deadlock again the

play11:25

final construct that go has is the

play11:27

Select statement if I have to go

play11:29

routines I'll just create them in line

play11:34

like this I'm gonna make two channels

play11:36

which will send them receive strings the

play11:40

first go routine is going to send on the

play11:42

first channel and it's going to be ready

play11:46

to send every 500 milliseconds so I'm

play11:49

just gonna alice sleep here for half a

play11:52

second the second go routine is gonna

play11:54

send on the second channel and it's

play11:57

gonna do that every two seconds and of

play12:01

course to make it do this infinitely I'm

play12:02

gonna wrap each one in a for loop back

play12:07

in the main go routine I could similarly

play12:09

have an infinite for loop and I could

play12:13

receive from channel one and I could

play12:21

receive from channel two and then loop

play12:23

and do that over and over again but will

play12:29

always get one and then the other and

play12:30

then one and then the other even though

play12:32

this first go routine is ready to send

play12:35

much sooner and this is because we're

play12:38

gonna block each time waiting for the

play12:40

slow one so every time we try to receive

play12:42

from channel 2 we're gonna have to wait

play12:43

two seconds so it's really slowing down

play12:46

that this first call routing instead we

play12:49

could use a select statement which

play12:51

allows us to receive from whichever

play12:53

channel is ready so in the case that

play12:55

channel 1 has a message we can output

play12:58

that but in the case that channel 2 has

play13:00

a message sorry that should be message 1

play13:04

in the case that channel 2 has a message

play13:06

then we can output message 2 and then

play13:08

we're just going to loop over this so

play13:12

this time we see that we're able to

play13:14

receive a lot more quickly from channel

play13:17

1 because this is only sleeping for half

play13:18

a second and that select statement

play13:21

keep picking channel 1 because it's

play13:22

available finally I want to demonstrate

play13:24

Finally, I want to demonstrate a common pattern called worker pools. This is where you have a queue of work to be done and multiple concurrent workers pulling items off the queue.

I'm going to write a really simple Fibonacci algorithm: it calculates the nth Fibonacci number and returns it. If n is 0 or 1 then just return n; otherwise return the sum of the previous two Fibonacci numbers.

Then I'm going to write a worker, which takes two channels: one channel of jobs to do and one channel to send results on. Instead of specifying bidirectional channels, we can actually say that we'll only ever receive from the jobs channel and only ever send on the results channel. This just reduces the chance of bugs, because now if we tried to send on the jobs channel we'd get a compile-time error. Jobs is going to be a queue of numbers, and we're going to use the range feature to consume items from this queue. So this is going to receive on the jobs channel: we receive n from the channel, calculate the nth Fibonacci number, and send it on the results channel.
In the main function, I create the two channels. I'm going to make them buffered channels and give them a size of 100; no particular reason why I'm picking 100, it's just a nice round number. Then I create a worker as a concurrent goroutine and give it the two channels that it needs. I'm then going to fill up the jobs channel with 100 numbers: we just iterate from 0 to 99 and put all of those numbers on the jobs channel, and since it's buffered we're not going to block, so that's fine. Once they're on there, the worker will concurrently start pulling them off one at a time, calculating each Fibonacci number, and putting it onto the results channel. I'm going to close jobs because we're finished putting things onto that channel, and we're the sender here, so it's okay to close it. I then expect 100 items to eventually appear on the results channel, which will be the first 100 Fibonacci numbers, so I'm just going to receive each one of those and output it to the terminal.
So this works, and it's fine: it does the first batch really quickly and then gets progressively slower, because it's quite an inefficient algorithm. If we look in Activity Monitor, it's almost maxing out the CPU, very close to 100%, trying its best to calculate each Fibonacci number as the worker pulls one off the jobs queue. That's cool, but what we can do now is add more workers, just by copying that one line. Now we have four concurrent workers, all pulling items off the jobs queue and all pushing back onto the results queue at the end. If we look in Activity Monitor now, it's using almost 400% CPU because it's using multiple cores, so the work will get done faster. Like I said at the beginning, I don't want to get too involved with how this works or how much faster it makes things, because you don't get a massive amount of control over it, but it is pretty cool to see that we're taking advantage of the multi-core processor. Obviously this version doesn't guarantee that the Fibonacci numbers will come out in order, but that's the gist of how worker pools work.

And that is a quick tour of concurrency in Go. It's really easy to do, and hopefully it wasn't too difficult to understand.
If you're interested in a career in software development, or you just want to improve your skills, then you might find it useful to dive further into computer science. There's a lot more to computer science than just programming: it's a very broad field. It covers maths, with topics like linear algebra and probability; it covers hardware, going all the way down to how the CPU works; and it covers algorithms, like this Fibonacci algorithm that I wrote, where you'd learn how to analyse their efficiency and how to write better versions that don't max out your CPU. Having an understanding of these topics really helps when you're writing code, and it can greatly simplify coding projects.

Brilliant.org is a great place to learn more about computer science. They offer curated courses on many things, from the fundamentals all the way up to cool stuff like artificial neural networks. The guided courses go into great detail to build up your knowledge and then walk you through various problems to help you practise and really understand what you're learning. Understanding how memory works, for example, is going to help you write more efficient code, and it'll help you reason about your code because you'll understand what's happening at the operating system and CPU level; pointers in Go will suddenly make a lot more sense. If this sounds interesting, go to brilliant.org/JakeWright; the link is in the description. You can sign up for free, and the first 200 people who go to that link will get 20% off the annual premium subscription.

If you found this video useful, click the like button, hit subscribe if you want to see more tutorials like this one, and I'll see you next time. Thanks for watching.