The Node.js Event Loop: Not So Single Threaded

node.js
16 Oct 2017 · 31:54

Summary

TL;DR: In this talk, Brian Hughes explains how the Node.js event loop and asynchronous processing work, particularly as they relate to multi-threading. He looks back at the evolution of multitasking and threads, explains how Node.js enables parallelism even though it is single-threaded, and describes in detail how the event loop manages requests and what that means for Node.js performance.

Takeaways

  • 📜 The event loop is at the heart of asynchronous processing in Node.js and has a major impact on performance.
  • 🌟 Node.js fundamentally runs single-threaded, which avoids the complexity of multi-threading.
  • 🔄 However, Node.js performs asynchronous work internally in C++ code, using a thread pool when necessary.
  • 💡 Operations such as file system access and DNS lookups run on the thread pool, which can become a performance bottleneck.
  • 🚀 Network and pipe operations use the kernel's asynchronous mechanisms and generally do not run into the thread pool's limits.
  • 🤖 Some C++ code runs on background threads, which is how Node.js parallelizes work automatically.
  • 📊 The event loop plays many roles, including dispatching requests, managing timers, and handling shutdown.
  • 🛠 Understanding the event loop and the thread pool is key to diagnosing performance problems in Node.js.
  • 📈 The performance characteristics of handling multiple requests depend on the kind of operation being performed.
  • 🔧 Understanding how the event loop works lets you optimize the performance of Node.js applications.
  • 🎓 To understand the Node.js event loop in depth, expert talks and blog posts are useful references.

Q & A

  • What is the event loop?

    -The event loop acts as the central dispatcher for every request in Node.js. When a request passes from JavaScript code into C++ code, the event loop manages it, using the thread pool or asynchronous primitives as needed. It also takes on many other responsibilities, such as managing timers and deciding when to shut down.

  • How does Node.js execute asynchronous operations?

    -Node.js executes asynchronous operations through the event loop. The event loop hands requests coming from JavaScript code over to C++ code: synchronous requests are executed immediately, while asynchronous requests are executed using C++ asynchronous primitives. By also using a thread pool, Node.js can process multiple requests in parallel.

  • What is the thread pool?

    -The thread pool is a set of worker threads that Node.js creates automatically and reuses. Operations such as file system access and DNS lookups are processed asynchronously on the pool's worker threads. The Node.js thread pool holds four threads by default, but the size can be changed if needed (see the sketch below).
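
The default of four comes from libuv, and the documented knob for changing it is the UV_THREADPOOL_SIZE environment variable, which must be set before the process starts using the pool. A minimal sketch (the script itself is hypothetical):

```js
// pool-size.js — run as: UV_THREADPOOL_SIZE=8 node pool-size.js
// libuv reads UV_THREADPOOL_SIZE when the pool is first used, so it
// must be set in the environment before the process does any pool work.
const crypto = require('crypto');

const size = Number(process.env.UV_THREADPOOL_SIZE) || 4; // libuv default is 4
console.log(`expecting up to ${size} hashes to run in parallel`);

for (let i = 0; i < 8; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', () => {
    console.log(`hash ${i} finished`);
  });
}
```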

  • What is the difference between the synchronous and asynchronous methods in the Node.js crypto module?

    -The crypto module's synchronous methods execute immediately on the calling thread and block until they complete. The asynchronous methods run in the background, so the calling thread is not blocked and can continue with other work. Using the asynchronous methods lets CPU-intensive operations be processed efficiently, improving overall performance.

  • How do you optimize performance when handling HTTP requests asynchronously in Node.js?

    -When handling HTTP requests asynchronously in Node.js, the key to optimizing performance is to use the asynchronous primitives provided by the OS. Request processing then happens in _kernel space_, enabling fast, efficient I/O that does not depend on the limits of the application's thread pool.

  • How does the problem of race conditions arise?

    -When multiple threads share the same memory space, sharing data between threads is easy, but if several threads try to access the same data at once, a problem called a race condition can occur. This means that when multiple threads read and write the same data concurrently, the result depends on the order of execution. Preventing this requires synchronization between threads, using appropriate locking mechanisms to keep the data consistent.

  • After multi-core processors became widespread, what approaches were taken to improve performance?

    -After multi-core processors became widespread, technologies such as symmetric multi-threading (SMT) and Hyper-Threading were researched and developed to improve performance. These technologies aim to increase throughput by running multiple threads in parallel on a single CPU core. Hyper-Threading in particular lets the OS pass extra information to the processor through new instructions, so the processor can identify code that can safely run in parallel and execute it effectively.

  • What were the main points about Node.js that Brian Hughes made in his talk?

    -Brian Hughes explained how the Node.js event loop and asynchronous processing work, and how they affect performance. In particular, he described in detail how Node.js uses the thread pool to handle asynchronous requests and execute CPU-intensive operations efficiently. He also touched on how Node.js makes effective use of multi-core processors to improve performance.

  • Why doesn't Node.js expose threads?

    -Node.js avoids threads because thread-based parallelism is complex and writing correct multi-threaded code is difficult. Sharing data between threads can also lead to race conditions, which require complicated synchronization to resolve. To sidestep these problems, Node.js adopts a single-threaded model built around the event loop and asynchronous I/O.

  • What is the difference between "cooperative multitasking" and "preemptive multitasking" as described in Brian Hughes's talk?

    -In cooperative multitasking, applications voluntarily yield control so that multiple programs can take turns running. In preemptive multitasking, the OS forcibly interrupts an application and hands control of the CPU to another one. Preemptive multitasking improves system stability and responsiveness and ensures that a misbehaving application cannot take the rest of the system down with it.

Outlines

00:00

📖 Fundamentals of the event loop and asynchronous processing

Brian Hughes, a Technical Evangelist at Microsoft, explains how the Node.js event loop and asynchronous processing work, with a focus on performance as it relates to multi-threading. Along with the historical evolution of computing, he covers the difference between tasks and threads, how processes and memory spaces are managed, and how Node.js deals with multi-threading.

05:01

🔄 The evolution of multitasking and multi-threading

Explains the difference between multitasking and multi-threading and how each evolved. Early Windows and Mac OS used cooperative multitasking; pre-emptive multitasking then arrived and improved system stability and performance. Later, Intel's Hyper-Threading technology emerged as a way to improve parallel execution.

10:02

🤖 Node.js and its approach to multi-threading

Node.js is presented as single-threaded, but in reality it uses threads internally through its C++ code. Node.js uses a thread pool for certain operations, such as file system access and DNS lookups, and the event loop manages these requests. By encouraging asynchronous methods, Node.js makes parallel processing possible.

15:04

🛠️ How the event loop relates to performance

Explains how the event loop is involved in dispatching requests in Node.js and how it affects performance. The event loop manages a variety of tasks and enables efficient processing using asynchronous methods and C++ asynchronous primitives. It also ensures that callbacks in Node.js are invoked correctly and results are returned.

20:04

🔗 Which asynchronous mechanism each API uses

Explains which Node.js APIs use which asynchronous mechanism. Networking, pipes, and DNS resolution use the kernel's asynchronous mechanisms and are not subject to the thread pool's limits. File system operations and DNS lookups, on the other hand, are processed on the thread pool, which is a resource constraint.

25:06

🙌 Wrap-up and questions

Brian Hughes closes with a recap of the presentation, emphasizing the importance of the event loop and asynchronous processing. He offers advice for when you hit performance problems in Node.js and points to resources for further learning. Finally, he lets attendees know where to find him for questions and thanks them for coming.

Keywords

💡Node.js

Node.js is a runtime environment for executing JavaScript on the server side. This video explains how the Node.js event loop and asynchronous processing work and how they affect performance.

💡Event loop

The event loop is the central component of Node.js: it manages asynchronous operations, waits for calls to complete, and then runs their callbacks. It coordinates Node.js's asynchronous I/O, timers, shutdown, and various other tasks.

💡Asynchronous processing

Asynchronous processing is a way of executing tasks without blocking program execution. In Node.js, asynchronous processing is the default and is managed through the event loop. Time-consuming work such as I/O happens in the background while the main thread continues with other tasks.

💡Multi-threading

Multi-threading improves program performance by running multiple threads concurrently. Node.js runs JavaScript on a single thread, but parts of its C++ code use multiple threads internally.

💡Performance

Performance refers to how fast and efficiently a program executes its tasks. In Node.js, performance can be optimized by using asynchronous processing and the event loop appropriately.

💡Task

A task is a unit of work that a program executes. In Node.js, the event loop manages task execution, and asynchronous processing makes it possible to handle multiple tasks concurrently.

💡I/O operations

I/O operations are input and output operations. In Node.js, I/O runs asynchronously and is managed by the event loop, which keeps the program from blocking on I/O.

💡Callback

A callback is a function that runs after a particular operation completes. In Node.js, callbacks are commonly used to handle the results of asynchronous operations. The event loop watches for task completion and invokes the appropriate callbacks at the right time.

💡Thread pool

The thread pool is a group of threads waiting in the background. Node.js uses the thread pool to process certain asynchronous I/O operations efficiently. Because the pool has a limited size, requests can queue up when there are too many of them, which can affect performance.

💡Synchronous processing

Synchronous processing blocks program execution. In Node.js, synchronous work runs on the main thread, and when a time-consuming task such as I/O is processed synchronously, other operations can be blocked.

Highlights

Brian Hughes, a Technical Evangelist at Microsoft, discusses the event loop and asynchronous operations in Node.js.

The talk focuses on the performance implications of asynchronous operations, especially in relation to multi-threading.

A historical overview of multitasking and multi-threading is provided, from single-process systems to cooperative and pre-emptive multitasking.

The limitations of cooperative multitasking, such as reliance on applications to yield control, are discussed.

Pre-emptive multitasking is introduced as a solution to the flaws of cooperative multitasking, allowing the OS to pause and switch between applications.

The evolution of operating systems, like Windows NT and Mac OS X, to include pre-emptive multitasking for stability and performance, is highlighted.

Symmetric multi-threading (SMT), branded by Intel as Hyper-Threading, is explained as a way to squeeze extra parallelism out of each processor core.

The difference between processes and threads is clarified, emphasizing the single-threaded nature of Node.js in terms of JavaScript execution.

Node.js uses a thread pool for certain CPU-intensive operations, managing multiple requests through a preset number of worker threads.

Asynchronous methods in Node.js can leverage the thread pool and C++ asynchronous primitives to run operations in parallel, improving performance.

The event loop in Node.js acts as a central dispatcher for requests, managing both synchronous and asynchronous operations across the main thread and worker threads.

The performance of Node.js applications can be affected by the limitations of the thread pool and the nature of the operations (CPU-bound vs I/O-bound).

Examples using the crypto and HTTP modules demonstrate the practical performance differences between synchronous and asynchronous operations in Node.js.

The talk concludes with recommendations to use asynchronous methods in Node.js for better performance and the importance of understanding the event loop for optimizing applications.

Resources for further learning about the event loop and Node.js performance are mentioned, including talks by Bert Belder and Sam Roberts, as well as a blog post by Daniel Khan.

Transcripts

00:00

Alright, hey everyone, thanks for coming and joining me; we'll go ahead and get started. My name is Brian Hughes, I'm a Technical Evangelist at Microsoft these days, and today we're going to talk about the Node.js event loop. Specifically, we're going to talk about how asynchronous code works in Node.js and what that means for performance, especially as it relates to multi-threading. First we'll go through a bit of a history of multitasking and talk about what it really is. I think a lot of us have at least a vague idea of what multitasking and multi-threading are, but there's some nuance that's important to understand for the purposes of this talk.

00:39

If we go way back in time, we only had a single process, at least in the personal PC world. Think back to the days of MS-DOS or the original Apple OS, on the Apple IIc and machines like that, before the Mac. These were command-line interfaces, and they could only run a single thing at a time. There was no concept of running more than one piece of code at the same time: no background tasks, no running multiple programs. You would start up DOS, and DOS (every operating system is a program in and of itself) would run until you told it to run another program. At that point the OS would actually stop running and the other application would start; when it was done, it would start the OS back up again. This was super limited, and we wanted the ability to run multiple things at a time.

01:28

So we created this concept called cooperative multitasking, and it made the world quite a bit better. It's a model for running more than one program at the same time, and we first saw it introduced in the early PC computing days, in the early days of both Windows and the early Mac OS systems. The way cooperative multitasking works is that an application goes along, running and doing its thing, and eventually gets to a point where it says "all right, I can go ahead and take a break now." In the application you would usually call a method named yield (there are a few other variants that do the same thing), so the application literally had code written into it saying "okay, I can yield now and let something else run." When an application called yield, that signaled the operating system, which would start running again, decide who needed to run next, and go run something else. If there was nothing else to run, the operating system itself got a chance to run.

02:31

Of course, there's a flaw in this that you may have already noticed: it depends on the user's application actually calling yield. If the application didn't call yield, that single application would just keep running and running, and nothing else would get a chance to run. For those of you who remember the Windows 95 and 98 days: you'd get an application that started misbehaving, say it crashed, and when that happened it wouldn't just take the app down, it would take your entire system down with it. You could never grab a frozen window and move it around the screen; it would ghost and completely destroy your display. This is the reason why. Under the hood, all the versions of Windows that were based on DOS, as well as all the original versions of Mac OS up through Mac OS 9, used this system, and when an app misbehaved there was no way for the operating system to recover. So this was an improvement, we could run multiple things, but it had problems, instability being the primary one.

03:35

We wanted to do something better, and that's when we came up with the idea of pre-emptive multitasking. It works a little differently: we no longer rely on an application saying "hey, I can pause now." Instead, the operating system itself has the ability to pause any given application at any time. It pauses an application, saves its state (its memory, the CPU registers, and so on) somewhere else, and loads another application in its place. Now the operating system is handling everything; it doesn't depend on user code at all. Pre-emptive multitasking had been around for a long time in the UNIX world, and especially the mainframe world, but it made its way into personal computing a little later. Microsoft first introduced it with the Windows NT kernel, and indeed this was one of the big selling points of Windows NT 4 that made it popular among businesses: a misbehaving application won't crash your operating system, making it a lot more secure and stable. Windows NT 4 had it, then Windows 2000, and most importantly Windows XP. When Windows XP was released it was a consumer OS targeted at everyone, but it used that server kernel, the NT kernel that came from 2000 and NT 4, not from Windows 95, 98, and ME, so all of a sudden Windows got a lot more stable. It's the same story in the Mac world: Apple completely rewrote their OS for Mac OS X, and 10.0 was a complete rewrite. They got rid of all of the old Mac OS and replaced it with what was basically NeXTSTEP, an evolution of FreeBSD. With these OSes we finally had pre-emptive multitasking, and things got quite a bit more stable, more performant, and safer.

05:29

When the CPU is doing pre-emptive multitasking, the OS is pausing one app, saving its state, allowing another to run, and flipping back and forth between two or more applications fairly regularly, so the applications become interleaved. Even though this could be running on a single CPU (when these OSes were written there were only single-core CPUs), it still made it look like a whole bunch of applications were running at the same time. It was basically a way of faking it. This technique works pretty well, and we still have pre-emptive multitasking kernels today.

06:05

But there was another evolution that came a little later and made things even better. A lot of people were researching how to improve performance, especially once we got multi-core processors, which AMD released in the mid-2000s, and we started asking how to harness those multiple cores. Out of a lot of research in this area we came up with symmetric multi-threading. Intel was first to market with this technology and branded it as Hyper-Threading, so if you've heard of Hyper-Threading, it's really SMT. What happens here is that the operating system can take advantage of new assembly-level instructions in the x86 processor itself to give the processor more information about how to run things in parallel. Inside a modern processor, an instruction executes in stages: you give it an assembly instruction that says "load this value" or "multiply these things together," and it breaks that down into steps inside what's called a pipeline. Some of those stages actually have multiple copies of the hardware that does the work. For example, a single modern processor has a floating-point unit for floating-point multiplication, but there's actually more than one of them, usually between two and six, depending on the processor. Using these new instructions, the OS can tell the processor "these two pieces of code coming in are from different threads, so you don't have to do all the normal safety checks; just run them in parallel if you can." Now, this isn't two completely separate CPU cores or separate processors, so you don't get a 2x speed-up; you get a little bit, ranging from basically nothing up to about 15 to 20 percent, depending on the kind of code you're writing. With these systems we're finally able to run a lot of different code simultaneously.

08:00

Now, you might notice I did a little switch: I was talking about multitasking and I switched to talking about multi-threading. They're two different words, and they do mean different things. A task is basically the same thing as a process; we use those terms interchangeably. Task is the more generic concept and process is the more specific concept in the kernel, but they're basically the same thing. Threads, though, are very different, and it's really important to understand the differences, at least if you're looking at parallel performance. A process is a top-level execution container. We can think of it as an application: an application is a process. (It's technically possible for an application to have more than one process, but usually it's about one to one.) Each process gets its own memory space dedicated just to it: the operating system starts the process, gives it a chunk of memory, and says "this is the memory you're allowed to use." Processes can't touch memory given to any other process, unless there's a bug in the operating system, at which point all kinds of things go wrong. That's actually how viruses worked, by the way: they try to break out of this little memory container. But assuming you're not a virus writer, which hopefully none of us are, we're playing safely inside this memory space.

09:22

What this means is that if we have two or more processes and we want them to communicate with each other, we have to do some work. We have to use something called inter-process communication, or IPC. There are a variety of ways of doing it, but it's typically done using a socket. You could use a TCP socket, but there's typically a lot of overhead, so we use something else, a UNIX domain socket; it's basically the same thing, though, and works the same way. The key similarity to remember is that to send a message we first have to bundle it up: convert it into a buffer, put it inside a packet, and transmit it somewhere else, where the receiver takes that packet and disassembles it, just like a networking request. This all takes time, and it limits what you can send. In the JavaScript world, to communicate between processes we usually have to call JSON.stringify to send an object across. If you use JSON.stringify a lot, you may have noticed it can be kind of slow, depending on what you're stringifying, and there are certain things that don't survive it; a function inside your object, for example, won't make it across. So it's kind of limited and the performance isn't great, but processes give us a lot of safety.
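
For illustration, here is a minimal sketch of this kind of Node IPC using child_process.fork, whose messages are serialized before crossing the process boundary; the file names and payload are hypothetical, not from the talk:

```js
// parent.js (hypothetical file name)
// fork() starts a second Node.js process and opens an IPC channel.
// Every message is serialized on the way out and deserialized on the
// way in, which is exactly the bundling/unbundling overhead described.
const { fork } = require('child_process');

const child = fork('./child.js');

child.on('message', (msg) => {
  console.log('parent received:', msg);
  child.disconnect(); // close the IPC channel so both processes can exit
});

// The object is copied, not shared: mutating it here afterwards is
// invisible to the child.
child.send({ question: 'ping', payload: [1, 2, 3] });
```

```js
// child.js (hypothetical file name)
process.on('message', (msg) => {
  process.send({ answer: 'pong', echoed: msg.payload });
});
```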

10:42

On the flip side, there are threads. A thread always runs inside a process; every thread has a parent process that it's attached to. A single process can have multiple threads inside it, or just one; by default you get one. Because threads live inside a process, all of those threads share the same memory. So if you want to share data back and forth between two threads, you actually don't have to do anything: that variable is just sitting in memory, and both threads reference the same variable. You can create a global variable from one thread and directly read it from the other, so it's really, really performant. But there's a bit of a catch: it turns out we still have to do some synchronization whenever we share data between threads.

11:33

As a thought experiment, let's say we have two threads. Thread A wants to write to a global variable, we'll call it foo, and thread B wants to read from that variable. Let's say this is a modern system with multiple cores, so both threads are running at the exact same time. The question is: what happens? The answer is, we actually don't know. The first time you run it, thread A might write to that variable before thread B reads it. Rerun the exact same code on the exact same machine, and it might happen the other way around: thread B might read the variable before thread A writes to it. So you get a different result every time you run it, and it makes your application unpredictable. This is a bug in your code, specifically a type of bug called a race condition. To avoid race conditions we have to write manual code that synchronizes when the two threads access the data: "all right, thread B, wait until thread A says it's safe to read this variable." So we're almost back to the cooperative multitasking days, where we have to write manual code to coordinate between threads; it's actually more complicated than cooperative multitasking. For any modern app that does multi-threading, this kind of coordination can be really tricky. It is hard to write correct multi-threaded code that is bug free; even for a seasoned developer this can be tricky.
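
Node exposed no threads to JavaScript when this talk was given, but the later worker_threads module (Node 10.5+) can demonstrate the exact race described here. A minimal sketch, with an illustrative iteration count:

```js
// race.js — four workers increment one shared counter.
// The read-modify-write below is not atomic, so increments get lost
// and the final count usually differs from run to run.
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new SharedArrayBuffer(4); // one 32-bit slot
  const counter = new Int32Array(shared);
  const done = [];
  for (let i = 0; i < 4; i++) {
    const w = new Worker(__filename, { workerData: shared });
    done.push(new Promise((resolve) => w.on('exit', resolve)));
  }
  Promise.all(done).then(() => {
    // Expected 4 * 100000 = 400000; a racy run typically prints less.
    console.log('final count:', counter[0]);
  });
} else {
  const counter = new Int32Array(workerData);
  for (let i = 0; i < 100000; i++) {
    counter[0] = counter[0] + 1;    // racy: separate load, add, store
    // Atomics.add(counter, 0, 1);  // the synchronized fix
  }
}
```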

12:57

If we look at modern languages and runtimes, there's been a lot of experimentation to make threads easier to use. Apple has done some interesting work, as have some others, and Node.js has a very specific answer to this as well. The answer Node has for "how do we deal with multi-threading?" is: we're just not going to do it. We're not even going to allow you to have multiple threads to begin with. This is the sense in which Node.js is single-threaded; we don't want to open that can of worms. However, the reality is a little trickier than that. We say it's a single thread, and that's true, except for when it's not actually true, which does happen.

13:39

What I mean is that all of the JavaScript (every single JavaScript file you wrote, everything in your modules, and also the JavaScript that's in Node.js itself, because Node does include JavaScript in addition to V8) and also the event loop all run in one single thread, which we typically call the main thread. That's what we mean when we say JavaScript is single-threaded: all of these things run inside the same thread. However, there's more to Node.js: there's a fair amount of C++ code in Node.js too. I forget exactly what the ratio is, but I think it's about two-thirds JavaScript to one-third C++ last time I looked, so that's a pretty good chunk. C++ is different, because C++ has access to threads, but it depends on how it's being run. If you have a JavaScript method you're calling from Node that's backed by a C++ method, and it's a synchronous JavaScript call, that C++ code will always run on the main thread. However, if you're calling an asynchronous method from JavaScript and it's backed by some C++, sometimes it runs on the main thread and sometimes it doesn't. It actually depends on the context in which you make the function call.

14:58

To talk about this a little more, we'll go through some examples, working from the outside in. First we're going to look at the crypto module. I chose it because it has a lot of methods, some synchronous and some asynchronous, and they're very CPU intensive: they do a lot of math and take a lot of time. We'll start with the pbkdf2 method, which I always struggle to say correctly. This is a method for hashing: we feed it a string, and it gives us a hash back. It's really important for a lot of security-related code; it's used in parts of TLS communication, HTTPS, secure-certificate type stuff. It's also used when we have, say, a password from a user and want to store it in a database. I think (I hope) everyone knows you never want to store a password directly in a database. That's a major security hole, because if an attacker manages to compromise that database, all of a sudden they have everyone's passwords. So instead of storing the password directly, we hash it: we pass it through this method right here. This is the currently recommended method for hashing passwords, and part of what makes it secure is that it's deliberately hard to compute. It intentionally takes a long time to produce an answer, so an attacker can't just sit there making guesses all day. So I used it as an example; the sample code, by the way, is more or less straight from the Node.js docs, with a few of my own little tweaks.

16:31

We'll start by calling the synchronous version of this method two times: once, then again after the first finishes. When we run this code, we get an execution timeline that is exactly what we'd expect for synchronous code: we call it once, it starts, it runs to completion, and once it's done we call the next one, which runs to completion. The whole thing took about 275 milliseconds. Cool. So that's what synchronous code looks like.
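
The talk shows the code on a slide; a minimal reconstruction along the lines of the Node.js docs example, with illustrative parameters, looks like this:

```js
// sync-hash.js — two back-to-back synchronous hashes.
// Each call blocks the main thread until its hash is computed.
const crypto = require('crypto');

const NUM_REQUESTS = 2;
const start = Date.now();

for (let i = 0; i < NUM_REQUESTS; i++) {
  crypto.pbkdf2Sync('secret', 'salt', 100000, 512, 'sha512');
}

console.log(`sync total: ${Date.now() - start} ms`);
```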

17:02

Now we're going to make one single change. This is the exact same code we saw earlier, except that instead of calling the synchronous version of pbkdf2 we call the asynchronous version; everything else is exactly the same. When we run this code, the execution timeline shows the same two calls taking about the same time each, but Node was actually able to run them in parallel, so the whole thing took about 125 milliseconds, quite a bit faster than the synchronous version. What this tells us is that we didn't write any threading code in JavaScript, we just wrote normal, regular old JavaScript, and yet Node was able to run these two operations in parallel. It turns out that under the hood it ran them in separate threads, via the C++ methods Node uses to do the computation. By the way, you've probably heard the recommendation that in Node you should always use the asynchronous methods whenever possible. This is exactly why: by using the asynchronous methods, in a lot of cases Node can automatically run things in parallel for you, but if you use the synchronous methods you never give Node the chance to do that. So you always want to use asynchronous methods, because a lot of the time you can get some pretty big performance benefits.
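
The asynchronous variant of the same sketch (again with illustrative parameters); the only change is pbkdf2 with a callback instead of pbkdf2Sync:

```js
// async-hash.js — the same two hashes, dispatched in parallel.
// The hashing runs off the main thread; each callback fires on the
// main thread when its hash completes.
const crypto = require('crypto');

const NUM_REQUESTS = 2;
let completed = 0;
const start = Date.now();

for (let i = 0; i < NUM_REQUESTS; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 512, 'sha512', (err) => {
    if (err) throw err;
    if (++completed === NUM_REQUESTS) {
      console.log(`async total: ${Date.now() - start} ms`);
    }
  });
}
```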

18:26

So that's two requests, both synchronous and asynchronous. Now let's increase this from two requests to four. Where two requests took 125 milliseconds, four requests took 250 milliseconds. It's the exact same asynchronous code, we just changed the number of requests (that one constant at the top), and it took a lot longer. The reason is that I ran this code on this exact laptop, which has a dual-core processor. Any time you're doing something that requires a lot of math, a lot of work in the CPU, you're bound by how fast the CPU can do those computations, and with only two cores, that's our bottleneck. I made four requests, but with only two cores, what the processor actually ends up doing is assigning two of those four threads to one core and the other two to the other. Inside each core it does typical pre-emptive multitasking: run one thread for a little bit, pause it, run the other thread for a little bit, pause it, ping-ponging back and forth until they're both done. That makes it look like we ran them in parallel, which is why they start at the same time and end at the same time, but because it's constantly pausing to switch back and forth, it took double the time. And by the way, this is true in any language, it's not specific to Node.js. Write Java or C++ or anything else and you'll see this exact same performance profile.

19:52

Now let's increase this from four requests to six. Okay, this is a more interesting graph: it's no longer uniform, and there's this weird little tail sitting at the end. If I superimpose these runs, we hopefully start to see a bit of a trend. There are four threads that ran exactly like before (the first four requests of the six-request run behaved exactly the same as when we only had four), and for those last two it's almost like we took the timeline from the two-request run and stuck it on the end. There's a reason for this. These hashing operations in C++ are done on a background thread, but Node doesn't spin up a new thread for each request. Instead, when Node first starts up (well, technically, the first time you make a request for something that's going to go on a thread), it automatically spins up a preset number of threads, which defaults to four, and constantly reuses those threads for all of its work. This set of threads is called the thread pool in Node.js. So the reason we saw four that ran together and then the long tail is that we had the default of four worker threads in the thread pool.

21:06

What Node.js does when we make these requests is: the first request comes through and Node says "okay, I've got this, I'm going to assign it to the first thread in the thread pool." The second request goes to the second thread, the third to the third, the fourth to the fourth. When that fifth request comes through, Node sees that all of its worker threads are busy, so it sticks the request in a queue until one of the worker threads becomes available, and the same thing happens with the sixth request. Once the first request finishes, Node has one of those threads available again, picks off one of the queued requests, and assigns it to that thread. That's why it really does look like it did four operations and then two: that's actually what it did under the hood. This is a case where we're seeing the limitation of the thread pool.
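
To reproduce the four-then-two pattern, it should be enough to raise the request count in the async sketch above to six and log each completion; a sketch (timings will vary by machine):

```js
// pool-tail.js — six async hashes against the default four-thread pool.
// Expect four completion times close together, then two stragglers.
const crypto = require('crypto');

const start = Date.now();
for (let i = 0; i < 6; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 512, 'sha512', () => {
    console.log(`request ${i} finished at ${Date.now() - start} ms`);
  });
}
```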

21:55

All right, let's move on to our next example, and that is the http module. We have a little bit of sample code here using the http module. What it does is download my profile photo from my personal website. I chose this specific file because it's rather large, about eight hundred kilobytes. My website is hosted in Azure, which works well for this test because throughput inside Azure is really consistent (it's consistent in Amazon and the like as well). The other reason I wanted to use it is that I control this system, which meant I was able to disable the CDN; there was no CDN sitting in front of this. CDNs are great for performance, they do lots of caching, you download files closer to where you are geographically, and you decrease your bandwidth costs, but they're not great for this test, because CDNs make the timing unpredictable, which is not good for benchmarks. We wanted to download something very predictable, so I chose this file. What we're doing here is downloading it while listening to the data event, to make sure we actually download all of the data. Node is kind of smart here: if we're not listening to the data event at all, it will actually skip downloading part of it. Then we wait for the end event and we time it: the timing runs from when we call http.request to the time the end event is fired.
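
A minimal reconstruction of that benchmark; the URL is a placeholder for the large (~800 KB) image the talk uses:

```js
// download.js — time one full HTTP download.
const http = require('http');

const start = Date.now();
const req = http.request('http://example.com/profile-photo.jpg', (res) => {
  res.on('data', () => {});        // consume the body so it all downloads
  res.on('end', () => {
    console.log(`download took ${Date.now() - start} ms`);
  });
});
req.end();
```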

23:15

Once again we start with two requests, and when we look at the execution timeline, it took almost exactly the same amount of time to download that file twice, which is what we want to see: about seven hundred milliseconds. Now we do the same thing we did before and increase the number of requests to four, and we see they also all took about the same amount of time, again about seven hundred milliseconds. It did not increase the time it takes to download this file, which is different from the results we saw with crypto. The reason for this has nothing to do with Node; it's all just computer architecture and bottlenecks. Whenever we're downloading a file, and especially in this case, where we're only saving it to memory, not writing it to the hard drive, the limitation is the network itself. While downloading a file like this, our computers are basically sitting there doing nothing most of the time; every once in a while we get a little bit of data from the network and go process it. Since we're not limited by the number of CPU cores (our CPU is sitting there doing nothing), we don't hit that bottleneck, so doing this four times takes the exact same amount of time as two. The workload is just different.

24:26

All right, so we'll increase this to six like we did before, and this is a little more unexpected: compared to the previous slide, you'll notice it still took about 700 milliseconds, and there's no tail. This is different from crypto. It turns out this is actually not subject to the limitations of the thread pool. The reason is that inside of Node, whenever possible, it will use C++ asynchronous primitives under the hood. It turns out it is actually possible to do asynchronous coding inside C++ in certain cases; this is something provided by the operating system itself. The way it works looks a little different from JavaScript, but it's roughly the same idea: we tell the OS, we tell the kernel, "I want to go ahead and download this resource," and then the kernel actually manages the download. It's happening in the kernel, not inside your application. Then what we can do is ping the kernel and ask, "hey, are you done with this request yet?" Inside Node we just continually ask "are you done yet? are you done yet?", and eventually it says yes; once it's done, we can call another method that says "all right, give me the results of the thing I requested." Now, since this is part of the kernel, we have to use a different mechanism on each OS, because they each have different ways of doing this: on Linux this mechanism is called epoll, on macOS it's called kqueue, and on Windows it's called GetQueuedCompletionStatusEx. Whenever we make these asynchronous C++ calls, the operating system is doing it all for us, so we don't really have to do any work in C++ and we don't have to assign it to a background thread. When we use this, it's actually happening on the main thread itself, and thus we're not limited to the number of threads in the thread pool. Cool.

26:15

So that's how that whole thing works. How does it relate back to the event loop? It turns out the event loop sort of acts like a central dispatch for all of these requests. This is of course an oversimplification, the event loop actually does a lot of different things, but specifically for the purposes of performance, and especially threading performance, we can think of the event loop as basically a director. Whenever we make one of these requests in JavaScript, it does a lot of work in JavaScript itself, but eventually it gets to the point where it crosses from JavaScript into C++, and once it crosses over to that side, the request goes to the event loop. The event loop looks at the request (once again, I'm oversimplifying; there's a lot more going on under the hood) and asks: is this a synchronous method? Okay, cool: within the thread I'm running in, ship it off to some other C++ code that goes and does that request right then and there. If it's an asynchronous request, then it asks: is this something I can run using a C++ async primitive? If so, it ships it off directly to the bit of C++ code that handles that, inside the main thread. If it can't be run using a C++ async primitive, then it has to go onto a background thread, so it goes into that whole threading logic and gets queued up to be sent over to one of those threads. The event loop is the one that manages all of this. Then, whenever each of these calls finishes, it signals back to the event loop, either from one of the threads or directly from the C++ code in the async-primitive case, and the event loop says "all right, this is done" and notifies back across V8 into JavaScript land: "this operation is done, and here's the result." Inside JavaScript, Node then calls all of the callbacks that are registered and waiting for that result. That's how we get the result back; it's constantly going around, and we can basically think of it like a circle. Like I said, the event loop does a lot of other things as well: it manages timers, it manages when it's time to shut down, and a bunch of other things like that too.
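
As a mental model only, the dispatch decision the talk describes might be sketched like this in JavaScript-flavored pseudocode; none of these function names exist in Node, whose real logic lives in C/C++ inside Node and libuv:

```js
// Pseudocode sketch of the event loop's dispatch decision.
function dispatch(request) {
  if (request.isSynchronous) {
    // Run the backing C++ code right here, on the main thread.
    return runOnMainThread(request);
  }
  if (hasKernelAsyncPrimitive(request)) {
    // e.g. sockets: hand the work to epoll/kqueue/GetQueuedCompletionStatusEx
    // and poll for completion as part of each loop iteration.
    registerWithKernel(request, (result) => fireJsCallbacks(request, result));
  } else {
    // e.g. fs, dns.lookup: queue it for a thread-pool worker, which
    // signals the event loop when it finishes.
    threadPool.enqueue(request, (result) => fireJsCallbacks(request, result));
  }
}
```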

28:27

The real question, of course, is which APIs use which asynchronous mechanism; this is what we want to know to understand the performance. By the way, I kind of shamelessly borrowed this slide from Bert Belder, who created it for his own talk on the event loop. He actually works on the event loop, so he knows this stuff a lot better than I do. The key thing is that kernel async covers pretty much all of our networking: networking, most of the time, is done using the kernel async mechanism, so we're not subject to the limits of the thread pool. Same thing with pipes, most of the time, and the same thing with all of the DNS resolve calls. But there are also some things we have to run in the thread pool. Everything from the file system module is run in the thread pool; this is the big thing to keep in mind. It turns out there just aren't any C++ asynchronous primitives for file I/O, so whenever you're doing a lot of file system calls, a whole bunch of file I/O, you may run into the limitations of the thread pool. Now, more than likely you're actually going to be limited by just how fast your hard drive is and you won't run into this, but it is hypothetically possible to run into these thread pool limitations. It turns out that DNS lookup itself has to be run in the thread pool as well, and there are also a couple of edge cases for pipes, similar to the file system stuff.
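
The DNS split is easy to see in the API, and worth remembering; a small sketch contrasting the two calls (host name chosen arbitrarily):

```js
// dns.lookup uses the system resolver (getaddrinfo) on the thread
// pool; dns.resolve4 uses a network-based resolver that goes through
// the kernel async mechanism. Heavy dns.lookup traffic competes with
// fs calls for the pool's threads; dns.resolve4 does not.
const dns = require('dns');

dns.lookup('nodejs.org', (err, address) => {
  if (err) throw err;
  console.log('lookup (thread pool):', address);
});

dns.resolve4('nodejs.org', (err, addresses) => {
  if (err) throw err;
  console.log('resolve4 (kernel async):', addresses);
});
```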

29:39

Now, some of this is also dependent on which OS you're running on, because like I said, each OS provides different asynchronous primitives. On the UNIX side: all of the UNIX domain sockets, which I mentioned earlier for IPC calls and things like that; all TTY input (TTY, if you're not familiar with that term, is basically the console, so standard out, standard error, and standard in; all of your console.log and console.info output goes through this TTY module under the hood); same thing with UNIX signals, so SIGINT, SIGTERM, things like that, if you're familiar with those; and finally child processes, so your exec, spawn, things like that. Those are all handled using kernel async mechanisms on UNIX. But the reverse is true on Windows: on Windows, child processes and TTY are all handled using threads, just because GetQueuedCompletionStatusEx, the Windows mechanism, doesn't provide those primitives. There are also a couple of edge cases for TCP servers on Windows that have to run on background threads instead of using kernel async mechanisms.

30:40

So if you're running your app and you're getting some really weird performance numbers, especially if you're looking at a timeline and thinking "wait, why did this happen here? I thought it should have happened there," one of the first things I'd recommend looking at is what you're calling: could this possibly be a thread pool limitation? Especially if you're seeing that weird long tail I showed in the graph earlier. Now, there's a whole bunch of other things that can cause performance issues, so I don't want to say this will be your issue; performance in Node is complicated, of course. But this can certainly be a part of it.

31:09

If you want to learn more about this, there are two great talks, sort of the classic talks about the event loop: one by Sam Roberts, who's sitting right there, and one by Bert Belder, and both of them kind of start by describing the event loop from the inside out, how it's constructed and how it operates. They're a great way to learn more about this. By the way, I'll put these slides up on Twitter, so you don't have to worry about memorizing them or taking them down right now. There was also a great blog post by Daniel Khan that summarizes these as well. All right, and with that, if anyone has any questions, I'm going to be at the Microsoft booth; you can find me there and ask me all kinds of questions about Node or TypeScript or all sorts of other stuff like that. And with that, I want to thank you all for coming.

[Applause]

Related Tags
Node.js, Event Loop, Asynchronous Processing, Multi-threading, Brian Hughes, Microsoft, Performance, JavaScript, C++, For Developers