How Much Memory for 1,000,000 Threads in 7 Languages | Go, Rust, C#, Elixir, Java, Node, Python
Summary
TL;DR: This blog post explores the memory consumption of handling a million concurrent tasks across various programming languages, including Rust, Go, Java, C#, Python, Node.js, and Elixir. The author compares asynchronous and multi-threaded programs, noting significant differences in memory usage. Through a synthetic benchmark that launches N tasks, each waiting 10 seconds before the program exits, the author observes that Rust with the Tokio async runtime kept the lowest memory footprint, while Go's goroutines started small but lost their advantage at higher task counts. Surprisingly, C# managed to stay competitive with Rust at high task counts, while Node.js and Elixir consumed more memory. The author suggests that real-world tests involving tasks like websocket connections would provide a more accurate representation of language performance.
Takeaways
- 🧠 The video discusses a benchmark comparing memory consumption between asynchronous and multi-threaded programs in various programming languages such as Rust, Go, Java, C#, Python, Node.js, and Elixir.
- 🔍 The author highlights a significant difference in memory consumption among different programs, with some consuming over 100 megabytes and others reaching almost three gigabytes.
- 📈 A synthetic benchmark was created to test the performance of programs handling a large number of network connections, with the number of tasks controlled by a command-line argument.
- 🤖 Rust was tested with three different programs using traditional threads, async with Tokio, and async with async-std, showing the language's capabilities with different threading models.
- 🐹 Goroutines were used to demonstrate Go's ability to handle concurrency, with a WaitGroup used to manage the completion of tasks.
- 🌟 Java's introduction of virtual threads in JDK 21 was noted as a significant step forward, bringing it closer to the capabilities of Go routines.
- 🚀 C# was shown to have first-class support for async/await, with tasks scheduled on a managed runtime rather than mapped one-to-one onto OS threads.
- 📦 Node.js was pointed out as having a high memory footprint in the author's earlier real-world comparison of connection-handling programs, though in this benchmark each of its tasks is just an event-loop timer.
- 🐍 Python demonstrated surprisingly low memory consumption, especially when using asyncio, which was unexpected compared to other managed runtimes.
- 🔑 The importance of considering factors such as task launch time and communication speeds, in addition to memory consumption, was emphasized for a comprehensive performance evaluation.
- 🔍 The author suggests that the benchmark should be expanded to include more realistic use cases, such as handling websocket connections or open TCP connections, to better reflect real-world programming challenges.
Q & A
What is the main topic of the blog post discussed in the transcript?
-The main topic of the blog post is a comparison of memory consumption between asynchronous and multi-threaded programs across various popular programming languages when handling a large number of concurrent tasks.
Why did the author create a synthetic benchmark for the comparison?
-The author created a synthetic benchmark because the existing computer programs they needed to compare were complex and differed in features, making it difficult to draw meaningful conclusions from a direct comparison.
What programming languages were included in the memory consumption comparison?
-The programming languages included in the comparison were Rust, Go, Java, C#, Python, Node.js, and Elixir.
What is the significance of the GitHub repository mentioned in the transcript?
-The GitHub repository mentioned is where the author has shared the synthetic benchmark programs written in various languages, allowing others to contribute and test the performance of their own languages.
What was the author's observation regarding Node.js in terms of memory consumption?
-The author observed that Node.js has a high memory consumption, especially when handling a large number of connections, with some programs consuming almost three gigabytes of memory.
What is the difference between traditional threads and async/green threads as mentioned in the Rust programs?
-Traditional threads in Rust are actual OS threads, while async tasks (green threads) are managed by the runtime and are lighter on system resources, as seen with the async implementations using Tokio and async-std.
What is the concept of 'virtual threads' introduced in Java JDK 21?
-Virtual threads in Java JDK 21 are a preview feature similar in concept to goroutines, allowing a large number of lightweight threads that are managed by the JVM rather than mapped one-to-one onto OS threads.
What is the author's opinion on the memory consumption of .NET (C#) in the benchmark?
-The author was surprised that .NET (C#) had the worst idle memory footprint in the benchmark, requiring a significant amount of memory (about 131 MB) even before any tasks were launched.
How did the author find Python's performance in the memory consumption benchmark?
-The author found Python's performance surprising, as it fared well in terms of memory consumption, using only half the memory of Java or Node.js.
What was the author's final observation on the memory consumption of concurrent tasks?
-The author observed that a high number of concurrent tasks can consume a significant amount of memory, and that runtimes with high initial overhead, like C#, can handle high workloads more effectively.
Outlines
🔬 Comparative Memory Consumption in Programming Languages
The speaker initiates a discussion on the memory requirements for running one million concurrent tasks, highlighting the significant variance in memory usage observed across different programs. They delve into a comparative analysis of memory consumption between asynchronous and multi-threaded programs in languages like Rust, Go, Java, C#, Python, Node.js, and others. The speaker recounts a previous experience comparing the performance of network connection handling programs, noting a 20x difference in memory consumption. The discussion leads to the idea of creating a synthetic benchmark for a more controlled comparison, and the speaker invites the audience to contribute to an ongoing project on GitHub aimed at building an interpreter and testing language performance comprehensively.
🚀 Benchmarking Concurrency in Programming Languages
The speaker describes the creation of a benchmark program across various programming languages to launch and manage concurrent tasks. The program waits for 10 seconds per task before exiting, controlled by a command-line argument. They detail the implementation in Rust using traditional threads, async with Tokio, and async-std, as well as Go's use of goroutines and Java's adoption of virtual threads in JDK 21. The speaker also touches on C#'s async-await support and Node.js's event-driven architecture, before moving on to Python's asyncio and Elixir's async capabilities. The test environment and hardware used for these benchmarks are also mentioned, emphasizing the importance of understanding the memory footprint of each language's runtime.
📊 Memory Footprint Analysis Across Different Runtimes
The speaker presents the initial memory footprint results of the benchmark, noting the differences between compiled languages like Go and Rust, and managed environments or interpreted languages like Node.js, Java, .NET, Python, and Elixir. They express surprise at certain outcomes, such as .NET's high memory consumption despite not increasing significantly with the addition of tasks. The speaker also points out that the memory consumption of native threads in Rust is quite low compared to other runtimes, and that async tasks or virtual threads might offer a lighter alternative. The summary underscores the significant memory differences between compiled and managed runtimes, as well as the surprising performance of certain languages like Python.
📈 Scaling Up: Memory Consumption at 10K and 100K Tasks
As the number of tasks increases to 10K and then 100K, the speaker observes the memory consumption trends. They note that non-green threads have been excluded from the benchmarks due to system limitations, and that the memory usage of some languages like C# and Java does not change significantly even with the increase in tasks. The speaker hypothesizes that certain runtimes may be reserving memory or have high initial overheads that allow them to handle high workloads more efficiently. They also express skepticism about the accuracy of the results for some languages, suggesting that more complex tasks or interactions might yield different outcomes.
🏁 Final Benchmark Observations and Recommendations
The speaker concludes the benchmark by discussing the memory consumption at one million tasks, noting the system limitations that prevented some languages from spawning that many threads. They comment on the surprising performance of C#, which showed an increase in memory usage only at this extreme level. The speaker also points out that Go's memory consumption is significantly higher than expected, contradicting the common perception of it being lightweight. They emphasize that while the comparison focused on memory consumption, other factors like task launch time and communication speeds are also crucial. The speaker suggests future benchmarks should include more realistic tasks like handling websocket connections to better simulate real-world usage.
👏 Appreciation for the Benchmark and Final Thoughts
In the final paragraph, the speaker expresses appreciation for the work done on the benchmark, commending the author, Peter, for his efforts. They highlight the importance of considering the impact of simple programming constructs like for-each loops on performance, using the example of a chat client built with Node.js. The speaker suggests that even minor details in code can significantly affect a program's efficiency, and encourages the audience to think critically about their programming choices. They end with a call to clap for Peter and to acknowledge the value of the insights gained from the benchmark.
Keywords
💡Memory Consumption
💡Concurrent Tasks
💡Asynchronous Programming
💡Goroutines
💡Virtual Threads
💡Garbage Collection
💡Benchmarking
💡Node.js
💡Rust
💡C#
💡Elixir
Highlights
Comparison of memory consumption between asynchronous and multi-thread programs across popular languages.
Performance comparison of computer programs handling a large number of network connections.
Memory consumption can vary significantly, with some programs exceeding others by 20x.
Node.js consumes more memory at 10K connections compared to others.
Introduction of synthetic Benchmark to compare performance directly.
Invitation to contribute to the synthetic Benchmark on GitHub.
Building a server for full interpretation and remote compilation as a test for language performance.
Rust programs created using traditional threads, async with Tokio, and async with async-std.
Goroutines as the building block for Go's concurrency, with a WaitGroup to await completion.
Java's introduction of virtual threads in JDK 21, similar in concept to goroutines.
C#'s first-class support for async/await, with tasks that are not backed one-to-one by OS threads.
Node.js's memory consumption in this benchmark stays low because each task is just an event-loop timer.
Python's asyncio and its surprisingly low memory footprint.
Elixir's async capabilities and its memory consumption at one million tasks.
Hardware used for testing: Xeon processor and its implications for results.
Rust and Go programs compile to static binaries and have a smaller memory footprint.
Managed platforms and interpreters consume more memory compared to compiled binaries.
Surprise that .NET has the worst memory footprint, suggesting potential for tuning.
Observation that native threads are more memory-intensive than async tasks or virtual threads.
C#'s competitive performance in memory consumption even at one million tasks.
Go's loss of advantage over Rust in memory consumption at higher task counts.
The importance of considering task launch time and communication speeds in benchmarks.
Call for more comprehensive benchmarks involving real-world tasks like websocket connections.
Transcripts
how much memory do you need to run one
million concurrent tasks
by the way alerts are off
uh uh in this blog post I delve into the
comparison of memory consumption between
asynchronous and multi-thread programs
across popular languages like rust go
Java C sharp
Python node.js and of course everybody's
favorite
these nuts uh some time ago I had to
compare performance of a few computer
programs designed to handle a large
number of network connections I saw a
huge difference in memory consumption of
those programs even exceeding 20x
okay that I mean it makes sense it will
you know how are they handling things
because I can go it's pretty it's pretty
slick goes pretty dang pretty dang slick
you know what I mean uh some programs
consume a little over 100 megabytes but
others reach almost uh three gigs I
think what he's trying to say here I
think what Peter what old Peter up there
Kołaczkowski is trying to say is that
node.js
node.js loves the memory okay at 10K
connections unfortunately those programs
were quite complex and differed also in
features so it'd be hard to compare them
directly and draw some meaningful
conclusion as that wouldn't be an
apples-to-apples comparison this led me to an
idea of creating a synthetic Benchmark
instead
synthetic by the way I actually do want
to like follow this this little line of
thought right here so I want everyone to
pay really close attention if you go to
github.com ThePrimeagen right uh
and you look at this and you go to my
latest one is they're like yeah not left
pad there we go ts-rust-zig-deez I guess
I'm I'm going through a D's face okay uh
if you go here you can join and add your
own language we're building The
Interpreter okay
and then I want to test it I actually
want to build a server that does like
full interpretation and we'll figure out
what the what the rules are and all that
but actually can compile remotely right
and then that is what language can do
that the fastest I feel like that is a
way cooler
like test of languages than these dumb
tests that you see right there's that
one YouTube video that did like uh that
just did like a mathematical formula and
be like which one does this one the best
and it's just like it but that's not
even like real at all okay you're not
even creating memory we need like memory
you need cleanup you need things that
happen you need connections you need sys
calls you want to see the entire you
know the entire thing you don't want to
just see like just some tiny little
nothing you know what I mean it never
works that way it's always a lie it's
just a lie it's just always a lie
anyways let's do this
we have Microsoft guy yeah Benchmark I
created the following program in various
programming languages let's launch N
concurrent tasks where each task waits for
10 seconds and then the program uh
exits after all the tasks finished what
the program continues to exist after we
are finished
I'm pretty sure this is like a fork bomb
then uh the number of tasks is controlled
by the command line argument let's go
let's go with the little help of ChatGPT
I could write such programs in
a few what
what
what do you mean
this is not hard in Rust I mean are you
using a real thread are you using Hardware
threads or green threads or go that
simple Java this is probably not that
hard to probably do it in a few moments
of just a what do you mean ChatGPT
see this seems like such a crazy
crazy things people just love them
ChatGPTs uh anyways rust I created
three programs in Rust the first one
uses traditional threads
that's what I wanted to see okay I
wanted to see one Hardware in one
language traditional threads okay I'm
sick of all this polyphretisms going on
uh here's the core of it bam make it
happen uh the other two versions had
async with Tokio and the other with
async-std
here's the core of the Tokio beautiful
beautiful
I like this the async-std uh variant is
very similar so I won't put it there go
uh goroutines are the building block
for concurrency so we don't need to do
them separately uh but we used a wait
group okay beautiful great use of a
WaitGroup by the way this is a great
use of a WaitGroup loved it
loved every part of this uh again I
think go for all of its downfalls also
has a lot of updrafts what's what's the
term for like
what's the opposite of a downfall
yes I did have a bad experience with C
sharp why you bring that up okay
an upfall an upwind an updraft
an uplift
a boob job what do we call it
I don't even know
Java Java traditionally uses threads but
JDK 21 offers a preview of virtual
threads which are a similar concept to
goroutines look at Java go
Java has almost caught up to 2015. I
think we all need to be impressed right
now can we just take a moment to realize
that Java's going places it really is
like this is incredible
I mean the next thing you know
they I mean the next they could actually
beat C++ by the time C++
23 comes out Java might actually be
exceeding what happened in 2023 as far
as technology Technologies go it could
be it could be incredible and I think
you guys are just not even considering
how amazing this is all right uh let's
see list of threads new arraylist good
choice of arraylist uh you did not
create you know a little alarming that
you didn't you know pre-populate the
size here you knew how big it was going
to be a bunch of things new threads all
right a little sleep little try-catch
uh does it so he's not counting one
thing I'd be a little careful of is even
though you don't need to there is no
counting right here going on right how
many times did this fail I think that's
really important to think about
you know just in case when you're
creating these tests you you need to do
that okay
um thread start thread add let's go
thread join perfect beautiful I'm
curious about this as well
when you go individual thread joins this
could be bad right am I am I right on
this one this could actually have a a
greater Slowdown
because you're you're doing it one at a
time and so you could have already had
like a hundred finished that you like
aren't
you know what I mean that's a race yeah
yeah I feel like this is
it's blocking I know that but it's it's
not just blocking us that you could pay
a huge penalty up front and then you'll
have this whole thing where you have to
go through each one and I assume also
join is like not a free call
I assume it's like a syscall is that
fair
I don't know I don't know what it takes
to do a join but my my assumption is
that it's a non-trivial cost
uh
you know what I mean yeah anyways just a
thought just a thought but I guess if
they're green threads maybe that's not
the case maybe they're actually really
fast and delicious you know what I mean
I don't know anyways these are just my
thoughts I'm just kind of raw dogging my
thoughts out here okay uh here's the the
variant with virtual threads uh notice
how similar they are oh nice that's
beautiful well done oh I guess these
ones are Hardware ones and the other one
these ones are Hardware ones these ones
are oh my goodness I'm scrolling way too
fast oh my goodness I'm gonna vomit
these ones are virtual ones beautiful
still doing that I don't like that C
sharp is similar to rust has first class
support for async await awesome so there
we go we're gonna do a little task run
right here so this must be uh okay so
this has to be
those have to be synthetic thread so
we're not actually getting so C sharp is
not doing Hardware thread so there does
need to be like a a disambiguation here
which one's doing like actual actual
threads versus which ones are doing some
sort of managed environment because I
would assume that the managed
environment this is where a managed
environment does amazing right
right I think so
you can do both
well I would assume whenever you see
something like uh async await usually
there's some sort of whole managed
environment that goes into it
usually right because if you're not
doing it yourself there's something else
going on you know what I mean I feel
like uh the managed environment would be
slower no because Hardware Hardware
threads are expensive
right uh node.js util's promisify
okay so again you don't want to do this
I feel like there is a little something
there you got to be careful about you
know maybe a little extra garbage
collection going on I don't know but
they're probably fine probably not a big
deal probably enough probably not oh my
goodness
probably not enough to be too upset
about it uh python
I don't know what delay is delay must
just be a function that returns a
oh set timeout oh yeah okay yeah yeah
I'm dumb I'm dumb this is also a syntax
error
guys
syntax error right there okay watch your
parens okay don't you should make sure
you always just put code up on an
article that's just like it works always
make sure I mean I've done it too we've
all done it just make sure it works
right okay so this is looking good uh
python added async await in
three five nice okay look at this
asyncio asyncio there we go beautiful
uh that all looks great Elixir is famous
for async capabilities as well let's go
I love to see it uh Task.await_many
until infinity and beyond
you know I really hate that phrase by
Buzz Lightyear to infinity and beyond
wouldn't you not be at Infinity if you
could go beyond the point of infinity
isn't that just like not Infinity like
can we be real here
that's the joke
uh it's true also Infinity yeah I know
thank you Deep Thoughts with me
just letting you know deep thoughts all
right test environment Hardware Xeon
this thing okay
um this is starting to look like a
personal computer which again gotta be a
little careful uh Rust 1.69 nice go
eighteen one
nice jdk
uh dot net node
python Elixir let's go all programs were
launched uh using release mode if
available other options were left uh
default
all right minimum of footprint let's
start with something small because some
of the runtimes requires uh some memory
for themselves yep that's right yep here
you go this makes sense though because a
node would be really really large well
c-sharp requires a 131 megabytes for
just nothing
what
like I get that there's like a whole
thing going on but that's just a lot
I mean this all seems about in line with
node right C sharp I I would what does C
sharp do that requires three times the
amount of telemetry information that the
other ones do
uh okay it's pindos anyways so go make
sense rust all these things make sense
right because they all should be really
really small because they're actually
you know these are actually like
compiled thingies but I would like to
say that this is really impressive for
go I want you to take a a moment about
this
this includes a garbage collector
okay like things are running here
that's really good that's really really
good good job go good job go that's
really really good the surprise hero is
python it shouldn't be surprising uh oh
I mean I guess it's surprising in the
sense that it's half the it's half the
memory of I I guess I would have I guess
in my head I would have probably put it
the same as Java or node.js uh Elixir
also a little surprising and so large
interesting
um
I wonder how he's getting the memory is
he using VmRSS what does he use in here
uh we can see there's our certainly two
groups of programs go and rust programs
compile statically to Native binaries uh
need very little memory Yep this makes
sense the other programs running on
managed platforms and or let's see or
through interpreters consume more memory
although python fares really well in
this case there is about an order of
magnitude difference in memory
consumption between these two groups yep
uh it is a surprise to me that .NET
somehow has the worst footprint but I
guess that uh this can be tuned with
some settings maybe let me know in the
comments okay this is good it's good
that he's stating where he's a little
bit surprised about I like to see that
all right so 10K tasks before I oh no oh
no you can see you can see what's
happening here all right so let's see
rust threads this this makes this makes
perfect sense right it is expensive
so I guess one thing that they we didn't
specify here or that he didn't speak
about with these two right here which
was how much
worker threads were created
right
so I don't know what Tokio does
but there is definitely something to
that right uh thread stack size right
yeah there's a there's a lot here that
might be hidden that we may not be uh
considering correctly there's just it
just looks really small which it may not
be this again this is one of the
problems about making a really really
small synthetic test is that you don't
know what's going to get you this makes
more sense this is something I I guess I
I can believe go being this uh just
because go does have a whole managed
system around it and so this is good
this is still great this is less than
what it took a thousand or ten thousand
go threads goroutines go goroutines
uh is less memory
than node.js by itself right so it's
pretty good
pretty dang good I like to see this uh
let's see Java virtual threads yeah this
makes sense I'm a little bit surprised
that Java is that big for regular
threads right uh C sharp I assume
doesn't change much yeah it doesn't
change pretty much at all because again
it's probably doing the same Tokio thing
that's going on here added nothing
all right
so this makes sense
uh virtual threads also this is you know
slightly in line this is more in line
with go I guess uh node I'm very
curious how that one I guess it's
because set timeout is not really
so one thing that he did not do
correctly is it's not
no doesn't really have this concept of
threads right I I guess maybe the most
correct way you could do this would be
do would to do something like
uh worker workers right
and the reason why this is kind of odd
is that node just creates a uh a a event
Loop item
and that event Loop item is probably
some very small piece of information my
guess is that especially since the timer
is probably a time for when it's done
and a pointer to a function to call
right it's like going to be a pretty
dang small amount of memory
and so this makes sense that node really
doesn't do much because you actually
didn't create multiple threads
you created 10 000 timers which is much
much different uh because all of these
through here they can they can also uh
execute with parallelism
you know what I mean
there's parallelism that can go on in
these that cannot happen in node.js
therefore they're not really equal so
it'd have to be worker threads you'd
have to use something like worker
threads I don't know if python has the
same parallelism problem I don't really
know how python works and I also don't
know how Elixir works but good job
elixir
right
I said parallelism parallelism
intentionally because these can all
execute with parallel parallelism
parallel parallelism depending on how
many worker threads where's Gradle great
question
all right few surprises uh everybody
probably expected the threads would be a
big loser of this Benchmark and this is
true for Java threads which indeed
consumed about 250 megabytes of ram the
native Linux threads used uh from rust
seem to be lightweight enough that the
10K threads of memory consumption is
still lower than the idle memory
consumption of many other runtimes async
tasks or virtual green threads might be
lighter than native threads
I would say might is I mean observably
probably true is what we're seeing here
right I'd say it's observably true uh
but we don't see the advantage at uh
only 10K tasks we need more tasks
okay another surprise here is go uh
goroutines are supposed to be very
lightweight but they actually
consumed more than 50% of the ram
required by rust threads honestly I was
expecting much bigger difference in
favor of go hence I conclude uh conclude
that at 10K concurrent tasks threads are quite a
competitive alternative the Linux kernel uh
definitely does something right here
huh
again I don't again I I'm not sure how
much I buy this I don't know I don't
know what go does
you know what I mean I don't know what
go does that makes this good or bad go
maybe reserving more memory it may have
different parameters than something like
Tokio does and so
again I don't know how fair this is to
say that rust greatly outperformed it
because it's not doing anything too much
Telemetry too much Telemetry all right
go also has lost its advantage over rust
async in the previous Benchmark
it now consumes over six times more
memory than the best rust program and
it's also beaten by python yeah the final
surprise is that at 10K tasks the memory
consumption of .NET didn't significantly go
up uh from the idle memory used yeah
again
Telemetry now uh Telemetry it's actually
just Telemetry uh probably it just uses
pre-allocated memory or its idle memory
is just so high uh that it 10ks didn't
yeah it didn't matter okay 100 000 tasks
okay let's do this let's see uh so the
thread benchmarks could be excluded
probably this could have somehow tweaked
by changing uh system settings but after
about an hour I gave up so here's a
hundred thousand tasks
okay so yeah you can't spawn non there
you go so this is a good notice that all
non-green uh threads have all gone away
so I I guess my guess is that his
program kept crashing he probably I
would assume your U limit you should be
able to do you I don't know if there's
like I don't know what the potential
requirements are that you can't spawn a
hundred thousand threads but my guess is
that you just Fork bomb yourself and it
explodes and dies
um all right so Tokio's gone up this has
gone up this has gone up like to me this
is pretty these are all pretty fine
again I really doubt C Sharp's doing
something right something about C sharp
tells me that this is not executing the
way you think it is
can we all agree that this is not doing
what you think it is
there's no way that you just did ten
thousand there's no way that you did one
to ten thousand to a hundred thousand
with absolutely no memory change
something is being clever here
right something's being very clever
or C sharp is probably the best
no Jazz so C Sharp's gonna win I hope
everybody sees this coming right
I hope everybody sees this coming that C
sharp is gonna start beating out some
Rust if they keep going with thread
limits all right at this point go
program has been beaten up uh not only
by rust but also by Java C sharp and
node.js uh though let's see and
and .NET likely cheats because its
memory uh use still isn't going up I had
to double check if it really launches
the right number of tasks but indeed it
does and still uh exits after about 10
seconds uh it doesn't block the main
Loop okay I would still
I would argue you need to do something
I bet you this will greatly change in C
sharp if you had like a
a concurrent hash map that every one of
those tasks try to add one item to and
read one item from I think it would just
completely
change the memory wildly right
uh let's say okay one million tasks
let's go extreme extreme extreme extreme
all right at one million tasks Elixir
gave up okay nice system limit has been
reached okay edit some commenters
pointed out that I could have increased
the limit yep ulimit after adding --erl
+P a bajillion uh to Elixir it ran
fine okay nice
All right, let's see what we got here. Nice, look at that, C#! Oh, memory did go up this time. So that's interesting, C#'s memory did go up. I wonder why this 10x caused memory to go up but the other two 10xes didn't. Sus. And C#'s the best, everybody. "Finally we see an increase in memory consumption of the C# program, but it's still very competitive; it even managed to slightly beat one of the Rust runtimes. The distance between Go and the others increased: now Go loses over 12x to the winner. It also loses 2x to Java, which contradicts the general perception of the JVM being a memory hog and Go being lightweight."
Hey! "Rust Tokio remained unbeatable. This isn't surprising after seeing how it did 100K tasks. Final word: as we observed, a high number of concurrent tasks..." Actually, I want to have a final word first. Okay, I'm having the final word first. First off, I don't know if I like this benchmark. I love the idea; I don't know if I love the benchmark. I feel like you need to do more things, right? I really do feel like you need to do more things for this to be real, because something is wrong here. First off, one thing about C# and memory that they're completely just disregarding, along with Java, is garbage collection. Along with Node, along with Go: all of these have garbage collection. So does Python, and I assume Elixir does too, but I mean, Elixir already had four gigs, am I right? Am I right?
Um, but that's where Rust is gonna really shine: if you're measuring memory while doing something that actually creates and uses memory, these other ones are going to really struggle. But I wonder how much Go would struggle, because Go gets the best of two worlds: it gets a managed-memory environment, but it also gets, like, the smallness of Rust when it comes to memory usage. When you create a struct, you're getting a smaller structure; you're not getting a Node.js object, which is just much different, right? An object in Node is not going to be nearly as lightweight as an object in Go; that's just how it works. So it's kind of interesting, you know what I mean? It's just interesting that these are the results from doing nothing, but I just don't believe it, because my guess is that this Elixir number, four gigabytes, is likely what would happen to a lot of these if you did that kind of work.
All right, final word. What is wrong, why do I keep... "As we have observed, a high number of concurrent tasks can consume a significant amount of memory even if they do not perform complex operations." Yeah, I mean, it makes sense. Just imagine that every single task requires a hundred bytes of memory: that'd be 100 megabytes at a million tasks, right? And it's likely going to require more than that; yeah, stack size, you've got all sorts of stuff, so it probably requires more than 100 bytes. Therefore it makes sense. This makes sense, right?
This is like, what's a million times 4K, right? If you had a 4K stack size, boom, you've got that, right?
Uh, anyways, let's see: "Conversely, other runtimes with high initial overhead can handle high workloads effortlessly."
C# for the win! By the way, by the way, the big takeaway here is you should just use C#. We're all C# andys now inside this stream, I hope. Everybody ready for the C# arc? C# arc! Everyone excited for it? I'm excited for it. Um, I think that C# is obviously the best language, okay? I've been telling you guys this for so long now, and honestly this chart just proves how good C# is, okay? You guys kept talking about how great Go is; look at how terrible Go is: 2.6 gigabytes! Okay, you just don't understand things at all. All right, C#, clearly the best language.
Clearly the best language. "The comparison focused solely on memory consumption, while other factors such as task launch time and communication speed are equally important. Notably, at one million tasks I observed that the overhead of launching tasks became evident, and most programs required more than 12 seconds to complete. Stay tuned for upcoming benchmarks, where we'll explore additional aspects in depth." I'd love to
see this, except instead of Node.js just doing timers, let's see worker threads. I'd love to see some sort of computation model added to everything. I personally just think the best way to do this is some sort of longer-lived task, right? How many websocket connections can you make, and how effectively? How many open TCP connections can you make to a server and then just start sending something back and forth? How much can you do in a language? Not tests like these, because, you know, the reality is you don't use a language to launch a timer; you use a language to do something. And I feel like a websocket is a really great, simple way to test something, because it's just a TCP connection: you have to do a moderate, pretty small amount of work to parse out a frame.
It shows what garbage collection does to the system; it shows what interacting with system calls does to a system. And then if you have anything extra on top, such as a chat room, you know, where you have to for-each over clients: what do for loops do to your program? It's very, very interesting. Even with Node.js, the difference... like, if you build a chat client, and all it is is a chat app where websocket connections can join, make a simple request to join rooms, and then send messages to the room, a forEach statement will effectively cut your RPS in half when you're iterating over the available sockets. Or, it's not RPS, it's MPS, messages per second. Pretty wild that that can happen, right? And so it's pretty wild that even such a simple, small little thing can have such a huge impact on performance.
Anyways, just something to think about. Anywho, all right. Hey, great article though, great article, everybody. Hey, everybody, great article. Hey, great article, everybody, everybody. Give a little clap for Peter; good job, Peter, appreciate the work you put in. What is the name? You know what the hell the name is. The name... is ThePrimeagen.