They made Python faster with this compiler option
Summary
TL;DR: The video discusses the impact of compiler optimizations on Python's performance, focusing on Fedora Linux's decision to use the -O3 optimization level when compiling Python. The change yields a performance boost, up to 4% in some cases. The video walks through the compiler optimization levels -O1, -O2, and -O3, explaining how they affect code execution, memory usage, and CPU cache efficiency. It also covers function inlining and how it improves performance at the cost of a larger binary. The goal is to give viewers a deeper understanding of how compiler optimizations can significantly influence software performance.
Takeaways
- 🐍 Fedora Linux's decision to compile Python with the -O3 optimization option has made Python run significantly faster on the platform.
- ⚙️ The -O3 optimization level is known to enhance performance, with speed improvements ranging from 1.6% to 4% in various cases.
- 🔍 The discussion opens up the topic of compiler optimizations, particularly focusing on function inlining, which is a key aspect of the -O3 optimization.
- 📚 The script explains the basics of compiling, which is the process of converting high-level language code into machine-level instructions.
- 💾 The script touches on the importance of registers and memory in the compiling process, highlighting the trade-offs between using limited register resources and memory.
- 🔧 Compiler optimization levels -O1, -O2, and -O3 are explained, with each level offering different levels of optimization and performance gains.
- 📈 The script provides a comparison of binary sizes and performance between Python compiled with -O2 and -O3, showing a larger binary size with -O3 but with improved performance.
- 🚀 Function inlining, a part of -O2 and -O3 optimizations, can significantly speed up code execution by reducing the overhead of function calls.
- 💥 The -O3 optimization includes aggressive function inlining and Single Instruction, Multiple Data (SIMD) optimizations, which can further boost performance.
- 📊 Benchmark results indicate that the -O3 optimization generally provides a performance boost, with improvements shown in various tests and workloads.
Q & A
What is the significance of the -O3 optimization option in compiling Python on Fedora Linux?
-The -O3 optimization option in GCC significantly improves the performance of Python on Fedora Linux by enabling more aggressive compiler optimizations. This can result in speed improvements ranging from 1.6% to 4% in various benchmarks and workloads.
Why did Fedora switch from using -O2 to -O3 optimization for Python?
-Fedora switched to -O3 optimization for Python to align with Upstream Python's release builds, which are known to be faster due to this more aggressive optimization level.
What are the trade-offs when using the -O3 optimization level?
-While -O3 can significantly improve performance, it also increases the size of the binary due to aggressive function inlining, which can lead to higher memory usage and potential performance deterioration on systems with limited memory.
How does function inlining as part of -O2 and -O3 optimization work?
-Function inlining replaces function calls with the actual function code, reducing the overhead of function calls and improving cache utilization. -O2 performs inlining for small functions, while -O3 does more aggressive inlining, potentially inlining almost all functions.
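As a concrete sketch (the function names are made up, not from the video), this is the kind of small function GCC will typically inline at -O2 and above:

```c
#include <assert.h>

/* A tiny function: at -O2 and above GCC typically replaces each call
 * with the addition itself (inlining), removing call/return overhead.
 * "static" tells the compiler no other file needs a callable copy. */
static int add(int a, int b) {
    return a + b;
}

int sum_pairs(void) {
    /* After inlining these become plain additions, and constant
     * folding can then reduce the whole function to "return 21". */
    return add(1, 2) + add(3, 4) + add(5, 6);
}
```

Compiling with `gcc -O2 -S` and inspecting the assembly would show no `call` to `add` remaining.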
What is the difference between the compiler optimization levels -O1, -O2, and -O3?
-The optimization levels -O1, -O2, and -O3 in GCC represent different levels of compiler optimizations. -O1 enables basic optimizations, -O2 includes further optimizations like function inlining for small functions, and -O3 includes even more aggressive optimizations, often resulting in the largest performance gains but also larger binary sizes.
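A minimal way to see the levels side by side (the commands in the comment are the standard GCC invocations; the function is an illustrative example, not from the video). Observable behavior must be identical at every level; only the generated machine code differs.

```c
#include <assert.h>

/* The same source, built at different optimization levels:
 *
 *   gcc -O1 prog.c -o prog   # basic optimizations
 *   gcc -O2 prog.c -o prog   # + inlining of small functions, CSE, ...
 *   gcc -O3 prog.c -o prog   # + aggressive inlining, auto-vectorization
 */
long triangle(long n) {
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;  /* at higher levels GCC can even replace this
                        loop with the closed form n*(n+1)/2 */
    return total;
}
```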
How does the compilation process translate high-level code into machine-level instructions?
-The compilation process translates high-level code into machine-level instructions by going through several stages, including parsing the code, optimizing it, and then generating the assembly code that corresponds to the machine-level instructions the CPU can execute.
What is the role of registers in the compilation process?
-Registers play a crucial role in the compilation process as they are fast storage locations within the CPU used for holding temporary values during computation. Compilers aim to use registers efficiently to minimize memory access, which can slow down the execution.
Why might aggressive function inlining in -O3 optimization not always result in performance improvements?
-Aggressive function inlining in -O3 optimization might not always result in performance improvements because it can significantly increase the binary size, leading to more memory usage and potential cache misses, which can offset the benefits of reduced function call overhead.
What is the impact of the -O3 optimization on the binary size of Python?
-The -O3 optimization increases the binary size of Python due to the inclusion of more aggressive function inlining, which can result in a larger executable size compared to the -O2 optimization level.
How do compiler optimizations like -O3 affect the development and deployment of software?
-Compiler optimizations like -O3 can affect software development and deployment by potentially increasing the performance of the software, but also by increasing the size of the binaries, which can impact the distribution and memory usage of the software.
Outlines
🐍 Python Performance Boost on Fedora Linux
The script discusses the performance improvement of Python on Fedora Linux due to the adoption of the -O3 compiler optimization option, with speedups ranging from 1.6% to 4%. The speaker uses this news as a springboard to delve into compiler optimizations, specifically function inlining. Fedora 41 will ship Python compiled with -O3, a change approved by the Fedora Engineering and Steering Committee (FESCo). The speaker contrasts this with previous versions that used -O2, which is considered more stable. He also mentions that he personally uses Red Hat, Ubuntu, and macOS rather than Fedora, and highlights how Linux distributions bundle tools on top of the kernel for ease of use.
🛠️ Compiler Optimizations and Function Inlining
This section provides an in-depth look into compiler optimizations, focusing on the GCC options -O1, -O2, and -O3. The script explains that these options dictate how the compiler translates high-level code into machine-level instructions. The speaker clarifies the process of compiling, which involves translating code into machine language that the CPU can execute. The script also delves into the concept of registers and memory, highlighting registers as a scarce resource that the compiler must manage efficiently. The discussion then moves to the first level of optimization, -O1, which involves local register substitution to reduce unnecessary memory writes. The script also introduces the idea of function inlining, which is a more aggressive optimization technique used in higher optimization levels.
🔧 Deep Dive into Compiler Optimization Techniques
The script continues the exploration of compiler optimizations, particularly focusing on the -O2 and -O3 levels. It explains that -O2 optimization includes function inlining but is limited to small functions to maintain stability. The speaker then discusses the -O3 level, which is more aggressive, performing function inlining on almost all functions, leading to a significant increase in binary size. This results in better cache hits and performance but at the cost of increased memory usage. The script also touches on the potential downsides of aggressive inlining, such as decreased performance on devices with limited memory due to increased swapping. Additionally, the script mentions the use of Single Instruction Multiple Data (SIMD) optimizations in -O3, which can further enhance performance if the CPU supports it.
📊 Benchmarks and Impact of Optimization Levels
This part of the script presents benchmark results comparing the performance of Python compiled with -O2 and -O3 optimization levels. The speaker shows that the -O3 level generally provides a modest performance improvement, with speed increases ranging from 1.04 to 1.09 times across various benchmarks. The script also compares the binary sizes of Python compiled with the two optimization levels, with -O3 resulting in a larger binary due to aggressive function inlining. The speaker concludes by emphasizing that these optimizations are beneficial for performance but may not be necessary for all users, especially those already using upstream Python versions that are compiled with the -O3 option.
📖 Conclusion and Learning Resources
In the final paragraph, the speaker wraps up the discussion on compiler optimizations for the CPython interpreter and reflects on the learning experience. They express enthusiasm for understanding the intricacies of compiler optimization levels -O1, -O2, and -O3. The script also includes a call to action for the audience to check out the speaker's operating system course for further learning. The speaker provides a discount link for the course and encourages the audience to explore the content, highlighting the comprehensive nature of the course and the passion behind its creation.
Mindmap
Keywords
💡Compiler Optimizations
💡Function Inlining
💡Fedora Linux
💡GCC (GNU Compiler Collection)
💡Benchmark
💡CPython
💡Instruction Set
💡Memory Access
💡Cache Miss
💡Single Instruction, Multiple Data (SIMD)
💡Upstream Python
Highlights
Fedora Linux's Python performance has improved due to a small but powerful compiler option.
The performance increase varies, with some cases showing a 4% improvement and others as low as 1.6%.
The discussion opens the gate to delve into compiler optimizations, specifically function inlining.
Fedora 41 will come with a compiled version of Python using the -O3 optimization option.
The -O3 optimization is already used by Upstream Python for its release builds, known to make Python significantly faster.
Previous versions of Fedora compiled Python with the -O2 optimization, which is considered more stable.
Function inlining is a compiler optimization technique that can improve performance by reducing function call overhead.
The -O1 optimization level performs local register substitutions to optimize code.
The -O2 optimization level includes function inlining for small functions, reducing memory access and improving cache hits.
The -O3 optimization level is more aggressive with function inlining, potentially increasing binary size but improving performance.
Aggressive function inlining can lead to larger binary sizes, which might impact memory usage and cache efficiency.
Single Instruction Multiple Data (SIMD) is utilized by -O3 to perform multiple data operations in one instruction, if the CPU supports it.
The benchmarks show that -O3 optimization can lead to performance improvements of up to 1.09 times faster in certain cases.
Users of Upstream Python are already benefiting from -O3 optimizations as it's the default compilation option.
The video discusses the trade-offs between different optimization levels and their impact on performance and binary size.
The presenter expresses enthusiasm for learning about compiler optimizations and their practical applications.
The video concludes with a call to action for viewers to check out the presenter's operating system course for more in-depth knowledge.
Transcripts
Running Python on Fedora Linux is now significantly faster thanks to this small but powerful compiler option. I put "significantly" in quotes because that really depends on what you consider significant: in this particular case it's 4%, and in some cases 1.6%. But I want to use this particular news as an opportunity to open the gate to a very important topic, and that topic is compiler optimizations, specifically something known as function inlining. I want to dive deep into that. How about we dive into this news? News time, on the Backend Engineering Show.

This comes from Phoronix, my favorite website for operating systems and hardware optimization news. Let's read the blurb and discuss: Fedora cleared to build its Python package with -O3 optimizations. Some of you might know what this means, some of you might not; I'll explain all of it. The Fedora Engineering and Steering Committee (FESCo) has signed off on the plans for Fedora 41 as the current release. Honestly, I didn't use Fedora much; I used Red Hat, and Ubuntu is my main operating system. I know it's not cool to say that, but it is what it is. The other operating system I use, as you know, is a Mac, and at work it's primarily Windows, uncool as that is.

But Fedora 41, you guys, is going to come with a version of Python compiled with the -O3 option. Fedora, as a Linux distribution, comes with shipped tools, because that's what a distribution is: a nice bouquet, if you will, of tools on top of the kernel. The kernel is the core of it all (I talk about this in my OS course), and the distros just say: hey, let's make a nice UI, let's make partitioning easy, let's make formatting easy. You want to get started? You don't have to run fdisk and figure out what sector to start with, what the beginning and end logical sectors are; we'll take care of all that with a beautiful UI. And we'll include a version of curl for you, a version of GCC, and Python.

So we're looking at a version of Fedora that ships Python, but until now that Python was compiled with a specific option. We're talking about CPython here, where the "C" means it's written in C. The interpreter, the logic that takes your Python scripts and interprets them, is itself written in C, and compiling that C code used a certain option, -O2, a certain optimization level. We'll talk about what -O1, -O2, and -O3 mean in a minute. The plan for Fedora 41 is to use -O3 instead of -O2 for better optimizations.
What are these optimizations? We'll see. The -O3 optimization level is what upstream Python uses for its release builds, and it's proven that it makes Python "significantly" faster (the "significantly" here, I'm reading, is in quotes) across a range of benchmarks and workloads. So -O3 is already used by upstream Python: if you're on Linux with no existing Python and you install Python from the official source, you get a build that is already compiled with -O3. The vanilla, out-of-the-box upstream Python comes with that fast build. The version that ships with Fedora did not; it was compiled with -O2, which is considered more stable. The article talks about how much faster it is, but what I want to talk about here is the why. We have -O1, -O2, and -O3; these are GCC options, compiler options that say: give me C code and I'll compile it with a certain level of optimization.

First, what is compiling? What does that even mean? Compiling is taking a language written in a high-level, human-readable form and pushing it down to specific machine-level instructions, because the CPU executes and understands instructions. You compile C code into machine language, and one line of code can produce a hundred machine instructions. So we have this intermediary, the compiler (there's also the linker, which actually produces the executables, but it's out of the equation here). You really need to specify the CPU to know what you're compiling against, because compiling for ARM is different from Intel, which is different from AMD, which is different from other CPUs I don't even know about. You also need to know whether it's 32-bit or 64-bit, because that determines the word size you'll work with: 4 bytes versus 8 bytes.

This process of translation gives the compiler freedom. I wrote the instructions: I want a = 1, b = 2, and c = a + b. That's what I want; those are the instructions I wrote. The compiler can translate them however it wants, as long as it gives me the final output; it doesn't have to translate them one line at a time. That's the job of the compiler, and that's where the art of building compilers comes in. The compiler produces sets of instructions, and CPUs work with lightning-fast local storage, scratch pads called registers. The CPU cannot work with anything outside of a register: even if a value lives in memory, it needs to be loaded into a register so it can be passed from the control unit to the ALU to actually do the math, an add or a multiplication or whatever the operation is. So registers are the truly scarce resource, and you have an almost unlimited resource, which is memory. To the compiler, memory is the unlimited place where you can put anything you want, but registers are the true scarce resource. (Flip the equation for something like a database application, and memory becomes the scarce resource while the disk becomes the "unlimited" one. And when I say unlimited, take it with a grain of salt; I mean it's more plentiful.)
All right, so we're compiling, and I want to go through three options of what the compiler does here. First: no optimization. The easiest way to compile this code with no optimization is to translate it one to one. This is assembly, but you can think of each assembly line as roughly one instruction. Move 1 into a register, say R0, then store R0 at the address of a, because the user said "int a = 1" and a one-to-one translation does exactly that. That store is a hit to memory: you're actually going to memory, and the average memory access, depending on whether you hit the same row in the DRAM or not, is around 100 nanoseconds. That's the cost of going to memory, which can be slow; I talk about that in my OS course.

For those listening: I have a very simple program with three lines of code, int a = 1, int b = 2, int c = a + b, and I'm translating it one by one as a compiler would. Next you move 2 into another register, R1, and store R1 at the address of b, so 2 goes to b's memory location. That's another memory access; it's a bit cheaper here because those two values will likely be in the same DRAM row (this whole thing is a single virtual-memory page), so you're probably hitting an already-open row. Then for c: add R0 and R1, put the result in R3, and store R3 at the address of c. So without optimization you translated three lines of code into roughly six instructions; there's probably a lot of boilerplate I skipped, but that's basically the gist of it.
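The walkthrough above, as actual C (register names like R0 in the narration are illustrative; this is just the three-line program made compilable):

```c
#include <assert.h>

/* The running example: three variables, one addition. Compiled with
 * -O0 the compiler emits a store to memory for a, b, and c exactly
 * as written; an optimizing build keeps a and b in registers, or
 * folds the whole body down to "return 3". */
int compute(void) {
    int a = 1;
    int b = 2;
    int c = a + b;
    return c;
}
```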
So what does the first optimization, done with -O1, do? The premise is: wait a minute, what are you doing? You stored 1 into a, but you never actually used a from memory. Technically you used the value to add, but the only output anyone cares about is c, the sum of 1 and 2. So the optimizer strikes the store out. We still need 1 in register R0, but it doesn't need to live in memory at all. Likewise, we store 2 in register R1, but we don't need to store R1 to b, because b is never used anywhere except in that one addition. So strike those two stores out, do the add on the registers, and only store c. With this optimization alone, two instructions are removed from a very simple program: we don't write to memory if we don't need to. This is called local register substitution, or register allocation: use local registers as much as possible. The catch is that it's only applicable if you have enough registers, so the compiler thinks it through: I have this many registers to work with and these functions to compile. If there are enough registers, great; if not, values spill back to memory, like working with temporary variables that you have to store, set aside, and pick up again.

There are other kinds of optimizations at this level too, like common subexpression elimination: if you evaluated a + b, and later you wrote a + b again, the compiler detects that it already computed a + b and won't re-evaluate it. It reuses the result and avoids an extra add. That's the kind of optimization the compiler does.
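Common subexpression elimination in miniature (a hypothetical function, just to make the idea concrete): the source spells out a + b twice, but an optimizing compiler computes it once.

```c
#include <assert.h>

/* a + b appears twice; at -O1 and above GCC evaluates it once and
 * reuses the result. The semantics are unchanged -- this is purely
 * a saved instruction. */
int twice_sum(int a, int b) {
    int x = a + b;   /* first evaluation */
    int y = a + b;   /* compiler reuses the previous result */
    return x + y;
}
```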
Then there's -O2. One example of what -O2 does (I'm not going to go through everything, that would make the video very long) is function inlining, but only for small functions. Function inlining works like this: say I put that addition in a function called add, and in main I call add(1, 2), add(3, 4), add(5, 6), and so on; you see my point. What the compiler will do is replace each of those add calls with the content of the function itself, if the function is small enough (with the parameters substituted, of course).

Why? Because otherwise the function gets compiled into its own set of instructions loaded somewhere in memory, and the CPU keeps jumping back and forth: go to the function's code, execute it, come back, execute the rest, jump again. It's not so bad in this particular example, but every function call has a cost, a slight overhead. You have to set up a stack frame; you have to save whatever you were working on, probably the old base pointer, the old stack pointer, the return address if needed. That saving, to me, is the cost: you're going back to memory, you're writing to memory, and that's your ~100 nanoseconds.

The other cost, and to me the one that really hurts, is that you're jumping between different portions of your process memory, and that introduces cache misses. The CPU loves it when a read pulls in a beautiful 64-byte cache line, with all of it living in a 4K virtual-memory page, so you're reading sequential stuff. But if you execute this line and then jump all the way to line 9,901 because your function happens to live there, you lose that beautiful locality in your L1 instruction cache, and you keep bouncing back and forth between the two regions. So inlining literally copies the function's code into the call site. Yes, it bloats your code significantly; that's the cost: you're going to need more memory. But the beauty is that everything is right there. As I'm executing main, I don't need to jump somewhere else, I don't need to set up a stack frame, I don't need to save anything; there's no function call at all, no parameters to pass, none of that overhead. Each individual overhead is tiny, but it adds up, and in exchange for the bloat you get those beautiful cache hits. That's -O2.
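GCC lets you pin down both sides of this trade-off explicitly, which is handy for experimenting. `noinline` and `always_inline` are real GCC function attributes; the functions themselves are made-up examples.

```c
#include <assert.h>

/* Force a real call: stack-frame setup, jump, return -- the
 * overhead described above. */
__attribute__((noinline))
static int add_called(int a, int b) {
    return a + b;
}

/* Force inlining: the body is copied into every caller, so there is
 * no call overhead, at the price of larger caller code. */
__attribute__((always_inline))
static inline int add_inlined(int a, int b) {
    return a + b;
}

int demo(void) {
    return add_called(1, 2) + add_inlined(3, 4);
}
```

Both variants compute the same result; comparing the disassembly of `demo` shows one `call` for the first and none for the second.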
So what does -O3 do? Aggressive inlining, that's what it does, on almost all functions. I don't know exactly how it decides, but it essentially says: I'm going to inline nearly every function I can, which significantly increases the size of the binary. I'm going to show you an example here of how Python looked compiled with -O2 versus -O3 with aggressive function inlining. You increase the size of the binary, that's the cost, but you get those beautiful cache hits as a result. Of course, aggressive inlining can also deteriorate performance: you're putting pressure on memory, and especially on smaller devices, if the process gets too large you'll start swapping. It's not really a problem for Python per se, because once you load Python, the entire text area, the code, is mapped once and would probably be marked not-swappable (that's how I'd do it: don't swap code, let the code stay hot in memory), and then all the Python processes you launch next will share that memory. That's the beauty of shared memory, and you can't do this easily without virtual memory.
Another thing that -O3 does is SIMD: Single Instruction, Multiple Data, which I've talked about on this channel. Say you're summing elements of an array. An add instruction takes two operands, so you'd add 1 and 2, then add 3, then add 4, and so on; to sum eight values you'd do it one scalar add at a time, and we've seen how it's actually more instructions than that. What SIMD does instead is use vectors: load 1, 2, 3, 4 into one special vector register and 5, 6, 7, 8 into another, then add the two vectors with a single instruction. Boom: one instruction, multiple data. This is only applicable if the CPU supports SIMD, of course. So if your CPU supports these instructions, and Python does a lot of this kind of work, the interpreter will use the SIMD version where applicable, and you take advantage of it. That's part of why -O3 is faster.
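A sketch of the kind of loop -O3's auto-vectorizer targets (function name is mine): with SSE/AVX or NEON available, GCC can emit SIMD instructions that add several elements per instruction.

```c
#include <assert.h>

/* Element-wise addition over arrays: the classic auto-vectorization
 * candidate. At -O3, GCC can process several ints per instruction
 * using vector registers instead of one scalar add per element. */
void vec_add(const int *a, const int *b, int *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

Passing `-fopt-info-vec` to GCC reports which loops were vectorized.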
Now, how fast is it? Well, before we talk speed, look at the binary sizes. For those listening, let's do the math: the -O2 build of Python is about 16,000 kilobytes, call it 16 or 17 megabytes. (This data is old, by the way, from July 2019, from a CPython core developer; let's shout him out, Inada Naoki. So who knows, it may be bigger now.) The -O3 build is, expectedly, larger: around 20 megabytes. So you're talking 3 or 4 extra megabytes; whether that's worth it is up to you. But the benchmarks, and the Fedora project's benchmarks are what we're looking at here, show it: there's a list of benchmarks, async generators and all of that, and most of them come out around 1.04x to 1.09x faster. You're saving a few milliseconds here, 100 milliseconds there, 200 milliseconds there; it all adds up. And by the way, if you're not on Fedora, this doesn't affect you at all, because you're probably using an upstream version that's already compiled with this option. You're safe, essentially; you're good.
But I thought I'd talk about this because it's just interesting to understand these compilation options, -O1, -O2, -O3, for the CPython interpreter. I'm going to reference all of this for you guys to read, because I have to credit everybody here; the news isn't mine. I've seen the comments, and people don't seem to like Fedora for some reason; it seems like Fedora wasn't innovating enough, and they were making fun of this article. But I just loved learning a new thing. I knew about these optimizations themselves, but honestly I didn't know about the levels, so I spent some time researching, and I love it. Anything engineering, I love it. See you in the next one.

Check out my operating systems course at oscourse.win; that's a shortcut that directs you to Udemy with a beautiful coupon. Enjoy it: over 20 hours, 21 now, I've added some more lectures. I spent two years working on that course, I learned so much, and you'll see my enthusiasm in it; you'll probably get sick of it. I hope you check it out. Thank you so much, see you in the next one.