LINQ's INSANE Improvements in .NET 9

Nick Chapsas
24 Sept 2024 · 11:26

Summary

TL;DR: The video highlights the performance improvements in .NET 9's LINQ, which can be up to 1,800 times faster in some use cases compared to previous versions. The presenter demonstrates benchmarks comparing .NET 8 and .NET 9, showing significant speed enhancements and reduced memory allocations for common LINQ methods like Any, All, Count, and First. The optimizations include using spans and consolidating iterators, making LINQ more efficient and competitive with handwritten code. The video also covers how these changes can be applied to existing code bases for better performance. Additionally, a new course on behavioral interviews is mentioned.
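
To make the comparison concrete, here is a rough, hypothetical sketch of the kind of BenchmarkDotNet setup the video describes: one project multi-targeting .NET 8 and .NET 9 so that identical LINQ calls can be measured on both runtimes. The class and method names are illustrative, not the ones from the video.

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

// Hypothetical harness: the project would target both net8.0 and net9.0,
// so the same benchmarks run on each runtime and can be compared directly.
[SimpleJob(RuntimeMoniker.Net80, baseline: true)]
[SimpleJob(RuntimeMoniker.Net90)]
[MemoryDiagnoser]
public class LinqBenchmarks
{
    private readonly List<int> _items = Enumerable.Range(0, 1000).ToList();

    [Benchmark]
    public bool Any() => _items.Any(x => x == 999);

    [Benchmark]
    public int First() => _items.First(x => x == 999);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LinqBenchmarks>();
}
```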

Takeaways

  • 🚀 LINQ in .NET 9 can be up to 1,800 times faster than in previous versions in some cases, thanks to targeted performance work.
  • ⚡ Microsoft is focusing on optimizing LINQ to make it much faster, addressing its historical performance issues.
  • 🏢 Some companies avoided using LINQ in the past due to performance concerns, but these improvements make it competitive and sometimes even faster than handwritten code.
  • 🛠 Common methods like 'Any', 'All', 'Count', 'First', and 'Single' have received significant optimizations in .NET 9, leading to up to five times faster performance with no memory allocations.
  • 💡 Microsoft's optimization involves rewriting key LINQ methods using techniques like spans, which help avoid performance bottlenecks.
  • 📊 Benchmark comparisons between .NET 8 and .NET 9 reveal significant speed improvements across various LINQ operations.
  • 🧠 Microsoft's LINQ optimizations include smarter method chaining, consolidating multiple operations into fewer steps to reduce overhead and improve efficiency.
  • 🛑 Internal interface refactoring and cheaper virtual dispatch contribute to LINQ's improved performance in .NET 9.
  • 🔄 Methods like 'Skip' and 'Take' are now consolidated, leading to fewer iterations and better performance when processing collections.
  • 📉 Memory allocations for empty collections in LINQ methods are eliminated in .NET 9, making these operations up to 20 times faster.

Q & A

  • What is the main focus of the video?

    -The video focuses on the performance improvements of LINQ in .NET 9, showing how it's up to 1,800 times faster in some use cases compared to previous versions.

  • Why was LINQ historically considered slow?

    -LINQ was historically slower than equivalent handwritten code because of the overhead of its layered, enumerator-based abstractions, which led some companies to avoid or even ban it over performance concerns.

  • What are some of the specific methods discussed that saw performance improvements in .NET 9?

    -The methods discussed include Any, All, Count, First, and Single, all of which have seen significant performance improvements, with some running up to five times faster and using zero memory allocation.

  • What approach did Microsoft take to improve LINQ performance in .NET 9?

    -Microsoft optimized LINQ in .NET 9 by rewriting certain internal implementations, using spans, and making smarter decisions like avoiding unnecessary memory allocations and consolidating operations like Skip and Take into fewer iterations.

  • How does the performance of LINQ in .NET 9 compare to .NET 8?

    -In benchmarks, LINQ in .NET 9 outperformed .NET 8 significantly, with up to five times faster execution times and zero memory allocations in many common operations.

  • What are spans, and how are they used in .NET 9?

    -Spans are a memory-efficient feature in .NET that allow for accessing and manipulating data without copying it. In .NET 9, spans are used extensively to optimize LINQ operations by reducing memory allocations and improving performance.

  • What specific optimization techniques were highlighted for LINQ in .NET 9?

    -Some key techniques include optimizing data structures like arrays and lists using spans, consolidating iterators in operations like Skip and Take, and optimizing frequently used methods like Where and Select to reduce overhead.

  • How has the memory allocation been improved in LINQ methods in .NET 9?

    -In .NET 9, memory allocation for many LINQ methods has been reduced to zero, particularly in operations on empty collections, which leads to less garbage collection and better application performance.

  • What benefits does optimizing LINQ in .NET 9 bring to developers?

    -The optimizations in LINQ in .NET 9 make developers more productive by allowing them to use LINQ without worrying about performance penalties. The reduced overhead and improved execution times make LINQ a more viable option for production code.

  • What other improvements were made to LINQ in .NET 9 according to the video?

    -Other improvements include enhanced handling of complex operations like Distinct, Append, Reverse, and DefaultIfEmpty, which are now much faster. Additionally, an OrderBy followed by First now avoids copying and fully sorting the data set, and cheaper virtual dispatch further reduces overhead (see the sketch below).
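
As one concrete illustration of that last point, a query shaped like the following benefits from the change: previously the whole sequence had to be sorted just to hand back a single element. The data and names here are made up for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var orders = new List<(int Id, decimal Total)>
{
    (1, 42.50m), (2, 9.99m), (3, 120.00m)   // sample data, purely illustrative
};

// Per the video, .NET 9 avoids fully copying and sorting the sequence when an
// OrderBy is immediately followed by First; earlier versions paid that cost.
var cheapest = orders.OrderBy(o => o.Total).First();

Console.WriteLine(cheapest);   // (2, 9.99)
```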

Outlines

00:00

🚀 Introduction to .NET 9's LINQ Performance Boosts

In this video, Nick introduces how .NET 9 improves the performance of LINQ, making it up to 1,800 times faster in certain cases. He reflects on how Microsoft has prioritized performance optimization for LINQ, a historically slow feature. Nick recalls experiences from companies where LINQ was prohibited due to its inefficiency, and he expresses excitement about how these improvements now allow it to compete with, or even surpass, handwritten code. The video will showcase benchmarks comparing .NET 8 and .NET 9, demonstrating the performance gains achieved without changing the code, simply by updating the .NET version.

05:03

πŸ” Exploring Common Link Methods and Their Performance

Nick dives into common Link methods such as `Any`, `All`, `Count`, `First`, and `Single` and explains their role in typical C# code. He sets up a benchmark to measure the performance difference between .NET 8 and .NET 9. After running the benchmarks, Nick reveals the impressive results, with methods performing up to five times faster and with zero memory allocations in .NET 9. He then shifts focus to understanding how Microsoft achieved these optimizations, highlighting changes made to the underlying implementation, such as the use of `Span` for memory efficiency.
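
The video walks through the .NET 9 source on screen rather than reproducing it; very roughly, the idea behind the rewritten fast path can be sketched like this (a simplified illustration, not the actual LINQ implementation): if the incoming sequence is really an array, scan it as a span and skip the enumerator entirely, otherwise fall back to a normal loop.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int[] data = Enumerable.Range(0, 1_000).ToArray();
Console.WriteLine(FirstWhere(data, x => x > 500));   // 501

// Simplified sketch of the span fast path described in the video: probe for an
// array, scan it as a ReadOnlySpan<int>, and only fall back to the ordinary
// enumerator-based loop when no span can be obtained.
static int FirstWhere(IEnumerable<int> source, Func<int, bool> predicate)
{
    if (source is int[] array)
    {
        ReadOnlySpan<int> span = array;
        foreach (int value in span)
        {
            if (predicate(value))
                return value;
        }
    }
    else
    {
        foreach (int value in source)
        {
            if (predicate(value))
                return value;
        }
    }

    throw new InvalidOperationException("No matching element.");
}
```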

10:04

🛠️ Optimizing Lists and Arrays with .NET 9's Memory Marshalling

Nick delves deeper into the performance enhancements in .NET 9, explaining how lists, which are backed by arrays, benefit from optimizations like the `CollectionsMarshal.AsSpan` method. He demonstrates how .NET 9 takes advantage of arrays' internal structures, allowing developers to extract spans for performance gains. Nick also introduces the dangerous but powerful `Unsafe.As` method, which can be used for low-level operations. These techniques significantly optimize common operations, such as `First`, `Last`, or `Single`, by reducing overhead and memory usage when dealing with arrays and lists.
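
Both helpers mentioned here are public APIs, so the same trick is available outside the framework. A minimal sketch, with made-up data:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

var list = new List<int> { 1, 2, 3, 4, 5 };

// View the list's internal backing array as a span, with no copying.
// Caution: the span is invalidated if the list grows and reallocates.
Span<int> listSpan = CollectionsMarshal.AsSpan(list);
listSpan[0] = 42;                       // writes straight through to the list
Console.WriteLine(list[0]);             // 42

int[] array = { 10, 20, 30 };

// Get a ref to the first element of the array's data, then wrap it in a span.
ref int first = ref MemoryMarshal.GetArrayDataReference(array);
Span<int> arraySpan = MemoryMarshal.CreateSpan(ref first, array.Length);
Console.WriteLine(arraySpan[2]);        // 30
```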

🏎️ Advanced Benchmarks and LINQ Chain Optimizations

Nick runs more complex benchmarks involving operations such as `Distinct`, `Append`, and `Select`, showing how .NET 9 reduces execution time by consolidating iterators and optimizing chains of LINQ methods. He explains that .NET 9 can intelligently merge operations that would normally add overhead, like `Where` and `Select`, into a single iterator. This leads to significant performance improvements when chaining LINQ methods, making code more efficient without sacrificing readability. The results show reductions in memory allocations and execution times, highlighting how well .NET 9 handles more advanced LINQ operations.
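
In user code, the pattern being described is just an ordinary chained query like the one below; according to the video, .NET 9 detects the `Where` + `Select` pair and runs it through one combined iterator instead of two stacked ones. The numbers are example data.

```csharp
using System;
using System.Linq;

int[] numbers = Enumerable.Range(0, 1_000).ToArray();

// Two chained operators. Conceptually this is a Where iterator wrapped in a
// Select iterator; per the video, .NET 9 detects the pair and uses a single
// combined iterator, cutting the per-element overhead.
var doubledEvens = numbers
    .Where(n => n % 2 == 0)
    .Select(n => n * 2)
    .ToList();

Console.WriteLine(doubledEvens.Count);  // 500
```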

💡 Interface Refactoring and Memory Efficiency

Nick elaborates on the behind-the-scenes changes that made these optimizations possible, such as refactoring internal interfaces like `IPartition` and consolidating iterators. He discusses how .NET 9 reduces overhead for methods like `Skip` and `Take` by combining operations that used to create multiple iterators into a single iterator. This refactoring leads to fewer virtual dispatch calls, which boosts overall performance. Nick emphasizes that many LINQ methods have seen improvements, especially for empty collections, which now avoid memory allocation entirely in .NET 9.
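
The `Skip`/`Take` case looks like a typical paging query in user code; per the video, the two calls used to create two separate iterators and are now detected and collapsed into one. A small illustrative sketch:

```csharp
using System;
using System.Linq;

var source = Enumerable.Range(0, 1_000).ToList();

// A paging-style query: previously Skip and Take each added their own
// iterator; the video says .NET 9 now consolidates the pair into one.
var page = source.Skip(100).Take(50).ToArray();

Console.WriteLine($"{page.First()}..{page.Last()}");  // 100..149
```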

🧠 Smarter LINQ in .NET 9: SequenceEqual and Other Enhancements

Nick showcases further examples of performance enhancements in .NET 9, such as the `SequenceEqual` method, which drastically reduces execution time and memory usage by optimizing how data sets are compared. He highlights improvements to methods like `ToLookup`, which can now operate up to 20 times faster. The use of spans and more efficient algorithms has made LINQ a much more attractive choice for production code, allowing developers to achieve high performance with less effort. Nick ends by asking viewers if these changes have altered their view on using LINQ in their own projects.
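
The `SequenceEqual` scenario from the video compares an array against a list holding the same values; a minimal reconstruction of that setup (sizes and values are illustrative) looks like this:

```csharp
using System;
using System.Linq;

int[] expected = Enumerable.Range(0, 10_000).ToArray();
var actual = Enumerable.Range(0, 10_000).ToList();

// Comparing an array with a list of the same values: the video reports this
// dropping from roughly 26 microseconds to under 1 microsecond in .NET 9.
bool equal = expected.SequenceEqual(actual);

Console.WriteLine(equal);  // True
```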

Keywords

💡.NET 9

.NET 9 is the version of Microsoft's .NET platform featured in the video. The video demonstrates how .NET 9 offers significant performance improvements over previous versions, especially in how it handles operations with LINQ (Language-Integrated Query). These optimizations, such as reduced memory allocation and faster method execution, allow for much more efficient processing of data compared to .NET 8.

💡LINQ (Language-Integrated Query)

LINQ is a powerful feature of C# that allows developers to query collections of objects easily. In the video, LINQ is shown to have undergone major performance improvements in .NET 9, making operations like 'Any', 'All', 'Count', and 'First' execute significantly faster. The video's theme revolves around demonstrating how LINQ is no longer a bottleneck in performance due to these optimizations.

💡Benchmarking

Benchmarking refers to the process of measuring the performance of code under specific conditions. In the video, the speaker runs benchmarks comparing .NET 8 and .NET 9 to demonstrate the performance improvements achieved in .NET 9. For example, running simple LINQ operations shows performance gains of up to 5 times faster in .NET 9 without changing the code itself.

💡Memory Allocation

Memory allocation is the process by which the system assigns memory to various operations or data. In the context of the video, the speaker highlights how .NET 9 significantly reduces memory allocation during common LINQ operations. This improvement leads to better performance, reduced garbage collection, and a more efficient application runtime, especially when handling large data sets.

💡Span<T>

Span<T> is a type introduced in .NET for working with slices of memory in a more efficient way. The video explains how many LINQ optimizations in .NET 9 revolve around converting data structures like arrays and lists into spans, allowing for much faster data access and manipulation. By using spans, methods like 'First' and 'Any' become considerably faster, avoiding unnecessary memory allocations.
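
As a small, self-contained illustration of the concept (independent of LINQ), slicing an array through `AsSpan` yields a view over the same memory rather than a copy:

```csharp
using System;

int[] data = { 1, 2, 3, 4, 5, 6 };

// A span is a view over existing memory, so slicing allocates nothing.
Span<int> middle = data.AsSpan(2, 3);    // views the values 3, 4, 5
middle[0] = 99;                          // writes through to the array

Console.WriteLine(data[2]);              // 99
Console.WriteLine(middle.Length);        // 3
```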

💡Unsafe Code

Unsafe code refers to a feature in C# that allows developers to bypass certain safety checks for the sake of performance. The speaker mentions how .NET 9 uses unsafe methods like 'Unsafe.As' to improve performance by allowing direct access to memory. Although this can be dangerous if used incorrectly, it is highly effective in speeding up LINQ operations when used properly in the .NET framework.
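
A deliberately tame illustration of the `Unsafe.As` API mentioned here is shown below; it reinterprets a reference whose concrete type is already known, skipping the runtime cast check. This is only a usage sketch, not code taken from LINQ.

```csharp
using System;
using System.Runtime.CompilerServices;

object untyped = new int[] { 1, 2, 3 };

// Unsafe.As<T> skips the runtime type check that a normal cast performs.
// That is only safe here because we know the object really is an int[].
int[] numbers = Unsafe.As<int[]>(untyped);

Console.WriteLine(numbers[1]);  // 2
```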

💡Enumerables

Enumerables are collections of data that can be iterated over, such as arrays or lists. In the video, the speaker discusses how methods like 'First', 'Any', and 'Select' in LINQ often involve looping through enumerables. The optimizations in .NET 9 allow these methods to handle enumerables more efficiently, often consolidating multiple operations into a single step, thus speeding up the processing.

💡Garbage Collection

Garbage Collection (GC) is the process by which the .NET runtime automatically frees up memory that is no longer in use. The video emphasizes how the reduced memory allocations in .NET 9 lead to fewer garbage collection cycles. This, in turn, helps improve application performance by minimizing the pauses caused by GC, especially during frequent LINQ operations.

💡Chaining

Chaining refers to the process of combining multiple LINQ methods together, such as 'Where', 'Select', and 'OrderBy'. The video explains that in .NET 9, the framework is now smart enough to optimize these chains by reducing the overhead that was previously associated with multiple method calls. This results in fewer intermediate operations, making the chained LINQ queries faster and more memory-efficient.

💡IEnumerable Optimization

IEnumerable optimization in .NET 9 is the process by which Microsoft has improved how iterators in LINQ handle data collections. The video discusses how operations like 'Skip' and 'Take', which previously required separate enumerators for each operation, are now combined into a single, more efficient process. This dramatically reduces the computational overhead and boosts performance in large-scale applications.

Highlights

LINQ in .NET 9 can be up to 1,800 times faster in some use cases due to major performance improvements.

Historically, LINQ had performance issues, and some companies even banned its use due to inefficiency.

The improvements in .NET 9 allow LINQ to sometimes outperform handwritten code, such as manual loops.

Benchmark tests show that common methods like Any, All, Count, First, and Single are significantly faster in .NET 9 compared to .NET 8.

In the benchmarks, results showed up to five times faster execution with zero memory allocation for methods like Any, All, Count, First, and Single.

One of the primary optimizations in .NET 9 is the extensive use of spans, improving memory handling and performance.

Microsoft optimized LINQ by rewriting methods to leverage spans, arrays, and collection marshalling for better memory management.

The improvements in .NET 9 extend beyond basic methods to complex operations like Select, Reverse, and Distinct, which also see major speed boosts.

By optimizing how LINQ chains methods together, .NET 9 reduces the overhead caused by multiple wrappers in common operations.

The Reverse-over-a-range benchmark, for example, runs almost twice as fast, and Distinct operations also show a massive reduction in time and memory usage.

In .NET 9, optimizations like combining iterators for methods such as Skip and Take reduce the virtual dispatch calls, improving performance.

LINQ now recognizes patterns in operations like Where and Select, consolidating them into a single iterator, which boosts performance for existing codebases without changes.

New internal refactoring in .NET 9 consolidates interfaces like IPartition into fewer, more efficient structures, improving virtual call efficiency.

Memory allocations have been minimized, with methods like Join, Reverse, and ToLookup experiencing up to 20 times faster execution when working with empty collections.

Developers are encouraged to adopt these performance improvements, as they result in substantial productivity gains without sacrificing performance.

Transcripts

00:00

Hello everybody, I'm Nick, and in this video I'm going to show you how LINQ in .NET 9 can be up to 1,800 times faster in some use cases, with some amazing performance improvements done by the .NET team. Now, if you've been following the LINQ journey up until now, you know that Microsoft is really focusing on making LINQ as fast as it can be, because it was historically something pretty slow. In fact, I worked in companies where we were not allowed to use LINQ at all because of performance reasons, so it's very nice to see Microsoft actually optimizing LINQ to a very good degree, to the point where it's really competing with, or in some cases surpassing, handwritten code with things like loops, for example. In this video I'm going to show you some of the most important LINQ performance improvements in .NET 9, and I'm also going to show you how Microsoft did it. And if you have a suspicion on how they did it, leave a comment down below.

00:52

Okay, so let me show you what I have here. I have a .NET 9 project, but it's actually targeting both .NET 8 and .NET 9, and I'm going to be running benchmarks against both .NET 8 and .NET 9 and comparing the difference, because we want to see, without changing anything but the .NET version, how much faster our code is. Now, I want to start from this one, the Benchmarks4 class, because I think that's an example of something that all of us are using in some capacity. So what we have here is a list, where we create a range of 1,000 items from 0 to 999 and then we say ToList, so we create a list, and then we use the Any, All, Count, First or Single methods. These are extremely common methods in C#; I see them all the time. So what I'm going to do is grab the Benchmarks4 name and run some benchmarks: I'm going to say BenchmarkRunner.Run and just see where we stand with performance with these very simple things. All they're doing is taking the input and then running a check on it; it's very common code in C#, you see this all the time. So let's go ahead and run the benchmarks and see what we get back. My configuration will allow me to run it both in .NET 8 as a baseline and also in .NET 9 in the same execution.

02:09

Now, while this is running, and in case you missed it: on Dometrain we're running our last week of the back to school discount, so until the end of September you can get 30% off any course with code BTS30. And actually, today we just released a brand new course on Dometrain called "Career: Nailing the Behavioral Interview", co-authored by two authors: Nick Cosentino, a principal software engineering manager at Microsoft, and Murphy, a software engineering manager at Yelp. The behavioral interview is something many people are actually failing interviews on; I know we failed many people because of the behavioral aspect, even though they were very good technically. So it's a core, fundamental part of being a developer, and many people skip preparing for it because they think that if they write good code, that's all there is to it. Now, the great thing about this course is that it still gets the benefit of the 30% discount, but it only applies for the first few of you, so I'm going to put a link in the description. Check it out if it's for you, and I can guarantee you that if you learn everything in that course, you will never fail a behavioral interview again.

03:03

Okay, so results are back, and let's see what we have here. As you can see, we have a massive, massive improvement: five times faster, five times faster, five times faster, three times faster, two and a half times faster, and again five times faster, with zero memory allocation for any of those methods. So we go from 1.1 microseconds to around 200 nanoseconds for Any, All, Count, First and Single.

03:27

Now, how is Microsoft doing this? Because for me, I believe that you should always know how these optimizations occur, so you can actually use that knowledge in your own code if you have the opportunity. So if we go over here and we go into any of those methods, let's go into First, because we actually kind of covered this in the last Code Cop video, but I want to show you again. If we go into the .NET 8 implementation of this method, you're going to see that all First is doing is getting the enumerable and the predicate, and then we go into the TryGetFirst method, which does a few checks and then loops around that enumerable. So you have all of these MoveNext and Current calls because of the enumerator. In .NET 9, if we use the .NET 9 version, as you're going to see, this was completely rewritten. If we go into TryGetFirst now, the first thing we're doing is checking: hey, can I actually get the span? Yes, it's always spans, it's only spans, you should know that by now. So can we try to get the span out of that enumerable coming in? And how are we doing this? Well, we go in and we say: is it an array? If it is, use Unsafe.As, which by the way deserves a video of its own; it's an amazing, very, very dangerous feature, but if you know how to use it, it's incredible. Leave a comment down below if you want me to make a video on that. If we can do that, then we get the span out of it. If it is a list, then we're using the CollectionsMarshal.AsSpan method, which by the way you can use too, to get a span out of a list. The way this method works is by going in and actually accessing the internal array that every list is backed by, because, sorry for all the jumping, if we go into the list, every list is actually backed by an array; a list is a wrapper around an array that knows how to resize it effectively and then adds a bunch of methods on top of it. And if we can do that, then we're getting the span out and returning it, with some very, very interesting stuff like MemoryMarshal.GetArrayDataReference, again a method you can use as well if you know what you're doing; it's one of my favorite methods to use. If we can do that, then we get out the span; otherwise we return an empty span, basically, and we say we could not find the span, so go and fall back to what you were doing before. If we go back into the original implementation: if you can't find the span, then fall back to the slower versions, but for the majority of the use cases, where you're using an array or a list, this will be heavily, heavily optimized. It's an amazing feature you might not even know about, and I do think you should, because those are improvements you can actually make in your own code if you want to.

06:07

Now, there are plenty more improvements here. Let's take a look at the Benchmarks1 class. So what do we have here? We have a bunch of enumerables of 1,000 items and we do a few operations. For example, here we turn it into an array and we say give me the distinct values in this array; or here we append a value and we select the value of each item multiplied by two; here we have a reversal; here we have a DefaultIfEmpty and we have a selection again and a multiplication; we have the ToList, Skip and Take; we have a Union; and we have the First, Last, Count, ElementAt and First methods on any of those enumerables. I'm going to run this benchmark because I find it very, very interesting. Let's go ahead and add this. This just shows you that you can do more complex operations in LINQ, and you'll see how they perform, and it's very clever how Microsoft actually optimized this to be that fast. They really did a lot of changes behind the scenes to make this possible, because when you chain methods in LINQ, well, you're adding a wrapper on top of a wrapper on top of a wrapper on top of a wrapper, and you have this chain of responsibility where this has to happen first, and then that has to happen first, and you add one thing on top of the other. But if LINQ were smarter, it would be able to combine some of those operations and consolidate some of those iterators into fewer jumps from one method to the other and from one operation to the other, and that's exactly what is happening here. Let's wait for this benchmark to return, and I'm going to show you how everything works behind the scenes.

07:44

Okay, so results are back, and let's see what we have here. As you can see, Distinct First goes from 39 nanoseconds down to eight, with no memory allocated. The memory aspect of it is always massive to me, because again: less allocation, less garbage collection, less pausing in your application. But look at this: Append Select Last goes from 3.2 microseconds to two nanoseconds. Like, crazy; that's the 1,600 times faster one, just crazy. Reverse range takes half the time; DefaultIfEmpty Select ElementAt goes from 2.8 down to about a third of that, three times faster here, and again many times faster here.

08:28

So how is Microsoft doing this? Well, as described in Stephen Toub's "Performance Improvements in .NET 9" blog post, which these examples are coming from by the way, this is mostly down to consolidating internal interfaces and removing overhead. An example would be that an OrderBy followed by a First will now avoid full data set copies or sorts, which it wouldn't before, leading to way better performance. You also see a lot of internal interface refactoring. I don't know if you remember, but there was this interface called IPartition, which was actually used by many enumerables in .NET, that is not used anymore; instead, these things are now consolidated into an iterator, which I don't actually know if we have access to. It doesn't look like we do, but if we go over here, let's take a look at .NET 9, I don't know if I will find it, but I will try. Yeah, here we go: Iterator<TSource>. This is now used as another further optimization; it basically leads to fewer checks and cheaper virtual dispatch calls, leading to, again, better performance.

09:29

A very similar story can be seen here with Skip and Take. Skip and Take before would have two iterators, but now they're detected and consolidated into one, skipping another call. Ultimately, LINQ in .NET 9 just got way, way, way more clever, and it's more self-aware. You have methods like ToList or ToArray knowing what came before and being able to optimize how to deal with that operation. The most common problem with this was the Where and Select methods, which would create two iterators; if .NET 9 now sees Where and Select, it's going to make one iterator. A massive boost to all of your existing code bases just for updating the .NET version.

10:08

There are more improvements over here. Things like the Any call have a massive improvement, and then we have all these other methods like Chunk, Distinct, GroupBy, Join, ToLookup, Reverse, Select, SelectMany, SkipWhile, TakeWhile and Where; all of that has been optimized when it comes to empty collections. Before .NET 9, all of these methods would allocate some memory; in .NET 9 they don't allocate any memory, and they can be up to 20 times faster, with the ToLookup method, for example, having a massive improvement.

10:35

And in Benchmarks5 we have a method like SequenceEqual, where you have two different sequences as enumerables: we have an array of these values and then we have a list of the same values. Executing this would go from 26 microseconds to less than 1 microsecond, around 900 nanoseconds, just a massive, massive improvement, and these improvements are mostly down to making LINQ cleverer and also using spans more. It seems that Microsoft is really investing in LINQ, which I really, really like, and I've actually started using LINQ more and more in my production code, because it makes me so, so much more productive; the minuscule amount of performance I might be losing just doesn't matter anymore. But now I want one from you: has your opinion changed on LINQ, and what do you think about those changes? Leave a comment down below and let me know. Well, that's all I had for you in this video. Thank you very much for watching, and as always, keep coding.

Related Tags
LINQ optimization, .NET 9, performance boost, C# coding, Microsoft updates, faster queries, memory efficiency, developer tips, code improvement, software engineering