A deep dive into optimizing LCP
Summary
TL;DR: In this video, Philip Walton discusses optimizing Largest Contentful Paint (LCP), a core web vitals metric. He explains the importance of LCP, provides a framework for improving it, and offers practical steps like prioritizing LCP resource loading, reducing render delays, optimizing resource load times, and minimizing Time to First Byte (TTFB).
Takeaways
- 😀 Largest Contentful Paint (LCP) is a core web vitals metric that measures the time from when a user starts loading a web page until the largest image or text block within the viewport finishes rendering.
- 🏁 Google recommends aiming for an LCP of 2.5 seconds or less for at least 75% of all page visits to provide a good loading experience.
- 📊 The 75th percentile of LCP times is crucial, and it represents the value that is 75% or 3/4 of the way through a sorted list of LCP times.
- 🔍 Improving LCP involves optimizing the experience for enough users so that at least 75% of them are within the good threshold, not just targeting a specific set of users.
- 🌐 LCP is the core web vitals metric that sites struggle with the most, with only 52.7% of sites meeting the good LCP threshold.
- 🔍 Optimizing LCP involves breaking it down into smaller, more manageable problems: Time To First Byte (TTFB), LCP resource load delay, LCP resource load time, and element render delay.
- 🚀 The key to improving LCP is to identify bottlenecks in the loading and rendering process and address them, such as reducing TTFB, optimizing resource load times, and ensuring elements render quickly.
- 🛠️ General best practices for optimizing LCP include prioritizing the loading of the LCP resource, reducing render-blocking resources, optimizing image and font resources, and using CDNs for faster delivery.
- 🌐 Real-world data from HTTP Archive suggests that resource load delay might be the biggest bottleneck for LCP, indicating a need for better prioritization and loading strategies.
- 📈 A step-by-step approach to optimizing LCP includes eliminating unnecessary resource load delay, ensuring the LCP element can render as soon as its resource finishes loading, reducing the load time of the LCP resource, and improving Time To First Byte.
Q & A
What does LCP stand for and why is it important?
-LCP stands for Largest Contentful Paint. It is important because it measures the time from when a user starts loading a web page until the largest image or text block within the viewport finishes rendering. Optimizing LCP is crucial for providing a good user experience and is one of the three core web vitals metrics recommended by Google.
What is the recommended LCP score and how does it relate to the user experience?
-Google recommends that developers aim for an LCP of 2.5 seconds or less for at least 75% of all page visits. This means that if 75% of the time your pages can render the largest image or text block within 2.5 seconds, then those pages are classified as providing a good loading experience.
How is the 75th percentile of LCP determined and why is it significant?
-The 75th percentile of LCP is determined by taking the value that is 75% or 3/4 of the way through a sorted list of LCP times from fastest to slowest. It is significant because it represents the LCP time for the majority of users, indicating the overall performance of the page load.
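As a minimal sketch of that percentile calculation (the sample values below are made up, not from the video):

```javascript
// Sketch: compute the 75th percentile the way the video describes it:
// sort the LCP samples ascending, then take the value 3/4 of the way
// through the list.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  // Index of the value p% of the way through the sorted list.
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

// 36 visits, as in the video's example: the 75th percentile is the
// 27th value in the sorted list.
const lcpSamples = Array.from({length: 36}, (_, i) => 500 + i * 100);
console.log(percentile(lcpSamples, 75)); // → 3100 (the 27th value)
```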
What happens if performance optimizations only improve the already fast LCP experiences?
-If performance optimizations only make the already fast LCP experiences faster, the 75th percentile does not change. To improve the LCP scores at the 75th percentile, the experience for enough users must be improved so that at least 75% of them are within the good threshold.
Why do some developers struggle with optimizing LCP?
-Developers struggle with optimizing LCP because there are many factors to consider when optimizing load performance. Often, the optimizations they try do not work or do not help much, making it difficult to identify what will actually make a difference for their specific site.
What are the four main subparts of LCP and how do they contribute to the total LCP time?
-The four main subparts of LCP are: 1) Time To First Byte (TTFB), which is the time from when the user initiates page loading until the browser receives the first byte of the HTML document response. 2) LCP resource load delay, which is the time between TTFB and when the browser starts loading the resource needed for LCP. 3) The time it takes to load the LCP resource itself. 4) The element render delay, which is the time from when the LCP resource finishes loading until it's rendered on the screen. These subparts collectively add up to the full LCP time.
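A minimal sketch of that breakdown, using plain numbers (milliseconds) in place of the browser's Navigation Timing and Resource Timing entries; all names and values are illustrative:

```javascript
// Sketch: split an LCP value into the four subparts described above.
// In a real page these inputs would come from performance entries;
// here they are plain numbers so the arithmetic is easy to follow.
function lcpSubparts({responseStart, lcpResource, lcpTime}) {
  const ttfb = responseStart;
  // If the LCP element needs no resource (e.g. text using a system
  // font), both the load delay and the load time are zero.
  const loadStart = lcpResource ? lcpResource.requestStart : ttfb;
  const loadEnd = lcpResource ? lcpResource.responseEnd : ttfb;
  return {
    ttfb,
    loadDelay: loadStart - ttfb,
    loadTime: loadEnd - loadStart,
    renderDelay: lcpTime - loadEnd,
  };
}

const parts = lcpSubparts({
  responseStart: 400,
  lcpResource: {requestStart: 900, responseEnd: 1700},
  lcpTime: 2000,
});
// The four subparts add up to the full LCP time:
// 400 + 500 + 800 + 300 = 2000, with no gaps or overlap.
```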
Why is it recommended to focus on optimizing the LCP resource load delay?
-Optimizing the LCP resource load delay is recommended because it ensures that the LCP resource is prioritized and starts loading as early as possible after the HTML document is received. This can significantly impact the overall LCP time and user experience.
What are some general best practices for reducing the resource load time of the LCP element?
-General best practices for reducing the resource load time include optimizing image and web font files, setting proper caching headers, using a CDN to serve resources closer to users, and potentially using server-side rendering or pre-rendering pages as static files.
How can developers use the 80-20 principle in optimizing LCP?
-The 80-20 principle suggests that about 80% of the time should be spent making network requests needed to render the LCP element, and 20% of the time should be allocated to everything else. This principle helps in identifying opportunities to improve LCP by focusing on the most impactful optimizations first.
What is the role of server-side rendering in optimizing LCP?
-Server-side rendering plays a crucial role in optimizing LCP by allowing the HTML that's delivered to already contain the markup when the browser receives it. This means the browser doesn't have to wait for the JavaScript to finish loading before it can render the images, reducing the element render delay.
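A minimal sketch of the idea, not tied to any particular framework: the server emits the `img` markup directly, so the browser can discover and render the LCP image without waiting for client-side JavaScript. The function name and URL are illustrative.

```javascript
// Sketch: server-side rendering puts the LCP image into the HTML
// response itself, so the browser discovers it immediately instead of
// waiting for JavaScript to insert it.
function renderPage(photoUrl) {
  return `<!doctype html>
<html>
  <body>
    <!-- LCP image is already present in the HTML the browser receives -->
    <img src="${photoUrl}" alt="Main photo">
  </body>
</html>`;
}

const page = renderPage('/photos/main.jpg');
```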
Outlines
🌐 Introduction to LCP and Its Importance
Philip Walton introduces the concept of Largest Contentful Paint (LCP), one of the three core web vitals metrics, which measures the time it takes for the largest image or text block to finish rendering after the user starts loading a web page. He emphasizes the importance of optimizing LCP, aiming for a score of 2.5 seconds or less for at least 75% of all page visits. Walton also explains the concept of the 75th percentile and how it relates to user experience, highlighting the need for developers to focus on improving the loading experience for a broad spectrum of users.
🔍 Breaking Down LCP into Subparts
The script delves into the components that contribute to LCP, focusing on the HTML document and the resource needed to render the LCP element. It breaks down LCP into four subparts: Time To First Byte (TTFB), LCP resource load delay, the time to load the LCP resource, and the element render delay. The speaker illustrates how each part contributes to the overall LCP and emphasizes the importance of optimizing each subpart to achieve a faster loading experience. The goal is to minimize non-essential delays and prioritize network requests needed for the LCP element.
📈 Analyzing LCP Subpart Timings
Using data from the HTTP Archive, the script analyzes the distribution of LCP subpart timings across different web pages. It reveals that resource load delay might be the biggest bottleneck for LCP, contrary to the common belief that image load times are the primary issue. The speaker discusses the limitations of lab data and the need for real user data to accurately assess LCP optimization. Despite these limitations, the data suggests that there is room for improvement in optimizing the LCP resource load.
🛠️ Step-by-Step LCP Optimization Strategy
The script outlines a four-step approach to optimize LCP: 1) Eliminate unnecessary resource load delay by prioritizing the LCP resource, 2) Ensure the LCP element can render as soon as its resource finishes loading, 3) Reduce the resource load time by optimizing images and web fonts, and 4) Deliver the initial HTML document as fast as possible. Each step is designed to address specific bottlenecks and improve the overall loading experience.
🖼️ Optimizing a Real-World Web Page
Philip Walton demonstrates the optimization process using a demo page featuring a photo slideshow viewer. He explains how to identify and address issues in the resource load delay and element render delay. Techniques such as using 'preload' or 'priority hints' to start loading the LCP image earlier, and server-side rendering to prevent JavaScript from blocking image rendering, are discussed. The goal is to ensure that the LCP image starts loading as early as possible and renders immediately after loading.
🌐 Final Steps and Additional Resources
The final part of the script covers the optimization of image formats and sizes, using tools like Squoosh CLI to convert images to more efficient formats like AVIF or WebP. It also discusses the use of the 'picture' element to conditionally load the best image version based on browser capabilities and screen size. The speaker highlights the importance of server-side optimizations, such as proper caching headers, and encourages developers to use real user performance data for effective optimization. Resources for further learning on optimizing LCP are provided.
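The `picture` element technique mentioned above can be sketched like this; the browser downloads only the first format it supports, falling back to the plain `img` (file names are illustrative, not from the video):

```html
<picture>
  <source type="image/avif" srcset="photo.avif">
  <source type="image/webp" srcset="photo.webp">
  <img src="photo.jpg" alt="Main photo" width="1200" height="800">
</picture>
```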
Keywords
💡Largest Contentful Paint (LCP)
💡75th Percentile
💡Resource Load Delay
💡Time To First Byte (TTFB)
💡Core Web Vitals
💡Performance Optimization
💡Critical Rendering Path
💡Real User Monitoring (RUM)
💡Server-Side Rendering (SSR)
💡Content Delivery Network (CDN)
Highlights
Largest Contentful Paint (LCP) is one of the three core web vitals metrics, representing the time from when a user starts loading a web page until the largest image or text block within the viewport finishes rendering.
Google recommends aiming for an LCP of 2.5 seconds or less for at least 75% of all page visits to provide a good loading experience.
Understanding the 75th percentile is crucial, as it represents the value at 75% or 3/4 of the way through a sorted list of LCP times.
Performance optimizations that only improve already fast LCP experiences or slightly improve poor experiences will not change the 75th percentile.
To improve LCP scores at the 75th percentile, focus on optimizations that improve the experience for enough users so that at least 75% are within the good threshold.
LCP is the core web vitals metric that sites struggle with the most, with only 52.7% of sites meeting the good LCP threshold.
Optimizing LCP is complex due to the many factors involved in load performance, making it difficult for developers to identify effective optimizations.
Breaking down LCP into smaller, more manageable problems can help in addressing each separately and effectively.
The two most important resources for LCP optimization are the HTML document and the resource needed to render the LCP element.
LCP can be broken down into four subparts: Time To First Byte (TTFB), LCP resource load delay, LCP resource load time, and element render delay.
Understanding the ideal values for each LCP subpart is key to identifying bottlenecks and making effective optimizations.
A well-optimized page should spend about 80% of the time making network requests and 20% of the time on everything else.
Lab data from HTTP Archive suggests that resource load delay might be the biggest bottleneck for LCP on the web, rather than image load times.
A step-by-step approach to optimizing LCP includes eliminating unnecessary resource load delay, ensuring the LCP element can render as soon as its resource finishes loading, reducing the load time of the LCP resource, and delivering the initial HTML document as fast as possible.
Preloading the LCP image or using priority hints can help in starting the load of the LCP resource earlier.
Server-side rendering or pre-rendering pages as static files can help in reducing element render delay by ensuring the LCP element is contained within the HTML response received from the server.
Optimizing image formats and sizes, and using the picture element to conditionally load the best version, can significantly reduce the load time of the LCP resource.
Applying proper caching headers and using a CDN can help in reducing the time to first byte (TTFB), which affects everything that comes after it.
Transcripts
[MUSIC PLAYING]
PHILIP WALTON: Hey, everyone.
I'm Philip Walton.
And, today, we're going to be doing a deep dive
into optimizing LCP.
But, first, I want to quickly recap what LCP is
and explain why I think it's so important for developers
to fully understand this metric and know
how to improve their scores.
LCP stands for Largest Contentful Paint.
It's one of the three core web vitals metrics.
And it represents the time from when
the user starts loading a web page until the moment when
the largest image or text block within the viewport
finishes rendering.
At Google, we recommend that developers
aim for an LCP of 2.5 seconds or less for at least 75%
of all page visits.
In other words, if, 75% of the time, your pages
can render the largest image or text block within 2.5 seconds,
then we would classify those pages
as providing a good loading experience.
But I know that sometimes that whole 75th percentile
bit can be confusing.
So let's take a closer look at exactly what that means.
Here is an example distribution
of all visits to a particular page sorted in order of LCP
times from fastest to slowest.
In this chart, each bar represents a single loading
experience of a real person visiting this page.
On most sites, you'd probably have thousands or millions
of bars.
But, for the purposes of making it easy to visualize,
I'm showing an example page that only has 36 visits.
So to get the 75th percentile from a list
of values like this one, all you have to do
is take the value
that is 75%, or 3/4, of the way through the list.
In this example of 36 data points,
the 75th percentile corresponds to the 27th value
in the list, which, here, is just under three seconds.
So it's classified as needs improvement.
Remember that to be classified as good,
an LCP value must be 2.5 seconds or less.
But, anyway, the main reason I'm showing this
to you, this distribution of real user values,
is I want you to take a look at what
happens if we were to implement a performance optimization that
would make all of the already fast LCP experiences even
faster.
Did you notice that, even though the LCP times improved
for these users, the 75th percentile did not change?
Similarly, if the site were to improve the poor experiences,
so that they were slightly faster though still poor,
it would also not change the 75th percentile.
If you want to improve your LCP scores at the 75th percentile,
the only way to do that is to improve the experience
for enough users, so that at least 75% of them
are within that good threshold.
And, generally speaking, the best way to do that
is to make optimizations that improve the experience
across the board, not just targeting
a specific set of users.
Of course, you can make optimizations
that target specific users if you notice specific problems.
But, in this talk, I'm going to focus on general LCP
best practices that apply to all types of situations.
So I've covered what LCP is, as well as
how you should approach optimizing it
for a broad spectrum of users.
But another important topic is why
I'm focusing on just LCP in this talk today instead
of the other core web vitals metrics.
Well, based on data from the Chrome User Experience Report,
of the three core web vitals metrics,
LCP is the one that sites struggle with the most.
Only 52.7% of sites meet the good LCP threshold
compared to much higher rates for CLS and FID.
Moreover, LCP pass rates are improving at a slower pace
than the other metrics, which also
suggests that developers are having more trouble optimizing
for it.
We know that sites are definitely
trying to improve their core web vital scores because we've
seen lots of improvement in CLS over the past few months.
But, clearly, LCP is giving developers a bit more trouble.
So this raises the question, what
makes LCP so hard to optimize?
I'm sure there are many reasons for this.
But I suspect that a big reason is
that there are just so many things
to think about when optimizing load performance.
And I know from talking to developers that, in many cases,
they're trying really hard to optimize LCP.
But the things that they're trying just aren't working,
or they aren't helping very much.
They can't figure out what they need
to do that will actually make a difference
for their specific site.
So LCP is a big, complex problem.
But I find that when you're facing a big problem that's
hard to solve, it's helpful if you first break it down
into smaller, more manageable problems
and address each of those separately.
And I think we can do exactly that with LCP.
So in the rest of this talk, I'm going
to present a framework for how I recommend
that developers approach improving LCP on their sites.
OK, here, we have an example waterfall
from a pretty typical page load containing CSS, JavaScript,
and image resources.
And while all of these network requests
are important, in general, for the sake of optimizing LCP,
you really only need to be focusing
on two, the HTML document and then
whatever other resource may be needed
to render the LCP element.
In this case, the LCP element is an image.
But the same principle would apply
for a text node that needed to load a web
font before it could render.
So now that we've identified the two most important resources,
we can use the relevant timing attributes of those resources
to break down LCP into its most important subparts.
The first subpart is the time from when the user initiates
loading of the page until when the browser receives
the first byte of the HTML document response.
This is commonly referred to as Time To First Byte, or TTFB.
This time is important
because it represents the first moment
that the browser is able to start discovering
additional resources that are needed to render the page,
including the resource needed to render the LCP element,
which we'll get to in a bit.
The second subpart is the LCP resource load delay.
This is the delta between TTFB and when
the browser starts loading the resource needed for LCP.
In some cases, the LCP element can
be rendered without loading any additional resources,
like if the LCP element is a text node using a system font.
And for those pages, the resource load delay is zero.
In general, you want your resource load delay
to be as small as possible.
The third subpart is the time it takes to load
the LCP resource itself.
Again, if the LCP element on your page
doesn't require a resource request,
then this time will also be zero.
And lastly, the fourth subpart of LCP
is the element render delay.
This is the time from the moment your LCP resource finishes
loading until it's actually rendered to the screen.
So every single page can have its LCP value broken down
into these four subparts.
There's no overlap or gaps in between them.
And, collectively, they add up to the full LCP time.
To illustrate that point, let's take a look at what
happens if we were to reduce the resource load time
part in this example.
In this case, when we reduce the network load time,
the element render delay gets extended
by the exact same amount of time.
The time just shifts from one part to a different part.
And so LCP doesn't change.
That's because, in this example, the page
needs to wait for the JavaScript, those yellow bars
at the bottom, to finish loading.
Since, here, the JavaScript is responsible for adding the LCP
element to the page.
So I want to pause here for a moment
and really emphasize that last point.
Because I think this is where a lot of developers
get frustrated.
When they search for posts online telling them
how to improve their LCP, one of the most common pieces
of advice is to optimize their images.
But optimizing your images will only
affect this one part of LCP.
And if this part isn't your bottleneck, then reducing it
won't help you improve your score,
as demonstrated in this example.
So the key to improving LCP is to figure out
where your bottlenecks are.
And a good way to do that is to understand what
the ideal or recommended values are for each of these LCP
subparts that I've just introduced.
And at a high level, the advice is pretty simple.
You want to be spending the bulk of your time making
network requests that are needed to render the LCP element.
And you want to minimize all other time as much as possible.
Anything that gets in the way of your pages
starting to load the LCP resource as soon as possible
or rendering the LCP element as soon as
that resource is done loading is essentially wasted time.
So it's important to eliminate those times, if you can.
So given that general principle, this
is roughly how these LCP subparts should break down
on a well-optimized page.
The total time in the right column
is based on the goal of 2.5 seconds for LCP.
And the other times are just a percentage of that.
So notice how about 80% of the time
is allocated to making network requests.
And 20% of the time is allocated to everything else.
Later in the talk, I'll walk through an example
of a real-world performance optimization.
And I'll use this 80-20 principle
to identify opportunities to improve.
And speaking of opportunities to improve,
I bet you're probably curious to know what the breakdown of time
spent in each LCP subpart looks like for sites
in the real world.
So, unfortunately, we don't have real user data
for these specific metrics yet.
But we do have lab data from HTTP Archive.
And that can give us some insight
into answering this question.
Here is a chart that shows the breakdown of LCP subpart
timings from 5.3 million WebPageTest runs,
which is every single page in HTTP Archive where
the LCP element was an image with a URL source.
To make this chart, I took all of those
runs and sorted the results by LCP value
from fastest to slowest.
And then I broke it down into five buckets
where the top bucket represents the fastest 20% of web pages,
and the bottom bucket represents the slowest 20%.
Within each bucket, I took the average LCP subpart time value
to create each stacked bar, and so the total bar length
is the average LCP value within that bucket.
And while this chart shows the absolute timings
for each subpart, and you can visually
see how the total time spent in each part
gets worse as you move from bucket to bucket,
I actually think a more interesting way
to visualize the same data is to look
at each subpart as a percentage of the total LCP time.
So here's what that looks like.
And to be honest, when I first looked at this data,
I was pretty surprised at the results.
I was expecting that the majority of the LCP time
would be spent loading large unoptimized images.
In other words, I was expecting the green bars
to be a lot wider, especially in the bottom row.
But this data suggests that image load times might not
actually be the main bottleneck for LCP on the web.
From the amount of purple I'm seeing in these results,
it seems like the biggest bottleneck might actually
be resource load delay.
Now, I want to stress that this is lab data from WebPageTest
runs.
It's not real user data from the wild.
This data uses a single network and device configuration
for every run.
So it's definitely not representative of the myriad
of devices and networks used in the real world.
Also, the WebPageTest runs used by HTTP Archive
don't contain repeat visits.
So things like the user's cache state
are not factored into this at all.
So I don't think we can say with any certainty
that this is exactly how LCP breakdowns look
in the real world.
But what I think we can say with high confidence
is that sites are definitely not optimizing LCP resource load as
effectively as they could be.
And while that's obviously a bummer for people like me,
I do think there's reason to be optimistic about the data.
The subparts of LCP that are hardest to improve
are represented here by the blue and green segments in the chart
above.
And the parts of LCP that are easiest to improve,
in my opinion, are represented by
the purple and yellow segments.
Given how much purple and yellow we see in this chart,
I'm hopeful that if we can do a better job of helping
developers discover where their bottlenecks are,
then we should be able to see some big improvements.
So I know I shared a lot of information so far,
and I haven't really given any advice yet.
So let's do that now.
Here is a step-by-step approach, a recipe, if you will,
for how to optimize LCP on any given page.
There are only four steps, and I put them in order
with the easiest and most high-impact optimizations
first.
Step one, eliminate unnecessary resource load delay.
The key point in this step is to ensure that the LCP resource is
prioritized so it can start loading immediately
after the HTML document is received.
And the best way to check on your pages,
whether the LCP resource is loading early enough,
is to compare its request start time
with the start time of the first subresource
loaded by the page.
In this case, the first subresource is a stylesheet,
and it starts loading quite a bit
before the LCP image resource starts.
So that's a signal that there's opportunity to improve.
To fix this, we could use preload or add priority hints
to the image tag, and then the browser
would know to start loading it earlier.
You also want to check to make sure that the LCP image isn't
being lazy loaded because that will result
in additional resource load delay
that you never want for your LCP image.
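As a sketch, a priority hint goes on the image tag via the `fetchpriority` attribute, and the LCP image should never be lazy-loaded (the src is illustrative):

```html
<!-- High-priority hint for the LCP image.
     Do NOT add loading="lazy" here: it delays the load. -->
<img src="hero.jpg" fetchpriority="high" alt="Hero image">
```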
In this case, preloading the image should do the trick,
and here's what that looks like after it's been implemented.
Now you can see that the image resource has started
loading at the same time as the first stylesheet which
is exactly what we want.
And so step one is pretty much done.
But before we move on to the next step,
I want to once again point out that reducing the resource load
delay here did not change LCP.
That is still blocked on the JavaScript code
as I mentioned earlier.
Step two is to eliminate unnecessary element render
delay.
In other words, we need to make sure
that as soon as the LCP resource finishes loading,
nothing else on the page is preventing it
from rendering right away.
That can be things like render-blocking stylesheets
and JavaScript files.
It could also be something like an A/B test runner
that is intentionally hiding content until it can figure out
what experiment the user is in.
In our example, one way we could reduce
render times is to optimize the size of the JavaScript
files we're loading.
Techniques like minification and tree
shaking can help with this and should
reduce the overall script download times.
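The minification and tree-shaking step can be done with any modern bundler; as one hedged example, assuming esbuild is available (file names are illustrative):

```sh
# Bundle and minify the script; esbuild tree-shakes unused exports
# by default when bundling. File names are illustrative.
npx esbuild main.js --bundle --minify --outfile=main.min.js
```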
So this is definitely an improvement,
but it's still not great.
The recipe said to ensure that nothing
is blocking rendering after the LCP resource finishes loading.
It doesn't just say to reduce the blocking time.
In this example, the JavaScript code
contains a framework that is client-side
rendering the application.
So if we update our framework to use server-side rendering
or to pre-render these pages as static files,
then the JavaScript will still load,
but it will no longer be a bottleneck
for rendering the LCP image.
Ah, that's much better.
Now the JavaScript code isn't blocking rendering at all.
So the LCP image can render as soon as it's downloaded.
Step three in this recipe is to reduce the resource load
time as much as possible, and you can do that
by following all of the general best practices
around optimizing images and web fonts.
Anything that you can do to reduce
the file size of the resource should reduce its load times.
You should also make sure that you're
setting the proper caching headers
and using a CDN so that you can serve those resources
from a location as geographically close
to your users as possible.
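As a sketch of the caching advice, a long-lived, immutable cache policy for a fingerprinted image asset served from a CDN might look like this (the header value is illustrative, not from the video):

```http
Cache-Control: public, max-age=31536000, immutable
```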
If you reduce load times in this example,
the results would look like this.
So we're almost there.
But as you can see, there's now a stylesheet
that's taking a bit longer to download than the LCP image
resource.
We were actually able to optimize the LCP resource
by so much that it's now smaller than the stylesheet which
means the stylesheet is blocking rendering
until it's done loading.
I recommend looking at techniques like critical CSS,
or you could find other ways to remove the unused styles
wherever possible.
Another option is to inline the CSS into the document,
but that can have negative performance
effects for repeat visitors.
My recommendation is not necessarily
to remove or inline the stylesheets
but to just reduce them so they're smaller in size
than your LCP resource.
That should help ensure that it's either not
blocking or rarely blocking which
is a pretty good compromise.
In this case, reducing the size of the stylesheet
slightly is enough to prevent it from blocking rendering
of the LCP image which is good enough
to move on to the final step.
Step four is to reduce your time to first byte.
This step is saved for last because it's usually
the step that developers have the least control over.
It's also one of the hardest to optimize.
That being said, having a good time to first byte
is critical because it affects everything that comes after it.
One of the best ways to improve your time to first byte
is to use a CDN.
Just like with optimizing resource load times,
it's important to get your servers as close to your users
as possible.
Here's what that looks like.
As I said before, any improvements to this part
will directly affect every other part that follows.
That's because nothing can happen on the front end
until the back end delivers that first byte of the response.
So to recap, here are the four steps in the recipe.
Step one, ensure the LCP resource starts
loading as early as possible.
Step two, ensure the LCP element can
render as soon as its resource finishes loading.
Step three, reduce the load time of the LCP resource
as much as you can without sacrificing quality.
And step four, deliver the initial HTML document
as fast as possible.
If you're able to follow these four steps in your pages,
then you should feel confident that you're
delivering an optimal loading experience to your users,
and you should see that reflected
in your real-world LCP scores.
So now I want to go through a real-life example
of actually applying this to a real web page.
So here I have a demo page that I
created to mimic a lot of the real-world issues I've
seen recently on sites that are trying to optimize their LCP.
The demo includes a photo slideshow viewer
that consists of a main image as well as several image
thumbnails.
This type of pattern is common across the web on everything
from news sites to e-commerce sites to general landing pages.
The demo also loads two web fonts as well as the CSS
framework, in this case, Bootstrap,
because these types of dependencies
are also common on the web, and I wanted the demo
to be reasonably realistic.
OK, let's head over to the code, and the first thing
I want to show you before we start optimizing is I
added a file called perf dot js that my demo is loading.
This file calculates the four LCP subpart timings
and logs them to the console.
It also uses the performance dot measure method from the user
timing API so that I can easily visualize
these timings in DevTools.
Let me open up DevTools to the Performance tab
and show you a quick trace so you can see what I mean.
So as this page loads, I want you
to notice that the text comes in first,
and then you can see a gap where the photos will go,
and then eventually all the photos
fade in together once they've all finished loading.
Now that the load is done, take a look
at the LCP subpart timings here in the timings track.
I've staggered these timings so that the TTFB and the resource
load time are on the top row, and then
the resource load delay and element render delay timings
are on the bottom row.
Remember that the goal is to be spending about 80%
of your time loading the main document and the LCP
resource, which are the two timings here on the top row,
and then less than 20% of your time in the load or render
delay portions which are here on the bottom row.
As you can see from this trace, we're spending most of our time
in the resource load delay portion which is a problem.
So let's fix that.
But before we switch to the Code Editor,
let's take a closer look at the network waterfall
to see what's happening within the resource load delay
portion of LCP.
As you can see here, we're loading a few fonts
and stylesheets.
Once that's done, we have some JavaScript files.
And once the main.js file finishes loading,
there's an API request for photos.json.
The LCP image doesn't start loading until this API
request finishes.
So if we want to reduce the resource load delay,
then we have to start loading the LCP image resource earlier,
and the two methods for doing that
are preload or priority hints.
But given that the load of this image
was not initiated from the HTML, it
was initiated from the JavaScript, then
that limits our options to really just preload
at this point.
To preload your LCP image, you can add a link rel preload tag
to the head of your HTML document,
and you set the href value to your image URL.
Also make sure to set the as attribute to image.
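A minimal sketch of that preload tag, with a placeholder image URL standing in for the demo's actual file:

```html
<head>
  <!-- Preload the LCP image so the browser discovers it from the HTML
       rather than waiting for JavaScript. URL is a placeholder. -->
  <link rel="preload" as="image" href="/photos/main-photo.jpg">
</head>
```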
Now the image resource is discoverable
from within the HTML source, so the browser
doesn't have to wait for the JavaScript and API request
to finish before it can start loading the LCP image.
Let's take a look at how that improves things.
So you can see that the LCP image now
started loading earlier, but it's still not
loading as early as the fonts and stylesheets which means
we have more work to do here.
The reason it's starting later than those other resources
is because Chrome is assigning it
a low priority, which it generally does for images
that aren't in the viewport.
And these images actually aren't in the viewport
because the JavaScript code that puts them there hasn't run yet.
One hack to get Chrome to load this earlier
is to move our link rel preload tag
above all the other link tags, but that doesn't really
address the priority issue.
A better, more semantic option is
to use the new priority hints API and just tell the browser
that this request should be high priority, which
we can do with the fetchpriority attribute.
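With that attribute added, the preload tag might look like this (again with a placeholder URL):

```html
<!-- fetchpriority="high" tells the browser to fetch this image at high
     priority instead of the default low priority it assigns to images
     it can't yet tell are in the viewport. -->
<link rel="preload" as="image" href="/photos/main-photo.jpg" fetchpriority="high">
```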
Now that we've added that, let's run the trace again
to see the difference.
As you can see here, the image now
starts loading at the same time as the other resources,
and it's assigned a high priority.
Also notice that the resource load delay segment is now tiny,
so our job here is done.
Let's move on to step two and address
this long element render delay.
If you remember from earlier, I said
that the image viewer component was built and rendered
in JavaScript and that it programmatically waits
for all images to finish loading before fading in.
When the network is fast, this is a nice UX touch.
But when the network is slow, it's
painful to wait this long before you see anything.
So let's fix that.
Most popular JavaScript frameworks
today support a feature called server-side rendering which
essentially just means that your client-side code can
be run on the server so that the HTML that's delivered already
has the markup in it when the browser receives it.
This is a great solution for pages like this one
because it means that the browser doesn't
have to wait for the JavaScript to finish loading before it
can render the images.
Now, explaining how to enable server-side
rendering in your application
is a bit outside of the scope of this demo,
but we can easily mimic the results
of what server-side rendering would give us
by just copying the client-side rendered code
and pasting it into our index.html file.
After all, it doesn't really matter what stack we're using.
From the browser's point of view, all that's relevant
here is that the image element is
contained within the HTML response received
from the server.
A static file is just as valid a way
to do that as a server-generated response.
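As a rough sketch of what gets pasted in — the file names and class names here are placeholders, not the demo's actual markup:

```html
<!-- Server-rendered (or hand-inlined) viewer markup: the browser can
     start rendering these images without waiting for any JavaScript. -->
<div class="photo-viewer">
  <img class="main-photo" src="/photos/photo-1.jpg" alt="Main photo">
  <div class="thumbnails">
    <img src="/photos/thumb-1.jpg" alt="">
    <img src="/photos/thumb-2.jpg" alt="">
  </div>
</div>
```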
Now when we reload this page, notice
how the images are being rendered as soon as they're
loaded rather than waiting for everything to load and render
all at once.
And if we look at the timing values here in DevTools,
you can see that the element render delay segment is now
also tiny.
So let's move on to step three and see
what we can do to reduce the load time of the LCP image.
If we switch over to the Elements panel
and take a closer look at this image,
you can see that it's a JPEG which
is not the most optimal format we could use here.
And it's also loading a file that's
1,600 pixels wide despite the fact
that the rendered size is only 372 pixels,
which even on a 2x DPR display is still more than twice
as big as it needs to be.
So let's address both of these issues.
We can convert these files to a more optimal image
format like AVIF or WebP, and we can also
create a few different sized versions of each of them.
Image conversion tools are going to be your best
friend during this step, and in my case,
I'm going to use the Squoosh CLI to do the conversion in bulk.
If we head over to the terminal, I
can resize and convert all of my source images
to AVIF with just this one command.
And if I repeat this process a few times for each
of the formats and sizes I want, then in just a few minutes,
I can have optimized versions of each of my images
in a variety of formats and sizes.
In this case, I created AVIF, WebP, and JPEG versions of each
at 1,600 pixels wide to target desktop and 800 pixels wide
to target mobile screen sizes.
I could create more sizes if I wanted to, but at some point,
there's going to be a trade off between file size savings
and cache hit rate on your CDN edge nodes.
So now that we have multiple versions of each image,
let's update our HTML to conditionally load
the best version given the user's browser's
capabilities and screen size.
We can use the picture element to do that.
The picture element allows us to list several source options,
and then the browser will automatically
pick the best option for us.
In this case, I'll list the source files
for the AVIF version first because those ended up
having the smallest file size.
Next I'll list the WebP source files,
and then finally I'll keep the image tag with the JPEG
since JPEG is supported by pretty much every browser.
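Putting those pieces together, the picture element might look something like this — the file names, widths, and sizes breakpoint are placeholders, not the demo's actual values:

```html
<picture>
  <!-- AVIF first: smallest files. The browser uses the first <source>
       whose type it supports. -->
  <source type="image/avif"
          srcset="/photos/main-800.avif 800w, /photos/main-1600.avif 1600w"
          sizes="(max-width: 600px) 100vw, 372px">
  <source type="image/webp"
          srcset="/photos/main-800.webp 800w, /photos/main-1600.webp 1600w"
          sizes="(max-width: 600px) 100vw, 372px">
  <!-- JPEG fallback for browsers that support neither format. -->
  <img src="/photos/main-1600.jpg" alt="Main photo">
</picture>
```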
With this change, let's reload our page
and take a look at the trace.
Notice how now our resource load delay got big again.
And if you look at the waterfall above,
you can see that the reason is because we didn't update
our link rel preload tag to match the image loaded
by our picture element.
So we're preloading a JPEG but then ultimately rendering
an AVIF image, which is not good because now we're
loading two images.
But this raises an interesting question.
Is it even possible to conditionally preload
the same image that will eventually
be used by the picture element?
Well, the answer is yes and no.
The link rel preload tag does take an imagesrcset
attribute, which has all the capabilities of the srcset
attribute on the image tag.
But it doesn't allow you to specify
multiple different formats each with their own separate source
sets like you can with a picture tag.
Fortunately, though, priority hints
provides a solution here as well.
Rather than use the link rel preload tag,
we can apply the fetch priority attribute directly
to the image tag, and then the browser will automatically
determine the right version of the image
to prioritize, like this.
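Concretely, the attribute goes on the inner img tag of the picture element (placeholder file names again):

```html
<picture>
  <source type="image/avif"
          srcset="/photos/main-800.avif 800w, /photos/main-1600.avif 1600w">
  <source type="image/webp"
          srcset="/photos/main-800.webp 800w, /photos/main-1600.webp 1600w">
  <!-- fetchpriority on the img applies to whichever source the browser
       selects, so no separate preload tag is needed. -->
  <img src="/photos/main-1600.jpg" alt="Main photo" fetchpriority="high">
</picture>
```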
So not only is it cleaner to do it this way, but in many cases,
developers may not have the ability
to modify the content of the head tag
just on specific pages.
But they probably do have the ability
to modify their image tag attributes.
So now when we reload and look at the trace,
you can see that the AVIF version is being prioritized,
and it's also the 800 pixel-wide file that's being loaded.
At this point, all the remaining optimizations
are going to be done on the server side.
We'll want to make sure that we're
applying the proper caching headers to our images
as well as to our HTML document responses
to ensure that they can be cached when appropriate.
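As a rough sketch, the caching headers for a fingerprinted image versus the HTML document might look something like this — the values are illustrative, not a recommendation for every site:

```text
# Fingerprinted image (filename changes whenever content changes),
# safe to cache long-term in both the browser and the CDN:
Cache-Control: public, max-age=31536000, immutable

# HTML document: cache briefly on the CDN, revalidate in the browser:
Cache-Control: public, max-age=0, s-maxage=300
```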
Note that we want to ensure they're cached
both by the browser, which
helps with repeat visits from the same user,
and on the CDN, which helps all users
in the same geographic region who may be requesting many
of those same resources.
In the interest of time, I'm not going
to cover those optimizations in this demo,
and honestly, any optimizations you're
making to your server, CDN, or to your network configurations
should be based on real user performance data,
not lab-based simulations like we've
been looking at here today.
So much of your server-side, CDN, and network performance
depends on factors like cache hit rate
and other things that are directly related
to user behavior which is why you need real user performance
data to effectively optimize them.
If you want to learn more about measuring and optimizing
performance with real user performance
data from the field, head on over to web.dev
where you'll find lots of resources on that topic.
So that's it for me today.
If you want to learn more, make sure to check out
web.dev/optimize-lcp.
It goes into a lot more detail on all of the techniques
that I covered today.
As always, thanks for watching, and happy optimizing.
[MUSIC PLAYING]