How to Identify and Fix Render-Blocking Resources
You might already use Chrome DevTools to identify render-blocking resources while building a page. Often, though, you're looking across multiple pages or multiple projects, or at pages already in production, hunting for performance wins across your entire site. Chrome DevTools is useful once you've already identified a page to optimize, while Sentry can help you find the pages and resources worth optimizing across all of your pages and projects. Let's look at how to use both tools.
We’ll start with Chrome DevTools. This is super simple to do: open the “Coverage” tab (Cmd/Ctrl + Shift + P, then type “Coverage”) to see how much of each resource you’re loading is actually being used. Click through the following demo to see how this is done. Start by clicking the record (filled-circle) icon and refreshing the page:
Click the same button again to stop recording and analyze the list. You’ll see all of the CSS and JS files being loaded, along with “Unused Bytes” and “Usage Visualization” columns that show how much of their contents is actually used to render the page. Looking at the list gives you a few ideas:
- You might notice “stray” files that get loaded but are 100% unused, which can be removed from the page entirely
- You might notice files like the first `about.2de…css` file being largely unused (88% unused). This suggests splitting that CSS file into two files based on usage, so the page only loads the smaller file containing the used CSS.
- You might notice files like `_vercel/insights/script.js` that aren’t critical for the page load, for which you can check whether they’re deferred or not.
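If you want the same coverage data without clicking through DevTools on every page, Puppeteer exposes the Coverage API that powers the Coverage tab. Here's a minimal sketch; the `usedPercent` and `auditCss` helper names are hypothetical, and it assumes Puppeteer is installed:

```javascript
// Share of a stylesheet's bytes covered by the reported used ranges.
function usedPercent(entry) {
  const used = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
  return (used / entry.text.length) * 100;
}

// Collect CSS coverage for one page: the same data the Coverage tab shows.
async function auditCss(url) {
  const puppeteer = require('puppeteer'); // assumes Puppeteer is installed
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.coverage.startCSSCoverage();
  await page.goto(url);
  const coverage = await page.coverage.stopCSSCoverage();
  await browser.close();
  return coverage.map((e) => ({ url: e.url, usedPercent: usedPercent(e) }));
}
```

Each entry in the result tells you how much of that stylesheet the page actually used, so you can loop this over a list of URLs instead of profiling each page by hand.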
Clicking the “reload” button starts the profiler and automatically reloads the page for you. When the page loads completely you’ll get the view from the screenshot. Look for the FP/FCP point in the timeline. The resources to the left of it are the ones to look at (don’t forget to expand the “Network” section to see the list). Now you have the list of files that block the first paint, and you know how much of each is actually used. FORENSIC🔬!
Lighthouse will also tell you if there are opportunities to improve on this. You’ll get a list of render-blocking resources, along with potential savings for each of them.
To figure out which pages across all of your web projects need attention, you probably don’t want to check every page of every project by hand. A tool like Sentry makes this process much easier. Sentry collects and processes the production data of your projects, and whenever it notices an asset that takes too long to load, it creates an Issue to let you know about it. Here’s how that looks:
Bonus: to see what would happen if a certain file isn’t loaded, you can create a “Network request blocking” pattern that prevents it from loading. Open the Command Menu (Cmd/Ctrl + Shift + P) and search for “Show Network request blocking”. Click the “+ Add pattern” button and type in a pattern that targets the specific file, using * as a wildcard. For example, to target the first “about” CSS file I’d type in `_astro/about.2de*.css` (the * matches the rest of the filename). Now if I refresh the page, that file won’t be loaded and I can see the impact it has on my page. Based on how the page changes, I can decide to either split the file or remove it from the page entirely.
CSS has a big impact on the performance of your website. The “how”, “when” and “what” of loading CSS matters. Here are three things you can do to improve your website performance.
In order to start “putting pixels on the screen”, the browser needs to create the DOM and CSSOM, which includes downloading and parsing the HTML and CSS. Inlining the critical CSS will save the browser an additional fetch, so it can start rendering the above-the-fold content as early as possible. Isolate the CSS needed to render the above-the-fold content, and preload the rest in a different file with the following line:
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>
In this case, the critical CSS arrives along with the HTML, so the browser can immediately create the DOM and CSSOM and start rendering. The rest of the CSS loads asynchronously without blocking the page rendering, which in turn improves your web vitals (especially the FCP).
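If your tooling doesn't do this for you, inlining can be a small build step. Here's a minimal sketch that swaps the blocking stylesheet link for an inline `<style>` block; the `inlineCriticalCss` helper and the file names are hypothetical:

```javascript
// Replace a blocking stylesheet link with the CSS inlined in a <style> tag.
// A hypothetical build-time helper, not a real library API.
function inlineCriticalCss(html, criticalCss) {
  return html.replace(
    '<link rel="stylesheet" href="critical.css">',
    `<style>${criticalCss}</style>`
  );
}

const page = '<head><link rel="stylesheet" href="critical.css"></head>';
const inlined = inlineCriticalCss(page, 'h1{margin:0}');
// inlined === '<head><style>h1{margin:0}</style></head>'
```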
You saw that the “Coverage” tab can tell you what percentage of a CSS file is used, and how much of it is not. Removing the unused part reduces the time the browser needs to download the CSS and create the CSSOM. If you don’t want to do that by hand, you can use a tool like PurgeCSS: a PostCSS plugin that scans your project and your CSS files and removes the selectors that don’t match any elements.
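If you go the PurgeCSS route, the setup might look something like this (a sketch; the package name and `content` option come from the PurgeCSS PostCSS plugin, but double-check them against its docs):

```javascript
// postcss.config.js — a minimal sketch of the PurgeCSS PostCSS plugin
const purgecss = require('@fullhuman/postcss-purgecss');

module.exports = {
  plugins: [
    purgecss({
      // scan these files for the selectors that are actually used
      content: ['./src/**/*.html', './src/**/*.js'],
    }),
  ],
};
```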
Most of the time your CSS files will contain media queries that probably won’t apply in a given user session, for example style blocks used only at mobile breakpoints, which don’t matter at all to your desktop users. There’s a way to conditionally load them only when the media query matches (the non-matching stylesheets are still downloaded, but at a low priority and without blocking rendering). Here’s how you can do that:
<link href="desktop.css" rel="stylesheet" media="screen and (min-width: 2550px)">
<link href="phone.css" rel="stylesheet" media="screen and (max-width: 640px)">
To help yourself with this, you can use a PostCSS plugin called “postcss-extract-media-query”, which automatically extracts all of the `@media` blocks and puts them into their own files based on the queries.
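A possible PostCSS setup for it (a sketch; the `output` and `queries` option names are taken from the plugin's README as I recall them, so treat them as assumptions):

```javascript
// postcss.config.js — a sketch; option names are assumptions from the
// postcss-extract-media-query README, verify against the plugin docs
module.exports = {
  plugins: [
    require('postcss-extract-media-query')({
      output: {
        path: './dist/css', // where the extracted files are written
      },
      queries: {
        // give extracted files friendly names per query
        'screen and (max-width: 640px)': 'phone',
      },
    }),
  ],
};
```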
The same critical/non-critical split applies to JavaScript: after extracting the critical JS, the remainder should be either deferred or loaded asynchronously, depending on the script itself. A `<script>` in the `<head>` tag that has neither the `async` nor the `defer` attribute blocks HTML parsing until it’s downloaded and executed. The browser fetches an `async` script without blocking the HTML parsing, but as soon as the script arrives it pauses parsing and executes it immediately. Deferred scripts are fetched the same way, without blocking, but they execute only after the HTML parsing is complete, as opposed to immediately.
<script async src="async.js"></script>
<script defer src="defer.js"></script>
Rule of thumb: if the rendering of the page depends on the script, use `async`; otherwise, always use `defer`. Frameworks like React, Next.js, Vue.js, etc. will append your website’s JS code as `async` scripts, since the rendering depends on them.
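One caveat worth knowing: scripts you inject from JavaScript don't block parsing, but by default they execute as soon as they arrive and in no guaranteed order; setting `async = false` restores in-order execution. A sketch (the `injectScript` helper is hypothetical, and the `doc` parameter stands in for the browser's `document`):

```javascript
// Inject a script tag without blocking HTML parsing. Dynamically
// injected scripts are async by default; async = false makes them
// execute in insertion order once they arrive.
function injectScript(doc, src) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = false;
  doc.head.appendChild(script);
  return script;
}

// In a browser you'd call: injectScript(document, '/analytics.js')
```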
It goes without saying, but minifying the CSS and JS files for production is simply a good idea. It shrinks down the file size, so browsers will spend less time downloading them.
Minifying your CSS can be done with PostCSS plugins like cssnano, or with this guide if you’re using Webpack. Tools like Vite do this by default. Minifying your JS is also straightforward. It depends on what tools you’re using to build your website, but:
- Webpack does it automatically
- Vite as well
- Parcel as well
- Esbuild is straightforward
- Rollup has a plugin
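For the cssnano route mentioned above, the PostCSS config is tiny (a sketch; assumes cssnano is installed):

```javascript
// postcss.config.js — minify CSS for production with cssnano
module.exports = {
  plugins: [
    require('cssnano')({ preset: 'default' }),
  ],
};
```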
After you do all of these things, you’ll want to make sure your changes actually improved your website’s performance. A tool like Sentry helps you monitor your website’s performance in production across all of your users and their different devices and connection speeds. The FCP you see in your Chrome DevTools is not the FCP that a user with a low-end device and/or a slower connection experiences.
Furthermore, getting alerted whenever a page’s performance regresses is key to ensuring a good user experience without having to manually monitor your app. You can create your own custom Alerts that notify you via email, Slack, or Discord whenever a certain condition occurs: the LCP/FID/CLS/FCP/TTFB score exceeds a threshold, the Apdex drops, or the number of users experiencing an error increases. Here’s a quick tutorial on how to define an LCP alert: