Fixing a slow site iteratively

Site performance is arguably the most important metric. The better the performance, the better the chance that users stay on a page, read content, make purchases, or do just about whatever they came to do. A 2017 Akamai study says as much: it found that even a 100ms delay in page load can decrease conversions by 7%, and that a retailer loses 1% of its sales for every 100ms it takes for the site to load, which, at the time of the study, was equivalent to $1.6 billion if the site slowed down by just one second. Google’s industry benchmarks from 2018 also provide a striking breakdown of how each second of loading affects bounce rates.

Source: Google/SOASTA Research, 2018.

On the flip side, Firefox made their webpages load 2.2 seconds faster on average and it drove 60 million more Firefox downloads per year. Speed is also something Google considers when ranking your website placement on mobile. Having a slow site might leave you on page 452 of search results, regardless of any other metric.

With all of this in mind, I thought improving the speed of my own version of a slow site would be a fun exercise. The code for the site is available on GitHub for reference.

This is a very basic site made with simple HTML, CSS, and JavaScript. I’ve intentionally tried to keep this as simple as possible, meaning the reason it is slow has nothing to do with the complexity of the site itself, or because of some framework it uses. About the most complex part are some social media buttons for people to share the page.

Here’s the thing: performance is more than a one-off task. It’s inherently tied to everything we build and develop. So, while it’s tempting to solve everything in one fell swoop, the best approach to improving performance might be an iterative one. Determine if there’s any low-hanging fruit, and figure out what might be bigger or long-term efforts. In other words, incremental improvements are a great way to score performance wins. Again, every millisecond counts.

In that spirit, what we’re looking at in this article is focused more on the incremental wins and less on providing an exhaustive list or checklist of performance strategies.

Lighthouse

We’re going to be working with Lighthouse. Many of you may already be super familiar with it. It’s even been covered a bunch right here on CSS-Tricks. It’s a Google tool that audits things like performance, accessibility, SEO, and best practices. I’m going to audit the performance of my slow site before and after the things we tackle in this article. The Lighthouse reports can be accessed directly in Chrome’s DevTools.

Go ahead, briefly look at the things that Lighthouse says are wrong with the website. It’s good to know what needs to be solved before diving right in.

On the bright side, we’re one-third of the way to our goal!

We can totally fix this, so let’s get started!

Improvement #1: Redirects

Before we do anything else, let’s see what happens when we first hit the website. It gets redirected. The site used to be at one URL and now it lives at another. That means any link that references the old URL is going to redirect to the new URL.

Redirects are often pretty light in terms of the latency that they add to a website, but they are an easy first thing to check, and they can generally be removed with little effort.

We can try to remove them by updating wherever we use the previous URL of the site, and point it to the updated URL so users are taken there directly instead of redirected. Using a network request inspector, I’m going to see if there’s anything we can remove via the Network panel in DevTools. We could also use a tool, like Postman if we need to, but we’ll limit our work to DevTools as much as possible for the sake of simplicity.

First, let’s see if there are any HTTP or HTML redirects. I like using Fiddler, and when I inspect the network requests I see that there are indeed some old URLs and redirects floating around.

It looks like the first request we hit is https://anonrobot.github.io/redirect-to-slow-site/ before it HTML redirects to https://anonrobot.github.io/slow-site/. We can repoint all our redirect-to-slow-site URLs to the updated URL. In DevTools, the Network inspector helps us see what the first webpage is doing too. From my view in Fiddler it looks like this:

This tells us that the site is using an HTML redirect to the next site. I’m going to update my referenced URL to the new site to help decrease the latency that adds drag to the initial page load.
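
For reference, an HTML redirect like this is typically just a meta refresh in the old page’s <head>. Here’s a minimal sketch of what the old redirect-to-slow-site page presumably contains (the exact markup isn’t shown in my trace):

<!-- Hypothetical contents of the old page at /redirect-to-slow-site/ -->
<head>
  <meta http-equiv="refresh" content="0; url=https://anonrobot.github.io/slow-site/">
</head>

The fix isn’t to speed up that page; it’s to stop linking to it. Pointing every reference straight at https://anonrobot.github.io/slow-site/ removes the extra round trip entirely.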

Improvement #2: The Critical Render Path

Next, I’m going to profile the site with the Performance panel in DevTools. I am most interested in unblocking the site from rendering content as fast as it can. This is the process of turning HTML, CSS and JavaScript into a fully fleshed out, interactive website.

It begins with retrieving the HTML from the server and converting this into the Document Object Model (DOM). We’ll run any inline JavaScript as we see it, or download it if it’s an external asset as we go line-by-line parsing the HTML. We’ll also build the CSS into the CSS Object Model (CSSOM). The CSSOM and the DOM combine to make the render tree. From there, we run the layout which places everything on the screen in the correct place before finally running paint.

This process can be “blocked” if it has to wait for resources to load before it runs. That’s what we call the Critical Render Path, and the things that block the path are critical resources.

The most common critical resources are:

  • A <script> tag that is in the <head> and doesn’t have an async or defer attribute (or type="module", which defers by default).
  • A <link rel="stylesheet"> that doesn’t have the disabled attribute (which tells the browser not to download the CSS at all) and doesn’t have a media attribute whose query fails to match the user’s device.

There’s a few more types of resources that might block the Critical Render Path, like fonts, but the two above are by far the most common. These resources block rendering because the browser thinks the page is “unfinished” and has no idea what resources it needs or has. For all the browser knows, the site could download something that expects the browser to do even more work, like styling or color changes; hence, the site is incomplete to the browser, so it assumes the worst and blocks rendering.

An example CSS file that wouldn’t block rendering would be:

<link href="printing.css" rel="stylesheet" media="print">

The "media="print" attribute only downloads the stylesheet when the user prints the webpage (because perhaps you want to style things differently in print), meaning the file itself isn’t blocking anything from rendering before it.

As Chris likes to say, a front-end developer is aware. And being aware of what a page needs to download before rendering begins is vitally important for improving performance audit scores.

Improvement #3: Unblock parsing

Blocking the render path is one thing we can immediately speed up, and we can also block parsing if we aren’t careful with our JavaScript. Parsing is what makes HTML elements part of the DOM, and whenever we encounter JavaScript that needs to run now, we block that HTML parsing from happening.

Some of the JavaScript in my slow webpage doesn’t need to block parsing. In other words, we can download the scripts asynchronously and continue parsing the HTML into the DOM without delay.

The async attribute allows the browser to download the JavaScript asset asynchronously and run it as soon as it arrives, without pausing HTML parsing. The defer attribute also downloads the script without blocking, but only runs it once the document has finished parsing.
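
Here’s a minimal sketch of the difference (the file names are placeholders, not the actual files from my site):

<!-- Blocks parsing: the parser stops until this file is downloaded and executed -->
<script src="blocking.js"></script>

<!-- async: downloads in parallel and runs as soon as it arrives (order not guaranteed) -->
<script async src="social-buttons.js"></script>

<!-- defer: downloads in parallel and runs, in order, after the document has been parsed -->
<script defer src="app.js"></script>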

There’s a trade-off here between inlining JavaScript (so running it doesn’t require a network request) versus placing it into its own JavaScript file (for modularity and code reuse). Feel free to make your own judgement call here, as the best route is going to depend on the use case. The actual performance of applying CSS and JavaScript to a webpage will be the same whether it’s an external asset or inlined, once it has arrived. The only thing we are removing when we inline is the network request time to get the external assets (which sometimes makes a big difference).

The main thing we’re aiming for is to do as little as we can. We want to defer loading assets and make those assets as small as possible at the same time. All of this will translate into a better performance outcome.

My slow site is chaining multiple critical requests, where the browser has to read the next line of HTML, wait, then read the next one to check for another asset, then wait. The size of the assets, when they get downloaded, and whether they block are all going to play hugely into how fast our webpage can load.

I approached this by profiling the site in the DevTools Performance panel, which simply records the way the site loads over time. I briefly scanned my HTML and what it was downloading, then added the async attribute to any external <script> that was blocking things (like the social media script, which isn’t necessary to load before rendering).

Profiling the slow site reveals what assets are loading, how big they are, where they are located, and how much time it takes to load them.

It’s interesting that Chrome has a browser limit where it can only deal with six in-flight HTTP connections per domain name (over HTTP/1.1), and will wait for an asset to return before requesting another once those six are in use. That makes requesting multiple critical assets even worse for HTML parsing. Allowing the browser to continue parsing will speed up the time it takes to show something to the user, and improve our performance audit.

Improvement #4: Reduce the payload size

The total size of a site is a huge determining factor in how fast it will load. According to web.dev, sites should aim to keep their total payload below 1,600 KB in order to become interactive in under 10 seconds. Large payloads are strongly correlated with long load times. You can even consider a large payload an expense to the end user, as large downloads may require larger data plans that cost more money.

At this exact point in time, my slow site is a whopping 9,701 KB — more than six times the ideal size. Let’s trim that down.

Identifying unused dependencies

At the beginning of my development, I thought I might need certain assets or frameworks. I downloaded them onto my page and now can’t even remember which ones are actually being used. I definitely have some assets that are doing nothing but wasting time and space.

Using the Network inspector in DevTools (or a tool you feel comfortable with), we can see some things that can definitely be removed from the site without changing its underlying behavior. I found a lot of value in the Coverage panel in DevTools because it will show just how much code is being used after everything’s downloaded.

As we’ve already discussed, there is always a fine balance when it comes to inlining CSS and JavaScript versus using an external asset. But, at this very moment, it certainly appears that the site is downloading far more than it really needs.

Another quick way to trim things down is to find out whether any of the assets the site is trying to load return a 404. Those requests can definitely be removed without any negative impact to the site since they aren’t loading anyway. Here’s what Fiddler shows me:

Looking again at the Coverage report, we know there are things that are downloaded but have a significant amount of unused code still making its way to the page. In other words, these assets are doing something, but are also ready to do things we don’t even need them to do. That includes React, jQuery and Vue, so those can be removed from my slow site with no real impact.

Why so many JavaScript libraries? Well, we know there are real-life scenarios where we reach for something because it meets our requirements; but then those requirements change and we need to reach for something else. Again, we’ve got to be aware as front-end developers, and continually keeping an eye on what resources are relevant to the site is part of that overall awareness.

Compressing, minifying and caching assets

Just because we need to serve an asset doesn’t mean we have to serve it as its full size, or even re-serve that asset the next time the user visits the site. We can compress our assets, minify our styles and scripts, and cache things responsibly so we’re serving what the user needs in the most efficient way possible.

  • Compressing means we optimize a file, such as an image, to its smallest size without impacting its visual quality. For example, gzip is a common compression algorithm that makes assets smaller.
  • Minification improves the size of text-based assets, like external script files, by removing cruft from the code, like comments and whitespace, for the sake of sending fewer bytes over the wire.
  • Caching allows us to store an asset in the browser’s memory for an amount of time so that it is immediately available for users on subsequent page loads. So, load it once, enjoy it many times.

Let’s look at three different types of assets and how to crunch them with these tactics.

Text-based assets

These include text files, like HTML, CSS and JavaScript. We want to do everything in our power to make these as lightweight as possible, so we compress, minify, and cache them where possible.

At a very high level, gzip works by finding common, repeated parts in the content, storing these sequences once, then removing them from the source text. It keeps a dictionary-like look-up so it can quickly reference the saved pieces and place them back where they belong when the file is decompressed (gunzipped). Check out this gzipped example of a file containing poetry.

The text in-between the curly braces is text that has been matched multiple times and is removed from the source text by gzip to make the file smaller. There are still unique parts of the string that gzip is unable to abstract to its dictionary, but things like { a }, for example, can be removed from wherever it appears and can be added back once it is received. (View the full example)

We’re doing this to make any text-based downloads as small as we can. We are already making use of gzip. I checked using this tool by GIDNetwork. It shows that the slow site’s content is 59.9% compressed. That probably means there are more opportunities to make things even smaller.
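
If you’d rather verify this from the command line than with an online tool, requesting the page with an Accept-Encoding header and checking the response is a quick sanity check. A rough sketch (the exact headers you get back will depend on the host):

curl -sI -H "Accept-Encoding: gzip, br" https://anonrobot.github.io/slow-site/ | grep -i "content-encoding"

If the output includes content-encoding: gzip (or br for Brotli), compression is being applied to that response.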

I decided to consolidate the multiple CSS files into one single file called styles.css. This way, we’re limiting the number of network requests necessary. Besides, if we crack open the three files, each one contained such a tiny amount of CSS that the three network requests are simply unjustified.

And, while doing this, it gave me the opportunity to remove unnecessary CSS selectors that weren’t being applied in the DOM anywhere, again reducing the number of bytes sent to the user.
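
In markup terms, the consolidation is as simple as swapping three <link> tags for one (the original file names here are placeholders; only styles.css is the real one):

<!-- Before: three separate render-blocking requests -->
<link rel="stylesheet" href="base.css">
<link rel="stylesheet" href="layout.css">
<link rel="stylesheet" href="social.css">

<!-- After: one combined, pruned stylesheet -->
<link rel="stylesheet" href="styles.css">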

Ilya Grigorik wrote an excellent article with strategies for compressing text-based assets.

Images

We are also able to optimize the images on the slow site. As reports consistently show, images are the most common asset request. In fact, the median data transfer for images is 948.1 KB for desktops and 902 KB for mobile devices from 2016 to 2021. That’s already more than half of the ideal 1,600 KB size for an entire page load.

My slow site doesn’t serve that many images, but the images it does serve can be smaller. I ran the images through an online tool called Squoosh, and achieved a 40% savings (18.6 KB to 11.2 KB). That’s a win! Of course, this is something you can do either before upload using a desktop application, like ImageOptim, or even as part of your build process.

I couldn’t see any visual differences between the original images and the optimized versions (which is great!) and I was even able to reduce the size further by resizing the actual file, reducing the quality of the image, and even changing the color palette. But those are things I did in image editing software. Ideally, that’s something you or a designer would do when initially making the assets.

Caching

We’ve touched on minification and compression and what we can do to try and use these to our advantage. The final thing we can check is caching.

I have been requesting the slow site over and over and, so far, I can see it always looks like it’s requested fresh every time without any caching whatsoever. Looking through the HTML, I saw caching was being disabled here:

<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">

I removed that line, so browser caching should now be able to take place, helping serve the content even faster.
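
With that meta tag gone, caching is governed by the Cache-Control response header the server sends with each asset. As a rough illustration, a common pattern for static assets that rarely change looks like this (the values are illustrative, not necessarily what GitHub Pages sends):

Cache-Control: public, max-age=31536000, immutable

That tells the browser it can hold onto the file for up to a year without re-requesting it, which is why long cache lifetimes pair so well with fingerprinted file names.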

Improvement #5: Use a CDN

Another big improvement we can make on any website is serving as much as you can from a Content Delivery Network (CDN). David Attard has a super thorough piece on how to add and leverage a CDN. The traditional path of delivering content is to hit the server, request data, and wait for it to return. But if the user is requesting data from way across the other side of the world from where your data is served, well, that adds time. Making the bytes travel further in the response from the server can add up to large losses of speed, even if everything else is lightning quick.

A CDN is a set of distributed servers around the world that are capable of intelligently delivering content closer to the user because there are multiple locations it can choose to serve it from.

Source: “Adding and Leveraging a CDN on Your Website”

We discussed earlier how I was making the user download jQuery even though the site doesn’t actually make use of the downloaded code, and we removed it. One easy fix here, if I did actually need jQuery, is to request the asset from a CDN. Why?

  • A user may have already downloaded the asset from visiting another site, so the browser can serve a cached response from the CDN. 75.49% of the top one million sites still use jQuery, after all.
  • It doesn’t have to travel as far from the user requesting the data.

We can do something as simple as grabbing jQuery from Google’s CDN, which they make available for anyone to reference in their own sites:

<head>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
</head>

That serves jQuery significantly faster than a standard request from my server, that’s for sure.

Are things better?

If you’ve been implementing along with me so far, or just reading along, it’s time to re-profile and see whether the changes we’ve made so far have improved things.

Recall where we started:


After our changes:


I hope this has been helpful and encourages you to search for incremental performance wins on your own site. Optimally requesting assets, deferring some assets from loading, and reducing the overall size of the site will get a functional, fully interactive site in front of the user as fast as possible.

Want to keep the conversation going? I share my writing on Twitter if you want to see more or connect.



Continuous Performance Analysis with Lighthouse CI and GitHub Actions

Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.

If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.

All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing in the continuous integration process so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!

It is a suite of tools that help you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how they have changed over time.

In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer to not to use GitHub Actions.

Setting up the Lighthouse CI locally

In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:

$ npm install -g @lhci/cli

Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.

$ lhci --help
lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file

At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environmental variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.

Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environmental variables instead.

touch lighthouserc.js

Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.

If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. This URL should be servable by the custom server that you specified.

module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine if the server has started. If it does not detect this string after 10 seconds, it assumes the server has started and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.

You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.
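
For example, if you run your own Lighthouse CI server (covered later in this article), the upload target can point at it instead of temporary public storage. Here’s a rough sketch with a placeholder URL and token:

// lighthouserc.js
module.exports = {
  ci: {
    upload: {
      target: 'lhci',
      serverBaseUrl: 'https://lhci.example.com', // your own Lighthouse CI server
      token: 'YOUR_BUILD_TOKEN', // issued when the project is created on the server
    },
  },
};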

At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.

lhci autorun

The output should be similar to what is shown below:

✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.

The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub action. You can open the Lighthouse report link in your browser to view the median test results for each URL.

Configuring assertions

Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not match certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:

// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};

The preset option is a quick way to configure Lighthouse assertions. There are three options:

  • lighthouse:all: Asserts that every audit received a perfect score
  • lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
  • lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits

You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that failure in the former will result in a non-zero exit code while the latter will not. The result of any audit in Lighthouse can be asserted so there’s so much you can do here. Be sure to consult the documentation to discover all of the available options.
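
For instance, individual audits and metrics can be asserted alongside the category scores. Here’s a rough sketch (the audit IDs and thresholds are illustrative, so double-check them against the assertion docs):

// lighthouserc.js (excerpt)
module.exports = {
  ci: {
    assert: {
      assertions: {
        // Warn if First Contentful Paint takes longer than two seconds
        'first-contentful-paint': ['warn', { maxNumericValue: 2000 }],
        // Fail the build if any images are missing explicit dimensions
        'unsized-images': 'error',
        // Turn off an audit that isn't relevant to this project
        'uses-rel-preconnect': 'off',
      },
    },
  },
};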

You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
      url: ['/'],
    },
    assert: {
      budgetFile: './budget.json',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
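
For completeness, here’s a rough sketch of what a budget.json might contain. Resource sizes are in kilobytes and timings in milliseconds, and the numbers themselves are placeholders:

[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1600 }
    ]
  }
]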

Running Lighthouse CI with GitHub Actions

A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.

To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.

mkdir -p .github/workflows

Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.

cd .github/workflows
touch lighthouse-ci.yaml

The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I’ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:

# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true

      - name: Build site
        run: hugo

      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x
      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun

The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:

  • Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secret. Without this token, this step will fail if it encounters a private repo.
  • Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
  • Build the site to a public folder through the hugo command.
  • Install and configure Node.js on the virtual machine through the setup-node action
  • Install the Lighthouse CI tool and execute the lhci autorun command.

Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.

If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.

Setting up GitHub status checks

At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.

You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.

The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.

Historical reporting and comparisons through the Lighthouse CI Server

Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.

To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.

Conclusion

When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that are representative of the type of content on the site, and any other important pages.

Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!



How to Load Fonts in a Way That Fights FOUT and Makes Lighthouse Happy

A web font workflow is simple, right? Choose a few nice-looking web-ready fonts, get the HTML or CSS code snippet, plop it in the project, and check if they display properly. People do this with Google Fonts a zillion times a day, dropping its <link> tag into the <head>.

Let’s see what Lighthouse has to say about this workflow.

Stylesheets in the <head> have been flagged by Lighthouse as render-blocking resources and they add a one-second delay to render? Not great.

We’ve done everything by the book, documentation, and HTML standards, so why is Lighthouse telling us everything is wrong?

Let’s talk about eliminating font stylesheets as a render-blocking resource, and walk through an optimal setup that not only makes Lighthouse happy, but also overcomes the dreaded flash of unstyled text (FOUT) that usually comes with loading fonts. We’ll do all that with vanilla HTML, CSS, and JavaScript, so it can be applied to any tech stack. As a bonus, we’ll also look at a Gatsby implementation as well as a plugin that I’ve developed as a simple drop-in solution for it.

What we mean by “render-blocking” fonts

When the browser loads a website, it creates a render tree from the DOM, i.e. an object model for HTML, and the CSSOM, i.e. a map of all CSS selectors. A render tree is a part of the critical render path that represents the steps the browser goes through to render a page. For the browser to render a page, it needs to load and parse the HTML document and every CSS file that is linked in that HTML.

Here’s a fairly typical font stylesheet pulled directly from Google Fonts:

@font-face {
  font-family: 'Merriweather';
  src: local('Merriweather'), url(https://fonts.gstatic.com/...) format('woff2');
}

You might be thinking that font stylesheets are tiny in terms of file size because they usually contain, at most, a few @font-face definitions. They shouldn’t have any noticeable effect on rendering, right?

Let’s say we’re loading a CSS font file from an external CDN. When our website loads, the browser needs to wait for that file to load from the CDN and be included in the render tree. Not only that, but it also needs to wait for the font file that is referenced as a URL value in the CSS @font-face definition to be requested and loaded.

Bottom line: The font file becomes a part of the critical render path and it increases the page render delay.

Critical render path delay when loading font stylesheet and font file 
(Credit: web.dev under Creative Commons Attribution 4.0 License)

What is the most vital part of any website to the average user? It’s the content, of course. That is why content needs to be displayed to the user as soon as possible in a website loading process. To achieve that, the critical render path needs to be reduced to critical resources (e.g. HTML and critical CSS), with everything else loaded after the page has been rendered, fonts included.

If a user is browsing an unoptimized website on a slow, unreliable connection, they will get annoyed sitting on a blank screen that’s waiting for font files and other critical resources to finish loading. The result? Unless that user is super patient, chances are they’ll just give up and close the window, thinking that the page is not loading at all.

However, if non-critical resources are deferred and the content is displayed as soon as possible, the user will be able to browse the website and ignore any missing presentational styles (like fonts) — that is, if they don’t get in the way of the content.

Optimized websites render content with critical CSS as soon as possible with non-critical resources deferred. A font switch occurs between 0.5s and 1.0s on the second timeline, indicating the time when presentational styles start rendering.

The optimal way to load fonts

There’s no point in reinventing the wheel here. Harry Roberts has already done a great job describing an optimal way to load web fonts. He goes into great detail with thorough research and data from Google Fonts, boiling it all down into a four-step process:

  • Preconnect to the font file origin.
  • Preload the font stylesheet asynchronously with low priority.
  • Asynchronously load the font stylesheet and font file after the content has been rendered with JavaScript.
  • Provide a fallback font for users with JavaScript turned off.

Let’s implement our font using Harry’s approach:

<!-- https://fonts.gstatic.com is the font file origin -->
<!-- It may not have the same origin as the CSS file (https://fonts.googleapis.com) -->
<link rel="preconnect"
      href="https://fonts.gstatic.com"
      crossorigin />


<!-- We use the full link to the CSS file in the rest of the tags -->
<link rel="preload"
      as="style"
      href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap" />


<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap"
      media="print" onload="this.media='all'" />


<noscript>
  <link rel="stylesheet"
        href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap" />
</noscript>

Notice the media="print" on the font stylesheet link. Browsers automatically give print stylesheets a low priority and exclude them as a part of the critical render path. After the print stylesheet has been loaded, an onload event is fired, the media is switched to a default all value, and the font is applied to all media types (screen, print, and speech).

Lighthouse is happy with this approach!

It’s important to note that self-hosting the fonts might also help fix render-blocking issues, but that is not always an option. Using a CDN, for example, might be unavoidable. In some cases, it’s beneficial to let a CDN do the heavy lifting when it comes to serving static resources.

Even though we’re now loading the font stylesheet and font files in the optimal non-render-blocking way, we’ve introduced a minor UX issue…

Flash of unstyled text (FOUT)

This is what we call FOUT:

Why does that happen? To eliminate a render-blocking resource, we have to load it after the page content has rendered (i.e. displayed on the screen). In the case of a low-priority font stylesheet that is loaded asynchronously after critical resources, the user can see the moment the font changes from the fallback font to the downloaded font. Not only that, the page layout might shift, resulting in some elements looking broken until the web font loads.

The best way to deal with FOUT is to make the transition between the fallback font and web font smooth. To achieve that we need to:

  • Choose a suitable fallback system font that matches the asynchronously loaded font as closely as possible.
  • Adjust the font styles (font-size, line-height, letter-spacing, etc.) of the fallback font to match the characteristics of the asynchronously loaded font, again, as closely as possible.
  • Clear the styles for the fallback font once the asynchronously loaded font file has rendered, and apply the styles intended for the newly loaded font.

We can use Font Style Matcher to find optimal fallback system fonts and configure them for any given web font we plan to use. Once we have styles for both the fallback font and web font ready, we can move on to the next step.

Merriweather is the font and Georgia is the fallback system font in this example. Once the Merriweather styles are applied, there should be minimal layout shifting and the switch between fonts should be less noticeable.
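
Here’s a minimal sketch of that tuning, using a hypothetical .fonts-loaded class on <body> to stand in for the “web font has loaded” state (we’ll wire up a proper class-based switch with JavaScript below). The values are purely illustrative and would come from comparing the two fonts side by side:

/* Fallback styles while Merriweather is still loading */
body {
  font-family: Georgia, serif;
  font-size: 16px;
  line-height: 1.6;
  letter-spacing: 0.05px;
  word-spacing: 0.8px;
}

/* Styles applied once Merriweather is available */
body.fonts-loaded {
  font-family: "Merriweather", Georgia, serif;
  letter-spacing: normal;
  word-spacing: normal;
}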

We can use the CSS font loading API to detect when our web font has loaded. Why that? Typekit’s web font loader was once one of the more popular ways to do it and, while it’s tempting to continue using it or similar libraries, we need to consider the following:

  • It hasn’t been updated for over four years, meaning that if anything breaks on the plugin side or new features are required, it’s likely no one will implement and maintain them.
  • We are already handling async loading efficiently using Harry Roberts’ snippet and we don’t need to rely on JavaScript to load the font.

If you ask me, using a Typekit-like library is just too much JavaScript for a simple task like this. I want to avoid using any third-party libraries and dependencies, so let’s implement the solution ourselves and try to make it as simple and straightforward as possible, without over-engineering it.

Although the CSS Font Loading API is considered experimental technology, it has roughly 95% browser support. But regardless, we should provide a fallback if the API changes or is deprecated in the future. The risk of losing a font isn’t worth the trouble.

The CSS Font Loading API can be used to load fonts dynamically and asynchronously. We’ve already decided not to rely on JavaScript for something simple as font loading and we’ve solved it in an optimal way using plain HTML with preload and preconnect. We will use a single function from the API that will help us check if the font is loaded and available.

document.fonts.check("12px 'Merriweather'");

The check() function returns true or false depending on whether the font specified in the function argument is available or not. The font size parameter value is not important for our use case and it can be set to any value. Still, we need to make sure that:

  • We have at least one HTML element on the page that contains at least one character with the web font declaration applied to it. In the examples, we will use a &nbsp;, but any character can do the job as long as it’s hidden (without using display: none;) from both sighted and non-sighted users. The API tracks DOM elements that have font styles applied to them. If there are no matching elements on a page, then the API isn’t able to determine if the font has loaded or not.
  • The specified font in the check() function argument is exactly what the font is called in the CSS.

I’ve implemented the font loading listener using CSS font loading API in the following demo. For example purposes, loading fonts and the listener for it are initiated by clicking the button to simulate a page load so you can see the change occur. On regular projects, this should happen soon after the website has loaded and rendered.

Isn’t that awesome? It took us less than 30 lines of JavaScript to implement a simple font loading listener, thanks to a well-supported function from the CSS Font Loading API. We’ve also handled two possible edge cases in the process:

  • Something goes wrong with the API, or some error occurs preventing the web font from loading.
  • The user is browsing the website with JavaScript turned off.

Now that we have a way to detect when the font file has finished loading, we need to add styles to our fallback font to match the web font and see how to handle FOUT more effectively.

The transition between the fallback font and web font looks smooth and we’ve managed to achieve a much less noticeable FOUT! On a complex site, this change would result in fewer layout shifts, and elements that depend on the content size wouldn’t look broken or out of place.

What’s happening under the hood

Let’s take a closer look at the code from the previous example, starting with the HTML. We have the snippet in the <head> element, allowing us to load the font asynchronously with preload, preconnect, and fallback.

<body class="no-js">
  <!-- ... Website content ... -->
  <div aria-hidden="true" class="hidden" style="font-family: '[web-font-name]'">
      <!-- There is a non-breaking space here -->
  </div>
  <script> 
    document.getElementsByTagName("body")[0].classList.remove("no-js");
  </script>
</body>

Notice that we have a hardcoded .no-js class on the <body> element, which is removed the moment the HTML document has finished loading. This applies webfont styles for users with JavaScript disabled.

Secondly, remember how the CSS Font Loading API requires at least one HTML element with a single character to track the font and apply its styles? We added a <div> with a &nbsp; character that we are hiding from both sighted and non-sighted users in an accessible way, since we cannot use display: none;. This element has an inlined font-family: 'Merriweather' style. This allows us to smoothly switch between the fallback styles and loaded font styles, and make sure that all font files are properly tracked, regardless of whether they are used on the page or not.

Note that the &nbsp; character is not showing up in the code snippet but it is there!

The CSS is the most straightforward part. We can utilize the CSS classes that are hardcoded in the HTML or applied conditionally with JavaScript to handle various font loading states.

body:not(.wf-merriweather--loaded):not(.no-js) {
  font-family: [fallback-system-font];
  /* Fallback font styles */
}


.wf-merriweather--loaded,
.no-js {
  font-family: "[web-font-name]";
  /* Webfont styles */
}


/* Accessible hiding */
.hidden {
  position: absolute; 
  overflow: hidden; 
  clip: rect(0 0 0 0); 
  height: 1px;
  width: 1px; 
  margin: -1px;
  padding: 0;
  border: 0; 
}

JavaScript is where the magic happens. As described previously, we are checking if the font has been loaded by using the CSS Font Loading API’s check() function. Again, the font size parameter can be any value (in pixels); it’s the font family value that needs to match the name of the font that we’re loading.

var interval = null;


function fontLoadListener() {
  var hasLoaded = false;


  try {
    hasLoaded = document.fonts.check('12px "[web-font-name]"')
  } catch(error) {
    console.info("CSS font loading API error", error);
    fontLoadedSuccess();
    return;
  }
  
  if(hasLoaded) {
    fontLoadedSuccess();
  }
}


function fontLoadedSuccess() {
  if(interval) {
    clearInterval(interval);
  }
  // Add the class that the CSS uses to switch from fallback styles to web font styles
  document.body.classList.add("wf-merriweather--loaded");
}


interval = setInterval(fontLoadListener, 500);

What’s happening here is we’re setting up our listener with fontLoadListener(), which runs at regular intervals. This function should be as simple as possible so it runs efficiently within the interval. We’re using a try-catch block to handle any errors, so that web font styles still get applied if a JavaScript error occurs and the user doesn’t end up with broken styling.

Next, we’re accounting for when the font successfully loads with fontLoadedSuccess(). We need to make sure to first clear the interval so the check doesn’t unnecessarily run after it.  Here we can add class names that we need in order to apply the web font styles.

And, finally, we are initiating the interval. In this example, we’ve set it up to 500ms, so the function runs twice per second.

Here’s a Gatsby implementation

Gatsby does a few things that are different compared to vanilla web development (and even the regular create-react-app tech stack) which makes implementing what we’ve covered here a bit tricky.

To make this easy, we’ll develop a local Gatsby plugin, so all code that is relevant to our font loader is located at plugins/gatsby-font-loader in the example below.

Our font loader code and config will be split across the three main Gatsby files:

  • Plugin configuration (gatsby-config.js): We’ll include the local plugin in our project, list all local and external fonts and their properties (including the font name, and the CSS file URL), and include all preconnect URLs.
  • Server-side code (gatsby-ssr.js): We’ll use the config to generate and include preload and preconnect tags in the HTML <head> using setHeadComponents function from Gatsby’s API. Then, we’ll generate the HTML snippets that hide the font and include them in HTML using setPostBodyComponents.
  • Client-side code (gatsby-browser.js): Since this code runs after the page has loaded and after React starts up, it is already asynchronous. That means we can inject the font stylesheet links using react-helmet. We’ll also start a font loading listener to deal with FOUT.

You can check out the Gatsby implementation in the following CodeSandbox example.

I know, some of this stuff is complex. If you just want a simple drop-in solution for performant, asynchronous font loading and FOUT busting, I’ve developed a gatsby-omni-font-loader plugin just for that. It uses the code from this article and I am actively maintaining it. If you have any suggestions, bug reports, or code contributions, feel free to submit them on GitHub.

Conclusion

Content is perhaps the most important component of a user’s experience on a website. We need to make sure content gets top priority and loads as quickly as possible. That means using bare minimum presentation styles (i.e. inlined critical CSS) in the loading process. That is also why web fonts are considered non-critical in most cases — the user can still consume the content without them — so it’s perfectly fine for them to load after the page has rendered.

But that might lead to FOUT and layout shifts, so the font loading listener is needed to make a smooth switch between the fallback system font and the web font.

I’d like to hear your thoughts! Let me know in the comments how you’re tackling the issue of web font loading, render-blocking resources, and FOUT on your projects.



Core Web Vital Tooling

I still think the Google-devised Core Web Vitals are smart. When I first got into caring about performance, it was all: reduce requests! cache things! Make stuff smaller! And while those are all very related to web performance, they are only abstractly related. Actual web performance, to users, is about things like: how long did I have to wait to see the content on the page? How long until I can actually interact with the page, like type in a form or click a link? Did things obnoxiously jump around while I was trying to do something? That’s why Core Web Vitals are smart: they measure those things.

The Lighthouse Tab in Chrome DevTools has them now:

They are nice to keep an eye on because, remember, aside from those numbers having a direct benefit for your users once they get to your site, they might affect users getting to your site at all. Core Web Vitals are factoring into SEO and into the new carousel requirements that were previously reserved only for AMP pages.

Tracking these numbers on one-off audits is useful, but more useful is watching them over time to protect yourself from slipping. Performance tooling like Calibre covers them. New Relic has got it. SpeedCurve tracks them.

Cumulative Layout Shift (CLS) is a tricky one. That’s the one where, say, the site has an advertisement at the top of an article. The request for that ad is asynchronous, so there is a good chance the ad comes in late and pushes the content of the article down. That’s not just annoying, but a real ding to performance metrics and, ultimately, SEO.

Nic Jansma’s “Cumulative Layout Shift in Practice” offers a deep dive.

CLS isn’t just “does the page do it or not?” There is a score, as that illustration above points out. I’d say 0 is a good goal, as there is no version of CLS that is good for anybody. There is lots of nuance to this, like tracking it “synthetically” (e.g. in a headless browser, especially for performance tooling) and with real users on your real site (which is called RUM, or Real User Metrics). Both are useful.
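
As a rough sketch of the RUM side, Google’s web-vitals library can report CLS from real users’ browsers. This assumes the library’s onCLS API (older releases call it getCLS) and a hypothetical /analytics endpoint:

import { onCLS } from "web-vitals";

// Report the final cumulative layout shift value for this page visit.
onCLS((metric) => {
  // sendBeacon survives page unloads, which is when CLS is typically reported.
  navigator.sendBeacon("/analytics", JSON.stringify({ name: metric.name, value: metric.value }));
});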

If you’ve got CLS that you need to fight, that can be tricky. SpeedCurve has some new tooling that helps:

For each layout shift, we show you the filmstrip frame right before and right after the shift. We then draw a red box around the elements that moved, highlighting exactly which elements caused the shift. The Layout Shift Score for each shift also helps you understand the impact of that shift and how it adds to the cumulative score.

That would make it pretty easy to root out and fix, I’d hope. Particularly the tricky ones. I didn’t know this, but CLS can be caused by far more subtle things which Mark Zeman points out in the post. For example:

  • An image carousel that only moves horizontally can trigger CLS. That feels like a bummer since that’s what carousels are supposed to do but, apparently, you can avoid it by moving carousels only with CSS transform (see the sketch after this list).
  • If you have a very large area, that’s extra risky to move. If it moves just a smidge, it will affect CLS by a lot.
  • Flash of Unstyled Text (FOUT) is a cause of CLS. Even though that’s good for performance for other reasons! Catch 22! It’s a good excuse to reach for perfect font fallbacks.
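
For the carousel case, the idea is simply to move the track with transform instead of a layout property, since transform changes don’t count as layout shifts. A minimal sketch with hypothetical class names:

/* Can register as a layout shift: movement via a layout property */
.carousel-track {
  position: relative;
  left: -300px;
}

/* Doesn't affect layout, so it doesn't add to CLS */
.carousel-track {
  transform: translateX(-300px);
}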

Tricky, yet important stuff. I really need to get performance tests into my CI/CD, which will really help with this. Feels more and more like web performance is a full-on career subgenre of web development. Front-end web developers really need to understand this stuff and help to some degree, but we’ve already got so much to do.



Automating Performance Audit Using Lighthouse



A public-facing website needs to run performance audits regularly to check that all the metrics meet the requirements. Metrics differ from team to team and from project to project. Here, the metrics are performance, accessibility, SEO, best practices, and PWA.

Your first performance budget with Lighthouse

Ire Aderinokun writes about a new way to set a performance budget (and stick to it) with Lighthouse, Google’s suite of tools that help developers see how performant and accessible their websites are:

Until recently, I also hadn't setup an official performance budget and enforced it. This isn’t to say that I never did performance audits. I frequently use tools like PageSpeed Insights and take the feedback to make improvements. But what I had never done was set a list of metrics that I needed the site to meet, and enforce them using some automated tool.

The reasons for this were a combination of not knowing what exact numbers I should use for budgets as well as there being a disconnect between setting a budget and testing/enforcing it. This is why I was really excited when, at Google I/O this year, the Lighthouse team announced support for performance budgets that can be integrated with Lighthouse. We can now define a simple performance budget in a JSON file, which will be tested as part of the lighthouse audit!

I completely agree with Ire, and much in the same way I’ve tended to neglect sticking to a performance budget simply because the process of testing was so manual and tedious. But no more! As Ire shows in this post, you can even set Lighthouse up to test your budget with every PR in GitHub. That tool is called lighthousebot and it’s just what I’ve been looking for – an automated and predictable way to integrate a performance budget into every change that I make to a codebase.

Today lighthousebot will comment on your PR after a test is complete and it will show you the before and after score:

How neat is that? This reminds me of Gareth Clubb’s recent post about improving web performance and building a culture around budgets in an organization. What better way to remind everyone about performance than right in GitHub after each and every change that they make?
