The Forensics Of React Server Components (RSCs)

This article is sponsored by Sentry.io

In this article, we’re going to look deeply at React Server Components (RSCs). They are the latest innovation in React’s ecosystem, leveraging both server-side and client-side rendering as well as streaming HTML to deliver content as fast as possible.

We will get really nerdy to get a full understanding of how RSCs fit into the React picture, the level of control they offer over the rendering lifecycle of components, and what page loads look like with RSCs in place.

But before we dive into all of that, I think it’s worth looking back at how React has rendered websites up until this point to set the context for why we need RSCs in the first place.

The Early Days: React Client-Side Rendering

The first React apps were rendered on the client side, i.e., in the browser. As developers, we wrote apps with JavaScript classes as components and packaged everything up using bundlers, like Webpack, in a nicely compiled and tree-shaken heap of code ready to ship in a production environment.

The HTML that returned from the server contained a few things, including:

  • An HTML document with metadata in the <head> and a blank <div> in the <body> used as a hook to inject the app into the DOM;
  • JavaScript resources containing React’s core code and the actual code for the web app, which would generate the user interface and populate the app inside of the empty <div>.

A web app under this process is only fully interactive once JavaScript has fully completed its operations. You can probably already see the tension here that comes with an improved developer experience (DX) that negatively impacts the user experience (UX).

The truth is that there were (and are) pros and cons to CSR in React. Looking at the positives, web applications delivered smooth, quick transitions that reduced the overall time it took to load a page, thanks to reactive components that update with user interactions without triggering page refreshes. CSR lightens the server load and allows us to serve assets from speedy content delivery networks (CDNs) capable of delivering content to users from a server location geographically closer to the user for even more optimized page loads.

There are also not-so-great consequences that come with CSR, most notably perhaps that components could fetch data independently, leading to waterfall network requests that dramatically slow things down. This may sound like a minor nuisance on the UX side of things, but the damage can actually be quite large on a human level. Eric Bailey’s “Modern Health, frameworks, performance, and harm” should be a cautionary tale for all CSR work.

Other negative CSR consequences are not quite as severe but still lead to damage. For example, it used to be that an HTML document containing nothing but metadata and an empty <div> was illegible to search engine crawlers, which never got the fully-rendered experience. While that’s solved today, the SEO hit at the time was an anchor on company sites that relied on search engine traffic to generate revenue.

The Shift: Server-Side Rendering (SSR)

Something needed to change. CSR presented developers with a powerful new approach for constructing speedy, interactive interfaces, but users everywhere were inundated with blank screens and loading indicators to get there. The solution was to move the rendering experience from the client to the server. I know it sounds funny that we needed to improve something by going back to the way it was before.

So, yes, React gained server-side rendering (SSR) capabilities. At one point, SSR was such a hot topic in the React community that it had a real moment in the spotlight. The move to SSR brought significant changes to app development, specifically in how it influenced React behavior and how content could be delivered by way of servers instead of browsers.

Addressing CSR Limitations

With SSR, instead of sending a blank HTML document, we rendered the initial HTML on the server and sent it to the browser. The browser was able to immediately start displaying the content without needing to show a loading indicator. This significantly improves the First Contentful Paint (FCP) performance metric in Web Vitals.

Server-side rendering also fixed the SEO issues that came with CSR. Since the crawlers received the content of our websites directly, they were then able to index it right away. The data fetching that happens initially also takes place on the server, which is a plus because it’s closer to the data source and can eliminate fetch waterfalls if done properly.

Hydration

SSR has its own complexities. For React to make the static HTML received from the server interactive, it needs to hydrate it. Hydration is the process that happens when React reconstructs its Virtual Document Object Model (DOM) on the client side based on what was in the DOM of the initial HTML.

Note: React maintains its own Virtual DOM because it’s faster to figure out updates on it instead of the actual DOM. It synchronizes the actual DOM with the Virtual DOM when it needs to update the UI but performs the diffing algorithm on the Virtual DOM.

We now have two flavors of React:

  1. A server-side flavor that knows how to render static HTML from our component tree,
  2. A client-side flavor that knows how to make the page interactive.

We’re still shipping React and code for the app to the browser because — in order to hydrate the initial HTML — React needs the same components on the client side that were used on the server. During hydration, React performs a process called reconciliation, in which it compares the server-rendered DOM with the client-rendered DOM and tries to identify differences between the two. If it finds differences, React attempts to fix them by rehydrating the component tree and updating the component hierarchy to match the server-rendered structure. And if there are still inconsistencies that cannot be resolved, React throws errors to indicate the problem — commonly known as hydration errors.

SSR Drawbacks

SSR is not a silver bullet solution that addresses CSR limitations. SSR comes with its own drawbacks. Since we moved the initial HTML rendering and data fetching to the server, those servers are now experiencing a much greater load than when we loaded everything on the client.

Remember when I mentioned that SSR generally improves the FCP performance metric? That may be true, but the Time to First Byte (TTFB) performance metric took a negative hit with SSR. The browser literally has to wait for the server to fetch the data it needs, generate the initial HTML, and send the first byte. And while TTFB is not a Core Web Vital metric in itself, it influences them: a poor TTFB leads to poor Core Web Vitals scores.

Another drawback of SSR is that the entire page is unresponsive until client-side React has finished hydrating it. Interactive elements cannot listen and “react” to user interactions before React hydrates them, i.e., React attaches the intended event listeners to them. The hydration process is typically fast, but the internet connection and hardware capabilities of the device in use can slow down rendering by a noticeable amount.

The Present: A Hybrid Approach

So far, we have covered two different flavors of React rendering: CSR and SSR. While each was an attempt to improve on the other, we now get the best of both worlds, so to speak, as SSR has branched into three additional React flavors that offer a hybrid approach in hopes of reducing the limitations that come with CSR and SSR.

We’ll look at the first two — static site generation and incremental static regeneration — before jumping into an entire discussion on React Server Components, the third flavor.

Static Site Generation (SSG)

Instead of regenerating the same HTML code on every request, we came up with SSG. This React flavor compiles and builds the entire app at build time, generating static (as in vanilla HTML and CSS) files that are, in turn, hosted on a speedy CDN.

As you might suspect, this hybrid approach to rendering is a nice fit for smaller projects where the content doesn’t change much, like a marketing site or a personal blog, as opposed to larger projects where content may change with user interactions, like an e-commerce site.

SSG reduces the burden on the server while improving performance metrics related to TTFB because the server no longer has to perform heavy, expensive tasks for re-rendering the page.
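
For a sense of what this looks like in code, here’s a minimal SSG sketch using the Next.js Pages Router (the file name and API endpoint are placeholders):

// pages/blog.js — rendered once at build time, served as static HTML
export async function getStaticProps() {
  // Runs only at build time; the result is baked into the generated HTML.
  const res = await fetch("https://example.com/api/posts");
  const posts = await res.json();
  return { props: { posts } };
}

export default function Blog({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}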

Incremental Static Regeneration (ISR)

One SSG drawback is having to rebuild all of the app’s code when a content change is needed. The content is set in stone — being static and all — and there’s no way to change just one part of it without rebuilding the whole thing.

The Next.js team created the second hybrid flavor of React that addresses the drawback of complete SSG rebuilds: incremental static regeneration (ISR). The name says a lot about the approach in that ISR only rebuilds what’s needed instead of the entire thing. We generate the “initial version” of the page statically during build time but are also able to rebuild any page containing stale data after a user lands on it (i.e., the server request triggers the data check).

From that point on, the server will serve new versions of that page statically in increments when needed. That makes ISR a hybrid approach that is neatly positioned between SSG and traditional SSR.
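
Sticking with the Pages Router sketch from above, opting into ISR is essentially one extra property plus a fallback strategy for pages that haven’t been generated yet (the route and endpoint are again placeholders):

// pages/products/[id].js — static, but regenerated in the background when stale
export async function getStaticPaths() {
  // Pre-render nothing at build time; each page is generated on first request.
  return { paths: [], fallback: "blocking" };
}

export async function getStaticProps({ params }) {
  const res = await fetch(`https://example.com/api/products/${params.id}`);
  const product = await res.json();
  return {
    props: { product },
    // After 60 seconds, the next request triggers a background rebuild of just
    // this page while visitors keep receiving the cached version in the meantime.
    revalidate: 60,
  };
}

export default function Product({ product }) {
  return <h1>{product.name}</h1>;
}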

At the same time, ISR does not address the “stale content” symptom, where users may be served an outdated page while a fresh one is still being generated. Unlike SSG, ISR needs an actual server to regenerate individual pages in response to a user’s browser making a server request. That means we lose the valuable ability to deploy ISR-based apps on a CDN for optimized asset delivery.

The Future: React Server Components

Up until this point, we’ve juggled between CSR, SSR, SSG, and ISR approaches, where all make some sort of trade-off, negatively affecting performance, development complexity, and user experience. Newly introduced React Server Components (RSC) aim to address most of these drawbacks by allowing us — the developer — to choose the right rendering strategy for each individual React component.

RSCs can significantly reduce the amount of JavaScript shipped to the client since we can selectively decide which components to render statically on the server and which to render on the client side. There’s a lot more control and flexibility for striking the right balance for your particular project.

Note: It’s important to keep in mind that as we adopt more advanced architectures, like RSCs, monitoring solutions become invaluable. Sentry offers robust performance monitoring and error-tracking capabilities that help you keep an eye on the real-world performance of your RSC-powered application. Sentry also helps you gain insights into how your releases are performing and how stable they are, which is yet another crucial feature to have while migrating your existing applications to RSCs. Implementing Sentry in an RSC-enabled framework like Next.js is as easy as running a single terminal command.

But what exactly is an RSC? Let’s pick one apart to see how it works under the hood.

The Anatomy of React Server Components

This new approach introduces two types of rendering components: Server Components and Client Components. The differences between these two are not how they function but where they execute and the environments they’re designed for. At the time of this writing, the only way to use RSCs is through React frameworks. And at the moment, there are only three frameworks that support them: Next.js, Gatsby, and RedwoodJS.

Server Components

Server Components are designed to be executed on the server, and their code is never shipped to the browser. The HTML output and any props they might be accepting are the only pieces that are served. This approach has multiple performance benefits and user experience enhancements (a short code sketch follows this list):

  • Server Components allow for large dependencies to remain on the server side.
    Imagine using a large library for a component. If you’re executing the component on the client side, it means that you’re also shipping the full library to the browser. With Server Components, you’re only taking the static HTML output and avoiding having to ship any JavaScript to the browser. Server Components are truly static, and they remove the whole hydration step.
  • Server Components are located much closer to the data sources — e.g., databases or file systems — that they need in order to generate content.
    They also leverage the server’s computational power to speed up compute-intensive rendering tasks and send only the generated results back to the client. They are also generated in a single pass, which avoids request waterfalls and HTTP round trips.
  • Server Components safely keep sensitive data and logic away from the browser.
    That’s thanks to the fact that personal tokens and API keys are executed on a secure server rather than the client.
  • The rendering results can be cached and reused between subsequent requests and even across different sessions.
    This significantly reduces rendering time, as well as the overall amount of data that is fetched for each request.
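
Here’s a minimal sketch of those ideas in the Next.js App Router, assuming a hypothetical db data layer that never ships to the browser:

// app/products/page.js — a Server Component (the App Router default)
import { db } from "@/lib/db"; // hypothetical server-only data layer

export default async function ProductsPage() {
  // Runs only on the server, right next to the data source — no fetch
  // waterfall, and none of this code is shipped to the browser.
  const products = await db.products.findAll();

  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  );
}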

This architecture also makes use of HTML streaming, which means the server defers generating HTML for specific components and instead renders a fallback element in their place while it works on sending back the generated HTML. Streaming Server Components wrap components in <Suspense> tags that provide a fallback value. The implementing framework uses the fallback initially but streams the newly generated content when it’s ready. We’ll talk more about streaming, but let’s first look at Client Components and compare them to Server Components.
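
As a rough sketch, suspending a slow Server Component looks like this (ProductList is a hypothetical async Server Component):

// app/page.js — the shell renders immediately; the list streams in later
import { Suspense } from "react";
import ProductList from "./productList"; // hypothetical async Server Component

export default function Page() {
  return (
    <main>
      <h1>Products</h1>
      <Suspense fallback={<p>🌀 loading products...</p>}>
        <ProductList />
      </Suspense>
    </main>
  );
}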

Client Components

Client Components are the components we already know and love. They’re executed on the client side. Because of this, Client Components are capable of handling user interactions and have access to the browser APIs like localStorage and geolocation.

The term “Client Component” doesn’t describe anything new; they’re merely given the label to help distinguish the “old” CSR components from Server Components. Client Components are defined by a "use client" directive at the top of their files.

"use client"
export default function LikeButton() {
  const likePost = () => {
    // ...
  }
  return (
    <button onClick={likePost}>Like</button>
  )
}

In Next.js, all components are Server Components by default. That’s why we need to explicitly define our Client Components with "use client". There’s also a "use server" directive, but it’s used for Server Actions (which are RPC-like actions that are invoked from the client but executed on the server). You don’t use it to define your Server Components.
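
To keep the two directives straight, here’s a hedged sketch of a Server Action (db is again a hypothetical server-only data layer):

// app/actions.js — a Server Action, not a Server Component
"use server";

import { db } from "@/lib/db"; // hypothetical server-only data layer

export async function likePost(postId) {
  // Invoked from the client (e.g., a form or button), but this body only
  // ever executes on the server, where secrets and the database are safe.
  await db.posts.increment(postId, "likes");
}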

You might (rightfully) assume that Client Components are only rendered on the client, but Next.js renders Client Components on the server to generate the initial HTML. As a result, browsers can immediately start rendering them and then perform hydration later.

The Relationship Between Server Components and Client Components

Client Components can only explicitly import other Client Components. In other words, we’re unable to import a Server Component into a Client Component because of re-rendering issues. But we can have Server Components in a Client Component’s subtree — only passed through the children prop. Since Client Components live in the browser and they handle user interactions or define their own state, they get to re-render often. When a Client Component re-renders, so will its subtree. But if its subtree contains Server Components, how would they re-render? They don’t live on the client side. That’s why the React team put that limitation in place.
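
In code, the allowed pattern looks something like this (component names are illustrative):

// clientWrapper.js — a Client Component that accepts server-rendered children
"use client";

export default function ClientWrapper({ children }) {
  // Re-renders here never re-execute server code; `children` arrives
  // already rendered from the server.
  return <div className="panel">{children}</div>;
}

// page.js — a Server Component passing another Server Component as children
import ClientWrapper from "./clientWrapper";
import ServerInfo from "./serverInfo"; // hypothetical Server Component

export default function Page() {
  return (
    <ClientWrapper>
      <ServerInfo />
    </ClientWrapper>
  );
}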

But hold on! We actually can import Server Components into Client Components. It’s just not a direct one-to-one relationship because the Server Component will be converted into a Client Component. If you’re using server APIs that you can’t use in the browser, you’ll get an error; if not — you’ll have a Server Component whose code gets “leaked” to the browser.

This is an incredibly important nuance to keep in mind as you work with RSCs.

The Rendering Lifecycle

Here’s the order of operations that Next.js takes to stream contents:

  1. The app router matches the page’s URL to a Server Component, builds the component tree, and instructs the server-side React to render that Server Component and all of its child components.
  2. During render, React generates an “RSC Payload”. The RSC Payload informs Next.js about the page and what to expect in return, as well as what to render as fallbacks for any <Suspense> boundaries.
  3. If React encounters a suspended component, it pauses rendering that subtree and uses the suspended component’s fallback value.
  4. Once React has rendered the last static component, Next.js prepares the generated HTML and the RSC Payload and streams them back to the client through one or multiple chunks.
  5. The client-side React then uses the instructions it has for the RSC Payload and client-side components to render the UI. It also hydrates each Client Component as it loads.
  6. The server streams in the suspended Server Components as they become available, as RSC Payloads. Children of Client Components are also hydrated at this time if the suspended component contains any.

We will look at the RSC rendering lifecycle from the browser’s perspective momentarily. For now, the following figure illustrates the steps we just outlined.

RSC Payload

The RSC payload is a special data format that the server generates as it renders the component tree, and it includes the following:

  • The rendered HTML,
  • Placeholders where the Client Components should be rendered,
  • References to the Client Components’ JavaScript files,
  • Instructions on which JavaScript files it should invoke,
  • Any props passed from a Server Component to a Client Component.

There’s no reason to worry much about the RSC payload, but it’s worth understanding what exactly it contains. Let’s examine an example (truncated for brevity) from a demo app I created:

1:HL["/_next/static/media/c9a5bc6a7c948fb0-s.p.woff2","font",{"crossOrigin":"","type":"font/woff2"}]
2:HL["/_next/static/css/app/layout.css?v=1711137019097","style"]
0:"$L3"
4:HL["/_next/static/css/app/page.css?v=1711137019097","style"]
5:I["(app-pages-browser)/./node_modules/next/dist/client/components/app-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
8:"$Sreact.suspense"
a:I["(app-pages-browser)/./node_modules/next/dist/client/components/layout-router.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
b:I["(app-pages-browser)/./node_modules/next/dist/client/components/render-from-template-context.js",["app-pages-internals","static/chunks/app-pages-internals.js"],""]
d:I["(app-pages-browser)/./src/app/global-error.jsx",["app/global-error","static/chunks/app/global-error.js"],""]
f:I["(app-pages-browser)/./src/components/clearCart.js",["app/page","static/chunks/app/page.js"],"ClearCart"]
7:["$","main",null,{"className":"page_main__GlU4n","children":[["$","$Lf",null,{}],["$","$8",null,{"fallback":["$","p",null,{"children":"🌀 loading products..."}],"children":"$L10"}]]}]
c:[["$","meta","0",{"name":"viewport","content":"width=device-width, initial-scale=1"}]...
9:["$","p",null,{"children":["🛍️ ",3]}]
11:I["(app-pages-browser)/./src/components/addToCart.js",["app/page","static/chunks/app/page.js"],"AddToCart"]
10:["$","ul",null,{"children":[["$","li","1",{"children":["Gloves"," - $",20,["$...

To find this code in the demo app, open your browser’s developer tools at the Elements tab and look at the <script> tags at the bottom of the page. They’ll contain lines like:

self.__next_f.push([1,"PAYLOAD_STRING_HERE"]).

Every line from the snippet above is an individual RSC payload. You can see that each line starts with a number or a letter, followed by a colon, and then an array that’s sometimes prefixed with letters. We won’t go too deep into what each one means, but in general:

  • HL payloads are called “hints” and link to specific resources like CSS and fonts.
  • I payloads are called “modules,” and they invoke specific scripts. This is also how Client Components are loaded. If the Client Component is part of the main bundle, it’ll execute. If it’s not (meaning it’s lazy-loaded), a fetcher script is added to the main bundle that fetches the component’s CSS and JavaScript files when it needs to be rendered. There’s going to be an I payload sent from the server that invokes the fetcher script when needed.
  • "$" payloads are DOM definitions generated for a certain Server Component. They are usually accompanied by actual static HTML streamed from the server. That’s what happens when a suspended component becomes ready to be rendered: the server generates its static HTML and RSC Payload and then streams both to the browser.

Streaming

Streaming allows us to progressively render the UI from the server. With RSCs, each component is capable of fetching its own data. Some components are fully static and ready to be sent immediately to the client, while others require more work before loading. Based on this, Next.js splits that work into multiple chunks and streams them to the browser as they become ready. So, when a user visits a page, the server invokes all Server Components, generates the initial HTML for the page (i.e., the page shell), replaces the “suspended” components’ contents with their fallbacks, and streams all of that through one or multiple chunks back to the client.

The server returns a Transfer-Encoding: chunked header that lets the browser know to expect streaming HTML. This prepares the browser for receiving multiple chunks of the document, rendering them as it receives them. We can actually see the header when opening Developer Tools at the Network tab. Trigger a refresh and click on the document request.

We can also debug the way Next.js sends the chunks in a terminal with the curl command:

curl -D - --raw localhost:3000 > chunked-response.txt

You probably see the pattern. For each chunk, the server responds with the chunk’s size before sending the chunk’s contents. Looking at the output, we can see that the server streamed the entire page in 16 different chunks. At the end, the server sends back a zero-sized chunk, indicating the end of the stream.

The first chunk starts with the <!DOCTYPE html> declaration. The second-to-last chunk, meanwhile, contains the closing </body> and </html> tags. So, we can see that the server streams the entire document from top to bottom, then pauses to wait for the suspended components, and finally, at the end, closes the body and HTML before it stops streaming.

Even though the server hasn’t completely finished streaming the document, the browser’s fault tolerance features allow it to draw and invoke whatever it has at the moment without waiting for the closing </body> and </html> tags.

Suspending Components

We learned from the render lifecycle that when a page is visited, Next.js matches the RSC component for that page and asks React to render its subtree in HTML. When React stumbles upon a suspended component (i.e., async function component), it grabs its fallback value from the <Suspense> component (or the loading.js file if it’s a Next.js route), renders that instead, then continues loading the other components. Meanwhile, the RSC invokes the async component in the background, which is streamed later as it finishes loading.

At this point, Next.js has returned a full page of static HTML that includes either the components themselves (rendered in static HTML) or their fallback values (if they’re suspended). It takes the static HTML and RSC payload and streams them back to the browser through one or multiple chunks.

As the suspended components finish loading, React generates HTML recursively while looking for other nested <Suspense> boundaries, generates their RSC payloads and then lets Next.js stream the HTML and RSC Payload back to the browser as new chunks. When the browser receives the new chunks, it has the HTML and RSC payload it needs and is ready to replace the fallback element from the DOM with the newly-streamed HTML. And so on.

In Figures 7 and 8, notice how the fallback elements have a unique ID in the form of B:0, B:1, and so on, while the actual components have a similar ID in a similar form: S:0 and S:1, and so on.

Along with the first chunk that contains a suspended component’s HTML, the server also ships an $RC function (i.e., completeBoundary from React’s source code) that knows how to find the B:0 fallback element in the DOM and replace it with the S:0 template it received from the server. That’s the “replacer” function that lets us see the component contents when they arrive in the browser.

The entire page eventually finishes loading, chunk by chunk.

Lazy-Loading Components

If a suspended Server Component contains a lazy-loaded Client Component, Next.js will also send an RSC payload chunk containing instructions on how to fetch and load the lazy-loaded component’s code. This represents a significant performance improvement because the page load isn’t dragged out by JavaScript, which might not even be loaded during that session.

At the time I’m writing this, the dynamic method to lazy-load a Client Component in a Server Component in Next.js does not work as you might expect. To effectively lazy-load a Client Component, put it in a “wrapper” Client Component that uses the dynamic method itself to lazy-load the actual Client Component. The wrapper will be turned into a script that fetches and loads the Client Component’s JavaScript and CSS files at the time they’re needed.
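
Here’s roughly what such a wrapper might look like, borrowing the addToCart component name from the demo app (treat the specifics as illustrative):

// addToCartWrapper.js — a Client Component whose only job is lazy-loading
"use client";

import dynamic from "next/dynamic";

// The real component's JavaScript and CSS are split out of the main bundle
// and fetched only when this wrapper actually renders.
const AddToCart = dynamic(() => import("./addToCart"), {
  loading: () => <p>Loading…</p>,
});

export default function AddToCartWrapper(props) {
  return <AddToCart {...props} />;
}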

TL;DR

I know that’s a lot of plates spinning and pieces moving around at various times. What it boils down to, however, is that a page visit triggers Next.js to render as much HTML as it can, using the fallback values for any suspended components, and then sends that to the browser. Meanwhile, Next.js triggers the suspended async components and gets them formatted in HTML and contained in RSC Payloads that are streamed to the browser, one by one, along with an $RC script that knows how to swap things out.

The Page Load Timeline

By now, we should have a solid understanding of how RSCs work, how Next.js handles their rendering, and how all the pieces fit together. In this section, we’ll zoom in on what exactly happens when we visit an RSC page in the browser.

The Initial Load

As we mentioned in the TL;DR section above, when visiting a page, Next.js will render the initial HTML minus the suspended components and stream it to the browser as part of the first streaming chunks.

To see everything that happens during the page load, we’ll visit the “Performance” tab in Chrome DevTools and click on the “reload” button to reload the page and capture a profile. Here’s what that looks like:

When we zoom in at the very beginning, we can see the first “Parse HTML” span. That’s the server streaming the first chunks of the document to the browser. The browser has just received the initial HTML, which contains the page shell and a few links to resources like fonts, CSS files, and JavaScript. The browser starts to invoke the scripts.

After some time, we start to see the page’s first frames appear, along with the initial JavaScript scripts being loaded and hydration taking place. If you look at the frame closely, you’ll see that the whole page shell is rendered, and “loading” components are used in the place where there are suspended Server Components. You might notice that this takes place around 800ms, while the browser started to get the first HTML at 100ms. During those 700ms, the browser is continuously receiving chunks from the server.

Bear in mind that this is a Next.js demo app running locally in development mode, so it’s going to be slower than when it’s running in production mode.

The Suspended Component

Fast-forward a few seconds, and we see another “Parse HTML” span in the page load timeline, but this one indicates that a suspended Server Component finished loading and is being streamed to the browser.

We can also see that a lazy-loaded Client Component is discovered at the same time, and it contains CSS and JavaScript files that need to be fetched. These files weren’t part of the initial bundle because the component isn’t needed until later on; the code is split out into its own files.

This way of code-splitting certainly improves the performance of the initial page load. It also makes sure that the Client Component’s code is shipped only if it’s needed. If the Server Component (which acts as the Client Component’s parent component) throws an error, then the Client Component does not load. It doesn’t make sense to load all of its code before we know whether the component will render at all.

Figure 12 shows the DOMContentLoaded event is reported at the end of the page load timeline. And, just before that, we can see that the localhost HTTP request comes to an end. That means the server has likely sent the last zero-sized chunk, indicating to the client that the data is fully transferred and that the streaming communication can be closed.

The End Result

The main localhost HTTP request took around five seconds, but thanks to streaming, we began seeing page contents load much earlier than that. If this were a traditional SSR setup, we would likely be staring at a blank screen for those five seconds before anything arrived. On the other hand, if this were a traditional CSR setup, we would likely have shipped a lot more JavaScript and put a heavy burden on both the browser and network.

This way, however, the app was fully interactive during those five seconds. We were able to navigate between pages and interact with Client Components that had loaded as part of the initial main bundle. This is a pure win from a user experience standpoint.

Conclusion

RSCs mark a significant evolution in the React ecosystem. They leverage the strengths of server-side and client-side rendering while embracing HTML streaming to speed up content delivery. This approach not only addresses the SEO and loading time issues we experience with CSR but also improves SSR by reducing server load, thus enhancing performance.

I’ve refactored the same RSC app I shared earlier so that it uses the Next.js Page router with SSR. The improvements in RSCs are significant:

Looking at these two reports I pulled from Sentry, we can see that streaming allows the page to start loading its resources before the actual request finishes. This significantly improves the Web Vitals metrics, which we see when comparing the two reports.

The conclusion: Users enjoy faster, more reactive interfaces with an architecture that relies on RSCs.

The RSC architecture introduces two new component types: Server Components and Client Components. This division helps React and the frameworks that rely on it — like Next.js — streamline content delivery while maintaining interactivity.

However, this setup also introduces new challenges in areas like state management, authentication, and component architecture. Exploring those challenges is a great topic for another blog post!

Despite these challenges, the benefits of RSCs present a compelling case for their adoption. We definitely will see guides published on how to address RSC’s challenges as they mature, but, in my opinion, they already look like the future of rendering practices in modern web development.

How to Reduce Time to First Byte (TTFB) in WordPress – Expert Tips

Do you want to improve your WordPress website’s performance and reduce time to first byte (TTFB)?

When optimizing a WordPress site’s load time, many people overlook the server side. Reducing TTFB (Time To First Byte) will help speed up your site and provide a better user experience.

In this article, we will show you how to reduce TTFB in WordPress.

What is Time to First Byte (TTFB)?

TTFB, or time to first byte, is the time a server takes to respond to a request and load a web page in the user’s browser.

In simpler terms, TTFB measures the time between a user requesting a web page and the browser starting to receive a response from the website server.

The longer it takes for a server to send the first byte of data, the longer it takes a browser to display your website. Several factors go into calculating TTFB. For instance, it takes into account the DNS lookup, the TLS/SSL handshake, and more.

That said, let’s see why it is important to reduce TTFB.

Why Reduce TTFB in WordPress?

Time to first byte is one of the factors that can impact the overall speed of your WordPress site, and it is an important metric to keep an eye on.

TTFB reflects the responsiveness of your site’s server, and reducing it will help you provide a better user experience. Your visitors won’t have to wait for web pages to load. In return, it will help you boost conversions, get more leads, and generate sales.

According to research, a 1-second delay in page load time can lead to a 7% drop in conversions, a 16% decrease in customer satisfaction, and an 11% loss in page views.

Besides that, improving the TTFB score can also boost your WordPress SEO.

Google uses what it calls Core Web Vitals to measure performance and overall user experience on a website.

TTFB is not a Core Web Vitals metric, but it can be used for diagnosis purposes. Since it measures how fast a web server responds, you can use TTFB to figure out if something is wrong and impacting the overall Core Web Vitals of your website.

That said, let’s look at different ways to measure time to first byte.

How to Check TTFB on Your Website

You can use different tools and software to check the time to first byte (TTFB) of your WordPress website.

Measure TTFB Using Google PageSpeed Insights

Google PageSpeed Insights is a free tool by Google that analyzes your page speed on mobile and desktop. It gives an overall rating out of 100 and measures Core Web Vitals along with other metrics, including time to first byte.

First, you’ll need to visit the Google PageSpeed Insights website and enter your website URL. After that, simply click the ‘Analyze’ button.

The tool will then analyze your website and show results.

You can then view the time to first byte (TTFB) score and other metrics.

Measure TTFB Using Google Chrome

You can also use your Google Chrome’s developer tools to view the time to first byte.

First, you can right-click on your webpage and go to the ‘Inspect’ option. Alternatively, you can also press Ctrl + Shift + I for Windows or Cmd + Opt + I for Mac on your keyboard to open inspect element tools.

Next, you can switch to the ‘Network’ tab.

After that, simply hover your mouse over the green bars under the Waterfall column.

You will now see a popup with different metrics.

Go ahead and note the ‘Waiting for server response’ time, as this will show you the TTFB for your website.

Measure TTFB Using GTmetrix

Another way to measure the TTFB of your WordPress site is by using GTmetrix. It is a free tool that also measures your site speed.

Simply visit the GTmetrix website and enter your site URL. After that, go ahead and click the ‘Analyze’ button.

It will take a few minutes for the tool to analyze your site and show the results.

Next, you can switch to the ‘Waterfall’ tab to view the response time for your web page resources and elements. GTmetrix will show TTFB as ‘Waiting’ in the data.

Expert Tips to Reduce TTFB in WordPress

Now that you know how to measure TTFB, the next step is to lower it and improve the site’s performance.

Let’s look at different steps you can take to reduce time to first byte on your WordPress website.

1. Ensure WordPress, Plugins, and Themes Are Up to Date

When you’re optimizing your site for TTFB and improving overall performance, the easiest thing to do is make sure that you’re running the latest version of WordPress.

Each new WordPress version comes with performance improvements. This could mean optimizing the database queries that WordPress runs, resolving bugs that would slow down your site, and boosting the overall efficiency of your site.

You can learn more by following our guide on how to safely update WordPress.

Similarly, you should also ensure that WordPress plugins and themes are up to date. Just like WordPress, newer versions of plugins and themes can include performance optimization that can speed up your site.

Plus, you should also check whether a plugin or theme is slowing down your website and increasing TTFB. You can run a website speed test with the plugin activated and again with it deactivated to rule out any issues.

If you’re running older versions of plugins and themes and not sure how to update them, then please see our guide on how to properly update WordPress plugins and how to update WordPress themes without losing customization.

2. Update Your WordPress Site’s PHP Version

Updating the PHP version can also significantly improve your site’s performance and lower the time to first byte.

PHP is the open-source programming language in which WordPress is written. Each new version of PHP improves performance by making processes more efficient and reducing memory usage. This reduces the load on your website server when loading web pages.

Updating the PHP version also helps strengthen your WordPress security. It prevents hackers from exploiting an older PHP version and accessing your website.

You can follow our complete guide on how to update the PHP version in WordPress to learn more.

3. Use a Caching WordPress Plugin

Another simple way to reduce time to first byte (TTFB) is by using a caching plugin for WordPress.

Caching stores a temporary copy of your web page after the first load that can be accessed quickly upon request. This speeds up the process, as WordPress won’t have to go through all the steps of generating a page. It also lowers server response time and, with it, TTFB.

Most WordPress hosting providers offer caching with their hosting plans. However, you can also use standalone caching plugins for WordPress.

For instance, WP Rocket is one of the best caching plugins and is beginner-friendly. It automatically optimizes your site to improve performance and offers features like lazy image loading, DNS pre-fetching, and more.

You can also see our guide to improve WordPress speed and performance for more tips.

4. Add Content Delivery Network (CDN) to WordPress

Along with a caching plugin, you can also use a content delivery network (CDN) to reduce the TTFB of your WordPress site.

A CDN is a network of servers that delivers cached content from your websites to a user based on their geographic location.

This speeds up the process of displaying web pages to users that are located far away from your website server. People won’t have to wait for the page request to travel all the way to the server location. Instead, a CDN will instantly show a cached version of that page.

You can see our list of the best WordPress CDN services to choose the most suitable option for your business.

5. Optimize Your WordPress Database

You can also optimize your database and compress website files to lower the time to first byte and improve performance.

If your site’s database contains unnecessary information and hasn’t been cleaned in a while, then it can increase TTFB. For instance, trashed posts, post revisions, and spam comments can sit in the database and impact the TTFB.

You can manually delete these to clear the database or use a WordPress plugin to handle everything for you. To learn more, please see our guide on WordPress database management.

6. Switch to the Fastest Hosting Service

Choosing the right hosting provider for your WordPress website is important. A reputable hosting service is optimized for speed and ensures high performance.

At WPBeginner, we conducted a test to find the fastest hosting service. We used multiple third-party tools like Pingdom, Load Impact (k6), and Bitcatcha to test the performance of each provider.

The results revealed Hostinger to be the fastest hosting service, followed by DreamHost and WP Engine.

You can find all the details in our guide on the fastest WordPress hosting performance test.

FAQs About Time to First Byte (TTFB)

Here are some common questions our users have asked us about the time to first byte (TTFB).

What is a good TTFB?

According to Google Chrome developers, a good TTFB is under 0.8 seconds. However, this number depends on the content you have on your page. For instance, a static page would have a lower TTFB compared to a dynamic page.

What is included in TTFB?

TTFB measures the time it takes a user’s browser to receive the first byte of data from the website server. It includes multiple steps, like the DNS lookup, the TLS/SSL handshake, and more.

How is TTFB measured?

You can use different third-party tools like GTmetrix or Google PageSpeed Insights to measure TTFB. You can also use the dev tools in Google Chrome to view the ‘Waiting for server response’ time and check TTFB.
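
If you’re comfortable with the command line, curl can report it directly, too. This one-liner prints the time until the first byte arrives (swap in your own URL):

curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com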

Why is my TTFB so high?

There can be several reasons for a high TTFB. For instance, a slow website server, a distant server location, slow DNS response times, heavy pages with lots of images and videos, and configuration issues can all lead to a high TTFB.

We hope this article helped you learn how to reduce TTFB in WordPress. You may also want to see our guide on how to speed up your WooCommerce store and the most common WordPress errors.


How We Optimized Performance To Serve A Global Audience

I work for Bookaway, a digital travel brand. As an online booking platform, we connect travelers with transport providers worldwide, offering bus, ferry, train, and car transfers in over 30 countries. We aim to eliminate the complexity and hassle associated with travel planning by providing a one-stop solution for all transportation needs.

A cornerstone of our business model lies in the development of effective landing pages. These pages serve as a pivotal tool in our digital marketing strategy: they not only provide valuable information about our services but are also designed to be easily discoverable through search engines. Although landing pages are a common practice in online marketing, we wanted to make the most of them.

SEO is key to our success. It increases our visibility and enables us to draw a steady stream of organic (or “free”) traffic to our site. While paid marketing strategies like Google Ads play a part in our approach as well, enhancing our organic traffic remains a major priority. The higher our organic traffic, the more profitable we become as a company.

We’ve known for a long time that fast page performance influences search engine rankings. It was only in 2020, though, that Google shared its concept of Core Web Vitals and how it impacts SEO efforts. Our team at Bookaway recently underwent a project to improve Web Vitals, and I want to give you a look at the work it took to get our existing site in full compliance with Google’s standards and how it impacted our search presence.

SEO And Web Vitals

In the realm of search engine optimization, performance plays a critical role. As the world’s leading search engine, Google is committed to delivering the best possible search results to its users. This commitment involves prioritizing websites that offer not only relevant content but also an excellent user experience.

Google’s Core Web Vitals is a set of performance metrics that site owners can use to evaluate performance and diagnose performance issues. These metrics provide a different perspective on user experience:

  • Largest Contentful Paint (LCP)
    Measures the time it takes for the main content on a webpage to load.
  • First Input Delay (FID)
    Assesses the time it takes for a page to become interactive.
    Note: Google plans to replace this metric with another one called Interaction to Next Paint (INP) beginning in 2024.
  • Cumulative Layout Shift (CLS)
    Calculates the visual stability of a page.

While optimizing for FID and CLS was relatively straightforward, LCP posed a greater challenge due to the multiple factors involved. LCP is particularly vital for landing pages, which are predominantly content and often the first touch-point a visitor has with a website. A low LCP ensures that visitors can view the main content of your page sooner, which is critical for maintaining user engagement and reducing bounce rates.

Largest Contentful Paint (LCP)

LCP measures the perceived load speed of a webpage from a user’s perspective. It pinpoints the moment during a page’s loading phase when the primary — or “largest” — content has been fully rendered on the screen. This could be an image, a block of text, or even an embedded video. LCP is an essential metric because it gives a real-world indication of the user experience, especially for content-heavy sites.

However, achieving a good LCP score is often a multi-faceted process that involves optimizing several stages of loading and rendering. Each stage has its unique challenges and potential pitfalls, as other case studies show.

Here’s a breakdown of the moving pieces.

Time To First Byte (TTFB)

This is the time it takes for the first piece of information from the server to reach the user’s browser. You need to beware that slow server response times can significantly increase TTFB, often due to server overload, network issues, or un-optimized logic on the server side.

Download Time of HTML

This is the time it takes to download the page’s HTML file. You need to beware of large HTML files or slow network connections because they can lead to longer download times.

HTML Processing

Once a web page’s HTML file has been downloaded, the browser begins to process the contents line by line, translating code into the visual website that users interact with. If, during this process, the browser encounters a <script> or <style> tag that lacks an async or defer attribute, the rendering of the webpage comes to a halt.

The browser must then pause to fetch and parse the corresponding files. These files can be complex and potentially take a significant amount of time to download and interpret, leading to a noticeable delay in the loading and rendering of the webpage. This is why the async and defer attributes are crucial, as they ensure an efficient, seamless web browsing experience.
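
For reference, here’s what the difference looks like in markup (the script path is a placeholder):

<!-- Blocks HTML parsing while it downloads and executes: -->
<script src="/js/app.js"></script>

<!-- Downloads in parallel; executes after the document is parsed: -->
<script src="/js/app.js" defer></script>

<!-- Downloads in parallel; executes the moment it finishes downloading: -->
<script src="/js/app.js" async></script>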

Fetching And Decoding Images

This is the time taken to fetch, download, and decode images, particularly the largest contentful image. You need to look out for large image file sizes or improperly optimized images that can delay the fetching and decoding process.

First Contentful Paint (FCP)

This is the time it takes for the browser to render the first bit of content from the DOM. You need to beware of slow server response times, particularly render-blocking JavaScript or CSS, or slow network connections, all of which can negatively affect FCP.

Rendering the Largest Contentful Element

This is the time taken until the largest contentful element (like a hero image or heading text) is fully rendered on the page. You need to watch out for complex design elements, large media files, and slow browser rendering, all of which can delay the time it takes for the largest contentful element to render.

Understanding and optimizing each of these stages can significantly improve a website’s LCP, thereby enhancing the user experience and SEO rankings.

I know that is a lot of information to unpack in a single sitting, and it definitely took our team time to wrap our minds around what it takes to achieve a low LCP score. But once we had a good understanding, we knew exactly what to look for and began analyzing the analytics of our user data to identify areas that could be improved.

Analyzing User Data

To effectively monitor and respond to our website’s performance, we need a robust process for collecting and analyzing this data.

Here’s how we do it at Bookaway.

Next.js For Performance Monitoring

Many of you reading this may already be familiar with Next.js: a popular open-source JavaScript framework that, among other things, allows us to monitor our website’s performance in real time.

One of the key Next.js features we leverage is the reportWebVitals function, a hook that allows us to capture the Web Vitals metrics for each page load. We can then forward this data to a custom analytics service. Most importantly, the function provides us with in-depth insights into our user experiences in real-time, helping us identify any performance issues as soon as they arise.
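
As a sketch of how this might be wired up in the Pages Router (the /api/vitals endpoint is a hypothetical collection route):

// pages/_app.js (excerpt) — Next.js calls this for every Web Vitals entry
export function reportWebVitals(metric) {
  const body = JSON.stringify({
    name: metric.name, // e.g., "LCP", "FID", "CLS", "TTFB"
    value: metric.value, // milliseconds (CLS is a unitless score)
    route: window.location.pathname,
  });

  // sendBeacon survives page unloads; fall back to fetch when unavailable.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/api/vitals", body);
  } else {
    fetch("/api/vitals", { method: "POST", body, keepalive: true });
  }
}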

Storing Data In BigQuery For Comprehensive Analysis

Once we capture the Web Vitals metrics, we store this data in BigQuery, Google Cloud’s fully-managed, serverless data warehouse. Alongside the Web Vitals data, we also record a variety of other important details, such as the date of the page load, the route, whether the user was on a mobile or desktop device, and the language settings. This comprehensive dataset allows us to examine our website’s performance from multiple angles and gain deeper insights into the user experience.

The screenshot features an SQL query from a data table, focusing on the LCP web vital. It shows the retrieval of LCP values (in milliseconds) for specific visits across three unique page URLs that, in turn, represent three different landing pages we serve:

These values indicate how quickly major content items on these pages become fully visible to users.

Visualizing Data with Looker Studio

We visualize performance data using Google’s Looker Studio (formerly called Data Studio). By transforming our raw data into interactive dashboards and reports, we can easily identify trends, pinpoint issues, and monitor improvements over time. These visualizations empower us to make data-driven decisions that enhance our website’s performance and, ultimately, improve our users’ experience.

Looker Studio offers a few key advantages:

  • Easy-to-use interface
    Looker Studio is intuitive and user-friendly, making it easy for anyone on our team to create and customize reports.
  • Real-time data
    Looker Studio can connect directly to BigQuery, enabling us to create reports using real-time data.
  • Flexible and customizable
    Looker Studio enables us to create customized reports and dashboards that perfectly suit our needs.

Here are some examples:

This screenshot shows a crucial functionality we’ve designed within Looker Studio: the capability to filter data by specific groups of pages. This custom feature proves to be invaluable in our context, where we need granular insights about different sections of our website. As the image shows, we’re honing in on our “Route Landing Page” group. This subset of pages has experienced over one million visits in the last week alone, highlighting the significant traffic these pages attract. This demonstration exemplifies how our customizations in Looker Studio help us dissect and understand our site’s performance at a granular level.

The graph presents the LCP values for the 75th percentile of our users visiting the Route Landing Page group. This percentile represents the user experience of the “average” user, excluding outliers who may have exceptionally good or poor conditions.

A key advantage of using Looker Studio is its ability to segment data based on different variables. In the following screenshot, you can see that we have differentiated between mobile and desktop traffic.

Understanding The Challenges

In our journey, the key performance data we gathered acted as a compass, pointing us toward specific challenges that lay ahead. Influenced by factors such as global audience diversity, seasonality, and the intricate balance between static and dynamic content, these challenges surfaced as crucial areas of focus. It is within these complexities that we found our opportunity to refine and optimize web performance on a global scale.

Seasonality And A Worldwide Audience

As an international platform, Bookaway serves a diverse audience from various geographic locations. One of the key challenges that come with serving a worldwide audience is the variation in network conditions and device capabilities across different regions.

Adding to this complexity is the effect of seasonality. Much like physical tourism businesses, our digital platform also experiences seasonal trends. For instance, during winter months, our traffic increases from countries in warmer climates, such as Thailand and Vietnam, where it’s peak travel season. Conversely, in the summer, we see more traffic from European countries where it’s the high season for tourism.

The variation in our performance metrics, correlated with geographic shifts in our user base, points to a clear area of opportunity. We realized that we needed to consider a more global and scalable solution to better serve our global audience.

This understanding prompted us to revisit our approach to content delivery, which we’ll get to in a moment.

Layout Shifts From Dynamic And Static Content

We have been using dynamic content serving, where each request reaches our back-end server and triggers processes like database retrievals and page renderings. This server interaction is reflected in the TTFB metric, which measures the duration from the client making an HTTP request to the first byte being received by the client’s browser. The shorter the TTFB, the better the perceived speed of the site from the user’s perspective.

While dynamic serving provides simplicity in implementation, it imposes significant time costs due to the computational resources required to generate the pages and the latency involved in serving these pages to users at distant locations.

We recognize the potential benefits of serving static content, which involves delivering pre-generated HTML files like you would see in a Jamstack architecture. This could significantly improve the speed of our content delivery as it eliminates the need for on-the-fly page generation, thereby reducing TTFB. It also opens up the possibility for more effective use of caching strategies, potentially enhancing load times further.

As we envisage a shift from dynamic to static content serving, we anticipate it to be a crucial step toward improving our LCP metrics and providing a more consistent user experience across all regions and seasons.

In the following sections, we’ll explore the potential challenges and solutions we could encounter as we consider this shift. We’ll also discuss our thoughts on implementing a Content Delivery Network (CDN), which could allow us to fully leverage the advantages of static content serving.

Leveraging A CDN For Content Delivery

I imagine many of you already understand what a CDN is, but it is essentially a network of servers, often referred to as “edges.” These edge servers are distributed in data centers across the globe. Their primary role is to store (or “cache”) copies of web content — like HTML pages, JavaScript files, and multimedia content — and deliver it to users based on their geographic location.

When a user makes a request to access a website, the DNS routes the request to the edge server that’s geographically closest to the user. This proximity significantly reduces the time it takes for the data to travel from the server to the user, thus reducing latency and improving load times.

A key benefit of this mechanism is that it effectively transforms dynamic content delivery into static content delivery. When the CDN caches a pre-rendered HTML page, no additional server-side computations are required to serve that page to the user. This not only reduces load times but also reduces the load on our origin servers, enhancing our capacity to serve high volumes of traffic.

If the requested content is cached on the edge server and the cache is still fresh, the CDN can immediately deliver it to the user. If the cache has expired or the content isn’t cached, the CDN will retrieve the content from the origin server, deliver it to the user, and update its cache for future requests.
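In pseudocode-level terms, the decision an edge server makes looks something like the sketch below. This is a toy model in TypeScript, not CloudFront's internals; the origin URL and the three-day TTL are illustrative:

// A toy model of the freshness check a CDN edge performs.
type CachedPage = { body: string; storedAt: number; maxAgeMs: number };

const edgeCache = new Map<string, CachedPage>();
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

async function handleRequest(path: string): Promise<string> {
  const hit = edgeCache.get(path);

  // Fresh hit: serve straight from the edge, no origin round trip.
  if (hit && Date.now() - hit.storedAt < hit.maxAgeMs) {
    return hit.body;
  }

  // Miss or expired: fetch from the origin, serve it, and refresh
  // the cache for the next visitor.
  const res = await fetch(`https://origin.example.com${path}`);
  const body = await res.text();
  edgeCache.set(path, { body, storedAt: Date.now(), maxAgeMs: THREE_DAYS_MS });
  return body;
}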

This caching mechanism also improves the website’s resilience to distributed denial-of-service (DDoS) attacks. By serving content from edge servers and reducing the load on the origin server, the CDN provides an additional layer of security. This protection helps ensure the website remains accessible even under high-traffic conditions.

CDN Implementation

Recognizing the potential benefits of a CDN, we decided to implement one for our landing pages. As our entire infrastructure is already hosted by Amazon Web Services (AWS), choosing Amazon CloudFront as our CDN solution was immediate and obvious. Its robust infrastructure, scalability, and wide network of edge locations around the world made it a strong candidate.

During the implementation process, we configured a key setting known as max-age. This determines how long a page remains “fresh.” We set this property to three days, and for those three days, any visitor who requests a page is quickly served with the cached version from the nearest edge location. After the three-day period, the page would no longer be considered “fresh.” The next visitor requesting that page wouldn’t receive the cached version from the edge location but would have to wait for the CDN to reach our origin servers and generate a fresh page.
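On the origin side, a freshness window like this is conventionally expressed with the standard Cache-Control header. A minimal sketch, assuming a plain Node server (259,200 seconds is the three days described above; the exact CloudFront behavior is configured on the distribution):

import { createServer } from "node:http";

createServer((req, res) => {
  // 259,200 seconds = 3 days of freshness for any cache in front of us.
  res.setHeader("Cache-Control", "public, max-age=259200");
  res.end("<html>…</html>");
}).listen(3000);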

This approach offered an exciting opportunity for us to enhance our web performance. However, transitioning to a CDN system also posed new challenges, particularly with the multitude of pages that were rarely visited. The following sections will discuss how we navigated these hurdles.

Addressing Many Pages With Rare Visits

Adopting the AWS CloudFront CDN significantly improved our website’s performance. However, it also introduced a unique problem: our “long tail” of rarely visited pages. With over 100,000 landing pages, each available in seven different languages, we managed a total of around 700,000 individual pages.

Many of these pages were rarely visited. Individually, each accounted for a small percentage of our total traffic. Collectively, however, they made up a substantial portion of our web content.

The infrequency of visits meant that our CDN’s max-age setting of three days would often expire without a page being accessed in that timeframe. This resulted in these pages falling out of the CDN’s cache. Consequently, the next visitor requesting that page would not receive the cached version. Instead, they would have to wait for the CDN to reach our origin server and fetch a fresh page.

To address this, we adopted a strategy known as stale-while-revalidate. This approach allows the CDN to serve a stale (or expired) page to the visitor, while simultaneously validating the freshness of the page with the origin server. If the server’s page is newer, it is updated in the cache.
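In header terms, stale-while-revalidate is one more Cache-Control directive. Extending the earlier sketch with an illustrative one-day revalidation window (the article doesn't state the exact value Bookaway used):

import { createServer } from "node:http";

createServer((req, res) => {
  // Fresh for 3 days; after that, a stale copy may still be served
  // for up to 1 day while the cache revalidates in the background.
  res.setHeader(
    "Cache-Control",
    "public, max-age=259200, stale-while-revalidate=86400"
  );
  res.end("<html>…</html>");
}).listen(3000);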

This strategy had an immediate impact. We observed a marked and continuous enhancement in the performance of our long-tail pages. It allowed us to ensure a consistently speedy experience across our extensive range of landing pages, regardless of their frequency of visits. This was a significant achievement in maintaining our website’s performance while serving a global audience.

I am sure you are interested in the results. We will examine them in the next section.

Performance Optimization Results

Our primary objective in these optimization efforts was to reduce the LCP metric, a crucial aspect of our landing pages. The implementation of our CDN solution had an immediate positive impact, reducing LCP from 3.5 seconds to 2 seconds. Further applying the stale-while-revalidate strategy resulted in an additional decrease in LCP, bringing it down to 1.7 seconds.

A key component in the sequence of events leading to LCP is the TTFB, which measures the time from the user’s request to the receipt of the first byte of data by the user’s browser. The introduction of our CDN solution prompted a dramatic decrease in TTFB, from 2 seconds to 1.24 seconds.

Stale-While-Revalidate Improvement

This substantial reduction in TTFB was primarily achieved by transitioning to static content delivery, eliminating the need for back-end server processing for each request, and by capitalizing on CloudFront’s global network of edge locations to minimize network latency. This allowed users to fetch assets from a geographically closer source, substantially reducing processing time.

Therefore, it’s crucial to highlight that

The significant improvement in TTFB was one of the key factors that contributed to the reduction in our LCP time. This demonstrates the interdependent nature of web performance metrics and how enhancements in one area can positively impact others.

The overall LCP improvement — thanks to stale-while-revalidate — was around 15% for the 75th percentile.

User Experience Results

The “Page Experience” section in Google Search Console evaluates your website’s user experience through metrics like load times, interactivity, and content stability. It also reports on mobile usability, security, and best practices such as HTTPS. The screenshot below illustrates the substantial improvement in our site’s performance due to our implementation of the stale-while-revalidate strategy.

Conclusion

I hope that documenting the work we did at Bookaway gives you a good idea of the effort that it takes to tackle improvements for Core Web Vitals. Even though there is plenty of documentation and tutorials about them, I know it helps to know what it looks like in a real-life project.

And since everything I have covered in this article is based on a real-life project, it’s entirely possible that the insights we discovered at Bookaway will differ from yours. Where LCP was the primary focus for us, you may very well find that another Web Vital metric is more pertinent to your scenario.

That said, here are the key lessons I took away from my experience:

  • Optimize Website Loading and Rendering.
    Pay close attention to the stages of your website’s loading and rendering process. Each stage — from TTFB, download time of HTML, and FCP, to fetching and decoding of images, parsing of JavaScript and CSS, and rendering of the largest contentful element — needs to be optimized. Understand potential pitfalls at each stage and make necessary adjustments to improve your site’s overall user experience.
  • Implement Performance Monitoring Tools.
    Utilize tools such as Next.js for real-time performance monitoring and BigQuery for storing and analyzing data. Visualizing your performance data with tools like Looker Studio can help provide valuable insights into your website’s performance, enabling you to make informed, data-driven decisions.
  • Consider Static Content Delivery and CDN.
    Transitioning from dynamic to static content delivery can greatly reduce the TTFB and improve site loading speed. Implementing a CDN can further optimize performance by serving pre-rendered HTML pages from edge servers close to the user’s location, reducing latency and improving load times.

Further Reading On SmashingMag

Our Managed WordPress Hosting Test Results Are In…

Earlier this week we posted a detailed breakdown on how we’ve been performance testing WPMU DEV managed WordPress hosting against our primary competition.

In this post we’re going to share with you exactly how each host did.

And usually, whoever does these comparisons wins them, right?

Well, not this time (ooooo!)…

Here’s a quick recap of the hosting testing methodology we used, that you can replicate, for free, at home.

Basically, we…

  1. Took our top 8 hosting competitors based on general popularity and our members' hosting usage, and tested the performance of their base managed WordPress plan versus ours, specifically: GoDaddy, Flywheel, WP Engine, Cloudways, SiteGround, BlueHost, Kinsta, and HostGator.
  2. Made an account with each host at their entry-level (base) managed WordPress plan (apart from Cloudways, as they don't do managed WP) and created the exact same test website on each platform.
  3. Ran each host through a rigorous load test (to see how many users they can handle at the same time) using the awesome and freely available Loader.io – you can go run your own tests right now to see how you do.
  4. Put each host's speed and Time To First Byte (TTFB) to the test with the equally free KeyCDN performance testing tool – again, go check it out and test your own host.
  5. Established how many parallel clients (read: users visiting the site at the same time) each host could take.
  6. Worked out TTFB in what we think is the fairest way (as it can vary dramatically based on server location): TTFB Average (Geo-Optimized) and TTFB Average (All Locations).
  7. Did all this without implementing caching or CDNs, so you get to test the actual server in real dynamic conditions (much more on that decision in our methodology post; tl;dr you can put any host behind a great CDN and serve static pages at blazing speed, but WP isn't about that… although we are open to adding that as a test too).

Alright, now you’re all caught up, let’s not delay any further.

Dev Man might be in for a surprise with these results.

Here’s how our base plan fared against some of the most popular managed WordPress hosting providers on the web:

The raw results:

A look at the results of our WordPress hosting tests
*As of September 2020. Based on starting plans for each platform.

How each host ranked in each category:

Max Parallel Clients (how many users the host can handle at once)

1. Kinsta – 170
2. WPMU DEV – 140
3. Cloudways – 70
4. WP Engine – 50
4. Flywheel – 50
4. SiteGround – 50
5. Bluehost – 40
5. GoDaddy – 40
5. HostGator – 40

TTFB Average (speed of server response averaged across the globe)

1. GoDaddy – 332ms
2. Cloudways – 402ms
3. WPMU DEV – 476ms
4. WP Engine – 511ms
5. Kinsta – 622ms
6. SiteGround – 683ms
7. HostGator – 912ms
8. Bluehost – 1.5s
9. Flywheel – 1.7s

TTFB Best (the fastest response recorded, we assume this is down to geolocation)

1. Kinsta – 35.15ms
2. Cloudways – 53.34ms
3. GoDaddy – 66.5ms
4. WPMU DEV – 81.14ms
5. WP Engine – 170.23ms
6. SiteGround – 190.09ms
7. HostGator – 520.68ms
8. Bluehost – 1.2s
9. Flywheel – 1.35s

A quick summary of the results…

When it came to the maximum number of parallel clients each server handled during the load test, Kinsta came out on top with 170 concurrent users – followed closely by us with 140.

As we touched on in our methodology post, these hosts are the ones (metaphorically) letting the most people into the bar at the same time thanks to their higher parallel client numbers.

So that’s great work by Kinsta, being able to cope with that many users visiting your site on your base plan is pretty impressive, although we’re pretty chuffed about our second place.

In terms of speed, Kinsta also took out the TTFB (Geo-Optimized) category with the speediest TTFB time (35.15ms) of them all… we’re betting that KeyCDN and their servers are not all that far apart.

And lastly, the TTFB Average (All Locations) crown went to GoDaddy, with an average TTFB time of 332ms over the 10 locations that KeyCDN accounted for. Nice work to the big GD!

We came 3rd and 4th respectively in both TTFB categories, which we’re pretty happy about.

Of course, we do offer a selection of geolocation options on our base plan. So if you value speed in, say, the US East Coast, the UK, or Germany the most – we should hopefully win that for you with our geolocated servers.

Taking price into consideration…

If cost wasn’t an issue and we had to pick an overall winner from the testing, it would have to be Kinsta, as they took home first place in two of the three hosting performance categories. Nice work Kinsta!

But, of course, if we’re comparing apples with apples we have to also look at pricing. Which, handily, we include below:

Another look at the test results, and host prices as well
*Sept 2020, managed WP plans, renewal prices, annual discounts applied, rounded up.

A few notes on the pricing:

  • It’s accurate as of September 2020.
  • All prices are in USD and retrieved via US VPN in incognito.
  • We’re only listing renewal prices (no initial discounts or multi-year lock-ins) but we are including annual discounts.
  • We’ve rounded up .99 (GoDaddy & BlueHost) and .95 (HostGator).
  • Cloudways is not a managed WP platform but is included due to our members' usage, so site limits don't apply; we're choosing Digital Ocean with them.

So… how does WPMU DEV hosting rate now?

Considering the cost, we’d like to think that we offer the best value for money in terms of performance and load.

While Kinsta is obviously a great choice for high performance on their base plan, you'd have to realistically test them against our silver or gold plans ($18.75 and $37.50 respectively) if you're looking for a fair comparison.

GoDaddy is clearly fast (their CDN is great too btw) but we reckon we’ve given them a good run for their money.

But probably, after all this, we’d say that the host that’s most comparable with us is Cloudways because, well, we use the same partner (Digital Ocean) and as you can see we rank very similarly.

A big advantage for some users for Cloudways would be that you can install as many applications as you like on a Digital Ocean platform, whereas with us you just get the 1 WordPress site. However, that has enabled us to build a stack that vastly outperforms them when it comes to load testing.

Overall though, we’d say that either our hosting or Cloudways is probably your best bet based on these tests… although you could do a lot worse than using Kinsta or GoDaddy.

Our take on how WPMU DEV Hosting did.

Even though WPMU DEV didn't come out on top in terms of performance, we're still rapt with the results.

Overall, we were really pleased with how WPMU DEV Hosting fared against the competition.

But that doesn’t mean that we can’t do better. In fact it’s energized us to try harder and get you better results.

Specifically we’d like to improve:

  • Our pricing… we’re working to offer you an even more affordable plan that delivers similar results (and better than our competitors).
  • Our TTFB… we’re adding new locations as I type this (Australia we’re coming for ya soon) that should improve our overall speed.
  • Our overall offering… in addition to all of the above, we’re hoping to provide you, by the end of the year, a managed WP platform for free on top of this.

As amazing as it would have been to take out first place and rule everything, in the grand scheme of things, we're still new to hosting (just over a year old, in fact!). To already be up there with the best in the biz feels great, and we're excited about doing better.

Some other key takeaways from this host performance testing experience:

  1. We feel like a lot of hosts rely too heavily on caching or CDN mechanisms to save them, which gives you an unrealistic feel for the capacity of your hosting in a genuine, dynamic sense… anyone can serve a static HTML page to a bazillion visitors.
  2. TTFB is hard to measure fairly; it'd be great if more hosts let us know *where* they were hosting you for their base plan.
  3. We reckon the number of clients your server can handle is MORE important than the speed at which you're serving them. Back to our bar analogy: Would you rather serve 140 people in a timely manner? Or serve 40 at a slightly faster pace before a 41st enters, and you're forced to close and deny more potential customers?

Check out the full comparisons of each host vs. WPMU DEV Hosting.

Our comparison page gives you a full view of WPMU DEV vs. a range of other hosting options.

As touched on earlier, when comparing hosts it’s important to take EVERYTHING into account, not just performance.

So at the same time as running these performance tests, we also put together some insightful hosting comparison pages which square DEV hosting off against all the hosts mentioned above.

What’s great about these pages is that as well as the performance results, we’ve also included up to date feature and cost comparison tables you can use as reference.

That way you get a well-rounded idea of what host is going to suit you or your business best. So definitely check them out if you get a chance.

Let’s do this more often…

And that’s all there is to it.

We hope you’ve enjoyed this inside look at how we tested WPMU DEV Hosting.

Our team has taken a lot of valuable insights from this experience, and we hope you did too.

Anything you’d have us do differently? Were there some big hosting players we left off the list?

Let us know below.

The whole point of this process has been to be completely fair and transparent with all of our methods and findings. And if you think there's a better (or fairer) way we could have tested, please let us know; we're open to discussing anything and everything in the comments!

But in fact, you really don’t even have to take our word for it…

See how WPMU DEV Hosting performs for yourself.

If our findings have piqued your interest, feel free to run your own tests following our methodology (or any other you prefer).

Check out our hosting plans or take a WPMU DEV membership (incl. 1 Bronze-level site) for a free 7-day trial.

Want to test for longer than 7 days? Everything WPMU DEV comes with an automatic 30-day money-back guarantee.

Until the next round of hosting testing.✌

How To Accurately Test Your WordPress Host (For Free)

Take a behind the scenes look at how we performance-tested our hosting against some of the biggest WordPress hosts on the web.

What started as a simple in-house exercise to see how our hosting measured up, quickly turned into a fascinating journey of self-discovery.

A journey we’ve decided to share with you, dear blog readers.

After all, we pride ourselves on honesty and integrity round these parts. And once we decided we’d bring you along for the ride – one of the main goals (apart from kicking a**!) was to be completely open and transparent. Both with the results published, and our testing methods.

That way you can trust everything is legit and nothing has been swayed in our favor (which benefits no one BTW).

So that’s what you’re getting in this article.

An inside look into how one of our in-house experts tested WPMU DEV hosting against some of the most popular platforms in the biz.

Follow along and feel free to recreate our methodology for yourself.

*BTW, all of the tools mentioned in this article are completely free!

Here’s how it all went down…

Dev Man has his work cut out for him in this battle of the hosts.

The first step was to obviously create accounts with the hosting providers we wanted to pit WPMU DEV against.

Speaking of, here are the brave hosting providers DEV battled in this comparison (you'll recognize ALL of them, no host dodging here): GoDaddy, Flywheel, WP Engine, Cloudways, SiteGround, BlueHost, Kinsta, and HostGator.

To make the testing as fair as possible, we compared the base level plans of each hosting provider.

We also used the same basic test website and added it to each hosting plan.

Here’s a peek at the test site we used (dog lovers prepare to “awwww”):

We tested every host with this simple (and darn cute!) pet website.

Time to get [host] testing!

Now for the fun part.

Once we’d established the basic (and fair) comparison points, it was time to start the testing process.

We wanted to see how each hosting server performed under pressure. After all, the last thing you want is your server to fail if you have a sudden influx of visitors.

We also wanted to test the speed of each host, as it’s important to serve your clients in a timely manner or they might get frustrated and click away.

So we ran two primary performance tests on each host:

  1. A hosting load test.
  2. A speed (TTFB) test.

Here’s how both tests unfolded, starting with the hosting load test:

Testing how many parallel users each hosting server could handle.

For this load test, we used Loader.io (https://loader.io/), a free load-testing service that allows you to stress-test your web apps and APIs with thousands of concurrent connections.

Loader.io is the place to visit if you want to see what your host is really made of.

Loader.io allows you to run three different kinds of tests:

1. "Clients per test" – You specify the total number of clients to connect over the duration of the test.

2. "Clients per second" – Similar to "Clients per test," but instead of specifying the total, you specify the number of clients to start each second.

3. "Maintain client load" – This test allows you to specify a from and a to value for clients.

Since we were aiming to test how each hosting server coped under user pressure, we chose to run the "Maintain client load" test.

As mentioned, this test works by allowing you to specify a from and a to value.

What this means is that if you specify “0” and “2000” for example, the test will start with 0 clients and increase up to 2,000 simultaneous clients by the end.

Setting the client load test boundaries.

When running each load test, we set a max limit of 5,000 clients. We found this to be an appropriate limit – as most hosts didn't end up reaching 1,000 clients anyway.

All of the tests ran for 5 minutes, and the error threshold was set to 1%, so each test stopped as soon as errors started to appear. These errors include timeouts, 400/500 responses, and network errors (all counting toward the 1%).

We chose 1% as the lowest possible value so the test would stop immediately and give the most accurate reading of max parallel clients.

This is important because if we had the fail setting at 50%, for example, parallel client numbers would be a lot higher, but only because more users were being allowed through (due to the higher error setting).

In reality, those users shouldn't count, as they would've got an error response – meaning they were essentially lost visitors.
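To make the mechanics concrete, here's a toy version of a "maintain client load" test in TypeScript. It is only a sketch of the idea, not Loader.io's engine or API: it ramps clients up toward a maximum and stops once errors exceed the 1% threshold described above. The URL and limits are placeholders:

// A toy "maintain client load" test: ramp from 0 toward maxClients
// over the test window and stop once errors exceed 1% of responses.
async function loadTest(url: string, maxClients = 5000, durationMs = 5 * 60_000) {
  const start = Date.now();
  let ok = 0;
  let failed = 0;

  const errorRate = () => failed / Math.max(ok + failed, 1);

  // Each client keeps requesting the page until time runs out
  // or the error threshold is crossed.
  const client = async () => {
    while (Date.now() - start < durationMs && errorRate() <= 0.01) {
      try {
        const res = await fetch(url);
        res.ok ? ok++ : failed++;
      } catch {
        failed++; // timeouts and network errors count too
      }
    }
  };

  // Ramp up: add one client at a time, spread evenly over the window.
  const clients: Promise<void>[] = [];
  while (
    Date.now() - start < durationMs &&
    clients.length < maxClients &&
    errorRate() <= 0.01
  ) {
    clients.push(client());
    await new Promise((r) => setTimeout(r, durationMs / maxClients));
  }

  await Promise.all(clients);
  console.log({ parallelClients: clients.length, ok, failed });
}

loadTest("https://example.com/");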

The measurements we took into account.

With this particular test, we were most concerned with the “Response Count,” and “Parallel Client” metrics.

Response Count shows you the overall success/failed responses.

Parallel Clients measures the number of users the server can handle at one time before maxing out.

Why is the number of parallel clients a server can handle important?

Before we continue, let's break down this idea of "parallel clients" a little further…

In simple terms, max parallel clients is the number of people who can send the first HTTP request to your site at exactly the same time.

For example, let’s say your max number of parallel clients was 50. This means 50 people can access the site at the exact same time before the server crashes.

So if 60 people try to access it at the same time, the server will restart and show an internal server error for the next few minutes while it gets back up and running – meaning you will lose visitors.

Here’s a good analogy we like to use:

“If you prefer to have a bar serving beer to 10 clients and then closing it down because the 11th started a fire, fine by us.”

“We’d prefer a bar that serves 140 people in a timely manner. Even if it is a tad slower.”

Basically, it’s worth having a host with a higher parallel client number (even if the response time is a little slower) because having less parallel client capability puts you at more risk of your server failing and losing visitors.

Watch a simulation of one of the load tests.

Another cool thing about Loader.io is it lets you watch a simulation of each test and how it all went down.

Watch an example of how WPMU DEV’s load test turned out here.

Next, we put the speed of each host to the test.

To test speed we used KeyCDN’s performance testing tool.

In a nutshell, the tool tests and measures the performance of any URL from 10 different locations around the world.

There isn't a lot to the test itself: simply paste in the URL you want to test and hit the button. Remember, it's also free, so you can use it for your own testing.

KeyCDN's performance testing tool provided a simple way to test each host's TTFB.

The results you get back give a breakdown of the loading times and HTTP response headers, as below:

A nice breakdown of your host's speed and performance by location.

Looking at the table above, the metric we were most interested in for this test was “TTFB.”

TTFB measures the time from a client making an HTTP request to receiving the first byte of data from the server.
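If you'd rather measure this yourself from a script rather than a web tool, here is a minimal sketch in TypeScript for Node. It treats the arrival of the response status line and headers as the first byte, which is a reasonable proxy; the URL is a placeholder:

import { request } from "node:https";

function measureTTFB(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const req = request(url, (res) => {
      // The response callback fires once the status line and
      // headers have arrived: a good proxy for the first byte.
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      res.destroy(); // we only need the first byte, not the body
      resolve(ms);
    });
    req.on("error", reject);
    req.end();
  });
}

measureTTFB("https://example.com/").then((ms) =>
  console.log(`TTFB: ${ms.toFixed(0)} ms`)
);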

The big problem with comparing TTFB results…

The only problem is, TTFB (or the speed of a host in general) isn't so straightforward to compare. This is because the speed will vary depending on the location of the host's server in relation to the user.

With WPMU DEV hosting, for example, our server is located in the Netherlands, which means the TTFB reading from Amsterdam is always going to be the strongest.

So in order to be fair to all the hosts involved, we chose to present the TTFB readings in two different ways:

  1. "Average TTFB" (Geo-Optimized) – The lowest (a.k.a. best) TTFB reading out of all the locations tested.
  2. "Average TTFB" (All Locations) – The average TTFB time across all the tested locations.

Levelling the playing field even further.

Another important aspect of our testing is that all tests were run WITHOUT taking caching into consideration.

Basically, this means we tested the hosting servers themselves, not factoring in any caching or CDN implementations each host may have. This was done by running the tests logged in to WordPress so that caching is bypassed.

Why we think it’s better to test without caching (or a CDN) enabled.

Sorry Dev Man, no caching allowed with these tests.

In our opinion, comparing full-page cache performance is not a good idea in a situation like this.

We believe this to be true for a couple of reasons:

  1. Bypassing cache allows you to test the performance of the hosting servers themselves. This is important as it means you don’t have to rely on caching mechanisms (more on why this is important below).
  2. Testing with cache doesn’t take “dynamic” website actions into account.

Any hosting platform can put a CDN in front of their site, tell it to cache everything, and then claim to give you insanely fast and scalable sites.

The problem is, this is not usually practical in the real world, as WordPress and many of its plugins are meant to be dynamic.

For example, caching is a great way to speed up simple sites or pages, like an "About" page, which seldom changes and for the most part wouldn't have much live or dynamic action happening.

Compare this with a full-blown eCommerce store that's constantly performing dynamic actions (live checkout processes, etc.) that bypass cache and hit your server directly.

That’s why you’ll often hear of (or experience) eCommerce stores having issues during big sales or promotions. Their servers aren’t prepared (or haven’t been stress tested!) and can’t handle all the simultaneous dynamic action happening.

Basically, your friend Mr. Cache isn’t always going to be there to save you, so it’s better to view it as an added benefit, and to still ensure your server is going to be able to cope on its own.

So how did WPMU DEV fare against some of the most popular WordPress hosts on the web?

Tune in to part 2 of this article to find out!

Yep sorry, we chose to be like all your fav Netflix shows and leave you with a good ol’ cliffhanger (it’s a brilliant little trick really).

Later in the week we’ll have the full results of our testing for you.

Until then, I’d be remiss if I didn’t invite you to check out our hosting plans, or take a WPMU DEV membership (Incl. 1 Bronze level hosted site) for a free 7 day trial.

That way you can see how our hosting performs for yourself and run your own tests following our methodology (or any other you prefer).

If you’d rather wait for the results before you give DEV hosting a try, that’s cool too.

See you on the next one for the reveal.

How To Reduce Your TTFB and Boost WordPress Page Speed

In this age of instant gratification, nobody likes to wait. This includes search engines and website visitors. Reducing the TTFB (time to first byte) of your WordPress site is essential to keep it ranking well and ensuring visitors don’t click away. Find out why in this article.

In today’s article, I’ll be going into detail on why TTFB is important, as well as the differences between TTFB and loading time.

I’ll also show you how to diagnose why your speed isn’t up to snuff, and improve TTFB with the help (wink, wink) of our hosting and Hummingbird plugin.

By the end, you’ll have a good idea of what you can do to ensure your viewers (or Google) aren’t impatiently tapping their fingers waiting for your website to load.

So WTH(eck) is TTFB?

Though it sounds like a text acronym, such as TTYL, TY, or TBD — it’s much more than that.

Please do not confuse TTFB with a text.

TTFB is a metric that determines when a user’s browser receives the first byte of data from your server.

A web page cannot render for any user until their browser receives that data. In a nutshell, if it's too slow, your user may click away – thus affecting the UX and the SEO of your site.

It’s also a way to troubleshoot a slow website by measuring how fast your website starts loading in a certain location, or with a variety of settings.

TTFB is composed of three main parts:

  1. The time needed to send an HTTP request
  2. Connection time
  3. The time needed to get the first byte of a web page

How TTFB basically works. (Image source: https://varvy.com/pagespeed/ttfb.html)
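For the curious, roughly the same three parts can be read off a browser's Navigation Timing entry for the current page. The grouping of phases below is an approximation of the list above, not an official breakdown:

// Run in the browser console or a page script after load.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  console.log({
    connectionMs: nav.connectEnd - nav.startTime, // redirects + DNS + TCP (+ TLS)
    requestWaitMs: nav.responseStart - nav.requestStart, // send request, wait for first byte
    ttfbMs: nav.responseStart - nav.startTime, // the whole thing
  });
}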

In networking terms, the calculation of TTFB also includes the network latency involved in getting that first byte to the browser.

Many people use this reading as a way to test server speed. This works, but it’s only part of the big picture.

With a CMS (content management system) like WordPress, the server must do all CMS computations necessary to produce content.

The PHP service has to query your MySQL database, retrieve content, compute the appropriate HTML output, and finally return it to the site's visitor. Whew!

So, if you have a lagging CMS, those steps can take time and you might get back some pretty awful TTFB results. And that doesn’t always mean your hosting server is sluggish.

Many factors can affect your TTFB. With WordPress, often it’s outdated plugins, old themes, or clunky ads that can impact performance.

To Be, or Not to TTFB

The next question you might be asking yourself is: “Is TTFB really that important?”

Well, to start with, fast loading times and speed are important for SEO (by the way, there's a difference between load time and TTFB, which I'll get to next). A quicker website can also increase conversions.

Plus, Google has made some big algorithm changes in the past few years that emphasize speed. So, yes — TTFB is important.

Of course, it’s not the only factor when it comes to making your WordPress website more efficient.

Quality content, design, simplicity, location, and other variables can affect the quality and rank of your site.

But, for usability, SERPs, and staying ahead of the game, it's important to be aware of TTFB, monitor it, and keep it in good standing.

Get a Load of This

One thing to note about TTFB is that it shouldn't be confused with load times, as they are two very different things.

As already mentioned, TTFB is when a user’s browser receives the first byte of data from your server.

Load time, on the other hand, is described as how long a specific page took to load in its entirety. That goes for all of the CSS, images, scripts, and any third-party resources.

Of course, this means that load time takes longer than TTFB. After all, there's a lot more to the process than just the time it takes to deliver the first byte of data from a server.

In a sense, TTFB is more “behind the scenes” before you see the big picture.

Having a Good Time (To First Byte)?

Now that we know what TTFB is… the next step is knowing what type of website speed you should be aiming for.

In the latest v5 PageSpeed API, the TTFB audit has only a pass or fail option: anything above 600 ms fails, and anything below 600 ms passes.

There are quite a few ways to test your TTFB time with tools such as Sucuri, GTmetrix, and our very own Hummingbird (which we’ll be discussing in detail soon).

But Whoah! Sloooow Down a Minute…

Say you run a report and, uh-oh, your TTFB is slow…

First, it’s a matter of figuring out the underlying problem of what’s causing it.

There are several possible factors behind a TTFB that isn't up to par.

The main culprits are:

  • The amount of traffic
  • Web server configuration
  • Dynamic content creation
  • Network problems
  • Inefficient code on the origin server
  • Database design that results in slow queries
  • An origin server that has reached its capacity

A few of the reasons behind slow TTFB. (Image source: https://varvy.com/pagespeed/ttfb.html)

How To Get Up To Speed

As you can see, there are a variety of reasons why the TTFB might not be up to speed. Some of the reasons you have more control over than others.

One area you DON’T have control over is visitors to your website.

It’s great to have a ton of visitors (yay!). However, that can also lead to servers buckling under pressure and your TTFB takes a hit.

Another example is dynamic content creation. This, on the other hand, is a factor that you DO have control over.

WordPress pages are dynamic, and a few things have to occur between the time the server receives a request and when it offers a response.

You can look at it like this: static content is quickly handed over, while dynamic content needs to be built by running PHP files and interacting with a database before it's handed over.

A few differences between static and dynamic. (Image source: https://varvy.com/pagespeed/ttfb.html)

This is a lot. It can take thousands of interactions to build just one page. And this process happens every time the page is called by a browser.

And it can be a big contributing factor that grinds TTFB to a halt.

Cache Me If You Can

A great way to fix this is to provide cached versions of your pages.

Website caching is used because it provides a much better experience for your visitors. It does this by saving a static copy of your site, so pages load much more quickly.

The most efficient way of doing this is choosing a great host and installing a caching plugin.
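Before we get to plugins, here is the core idea of full-page caching in miniature: render once, store the HTML, and hand out the stored copy until it expires. A toy TypeScript sketch with illustrative names and TTL, not how any particular plugin is implemented:

import { createServer } from "node:http";

const TTL_MS = 60_000; // illustrative one-minute freshness window
const pageCache = new Map<string, { html: string; at: number }>();

// Placeholder for the expensive CMS work: DB queries, templating…
async function renderPage(url: string): Promise<string> {
  return `<html><body>Rendered ${url} at ${new Date().toISOString()}</body></html>`;
}

createServer(async (req, res) => {
  const url = req.url ?? "/";
  const hit = pageCache.get(url);

  if (hit && Date.now() - hit.at < TTL_MS) {
    res.end(hit.html); // fast path: no rendering work at all
    return;
  }

  const html = await renderPage(url);
  pageCache.set(url, { html, at: Date.now() });
  res.end(html);
}).listen(3000);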

Humming To a New Tune (Up)

This is where our plugin Hummingbird can help.

Hummingbird is here to save the day!

Hummingbird provides efficient page, browser, RSS, full-page, and Gravatar caching.

Beyond that, it scans your site for potential speed improvements and helps improve your Google PageSpeed Insights score (and more).

It’s a great tool to analyze the WHY behind your site being slow, and recommendations on how to fix it in a full audit and report.

So, when it comes time for faster TTFB, Hummingbird can tell you exactly where you’re at.

Let’s Take a Quick Look at How You Can Check Your TTFB with Hummingbird

When you activate the plugin, the first thing you'll want to do is run a test. You can do this in just a few clicks and in about a minute.

From the dashboard under Hummingbird, click on Performance Test. From this point, click on New Test.

Hummingbird performing a test.

It takes only a few moments for her to run the test and get you the results.

The results from the performance test.

This test will show you all the areas that can be improved upon, how to do it, and also all of the audits that passed. From this, you get a score out of 100. With this run, my site scored a 96. Not too bad.

Our concern today is TTFB results, so let’s check that out.

After running the test, click on Audits.

From there you’ll see all of the audits. Just scroll down until you see the TTFB audit.

Each audit has a dropdown showing detailed information: an overview, the score, and whether you need to improve.

A look at the TTFB report in Hummingbird.

As you can see, our TTFB scored 100 and took 540 ms.

Nice!

With a 5-star rating and over 100,000 downloads, Hummingbird is completely free. However, if you’re really serious about optimizing your site, there’s also Hummingbird Pro which comes with a WPMU DEV membership.

Also, this plugin works great with our other performance plugin Smush.

Smush optimizes your images by turning on lazy load, resizing, compressing, and improving your Google Page Speed — which helps with TTFB.

With Smush Pro, you can also decrease TTFB with its Content Delivery Network (CDN).

That means, if you have a WordPress site that’s serving visitors in many places across the globe, this can reduce your TTFB significantly.

Turn On, Tune (Plugin), Drop Out

Something else to keep in mind is to drop old outdated plugins and update essential ones.

The quality of your plugins can significantly impact your TTFB and cause your WordPress site to become slow (Hummingbird will let you know about the culprits).

Unnecessary plugins can also be canned: if they're not needed for your site, just get rid of them.

Get The Most Out Of Your Host

Last but not least, a great way to reduce TTFB is by choosing an excellent WordPress host.

By using a faster host, you can see a 20% decrease (or more) in TTFB globally, and a 32% decrease in TTFB across the United States and Canada.

Good hosting is also important for ensuring that you have the latest version of PHP.

Combining a fast host with a well developed WordPress site can drastically improve your TTFB score.

This is another area we can help with through our own WPMU DEV hosting. You see, with us there's no shared hosting and no shared IPs. This keeps your site completely isolated and separated from any other sites.

It also includes object caching by default. Our hosting is optimized for WordPress and blazing fast, which is exactly what helps lower your – you guessed it — TTFB.

Plus, if you weren’t already aware, we’ve officially labeled April #HostingMonth, and to celebrate we’re giving away a cool $10K WPMU DEV credit!

Subscribe to our blog using the form below this article (you can’t miss it!) to automatically get yourself in the draw – and check out our announcement post for more about our giveaways.

We’re also offering new members 3-month FREE WPMU DEV trials. Giving you a plethora of time to test out our hosting, and everything else a membership has to offer if you’re not signed up yet.

*Unlock your 3 month free trial coupon here.

TTFB (Time To Finish Blog)

With all that I’ve gone over in this post, you can see that when it comes to achieving a faster website, there are quite a few solutions, including:

  • Effective caching
  • Keeping your PHP up to date
  • Choosing a great hosting provider
  • And more!

All of these help with achieving (and keeping) a good TTFB score and scoring big with rankings — and visitors. Plus, it's fairly simple to get TTFB in a good range with all of the tools provided here.

TTFB isn’t the only factor for SEO and won’t alone earn you a top spot on the SERPs, but slower speeds will prevent you from ranking higher. So, optimizing your site for peak performance is always a win.

I won’t keep you waiting any longer. Go for it and get your site’s TTFB up to speed.

TTYL.

How to Properly Run a Website Speed Test (8 Best Tools)

Do you want to run a website speed test? Most beginners don't know where to begin or what to look for in a website speed test.

There are a ton of online website speed test tools that you can use. However, all of them present results in a way that can be incomprehensible for non-tech-savvy users.

In this article, we’ll show you how to properly run a website speed test and the best tools to run your speed tests.

Best Tools to Run a Website Speed Test

There are a lot of free and paid website speed test and performance monitoring tools that you can use. Each one of them has some really cool features that distinguish them.

You don’t need to just test your website with one tool. You can use multiple tools and run multiple tests to be thorough.

However, we recommend using these tools simply to improve your website's performance. Trying to achieve a perfect grade or score is extremely difficult and, in most cases, impossible for real-world, functioning websites.

Your goal should be to improve your page load speed for your users so that they can enjoy a faster and more consistent user experience on your website.

Having said that, let’s take a look at the best tools to run a website speed test.

1. IsItWP Website Speed Test Tool

IsItWP’s free website speed test tool is the most beginner-friendly website speed testing tool. It allows you to quickly check your website performance, run multiple tests, and drill down the results to find out what’s slowing down your website.

You also get improvement suggestions neatly organized. You can click on each category to see the steps you can take to troubleshoot performance issues. The website also offers server uptime monitoring and other useful tools for website owners.

2. Pingdom

Pingdom is one of the most popular website performance monitoring tools. It is easy to use and allows you to select different geographical locations to run a test, which is really handy.

The results are presented in an easy-to-understand overview, followed by the detailed report. You get performance improvement suggestions at the top and a list of individual resources as they loaded.

3. Google PageSpeed Insights

Google PageSpeed Insights is a website performance monitoring tool created by Google. It gives you website performance reports for both mobile and desktop views. You can switch between these reports to find issues common to both, as well as issues Google recommends fixing in the mobile view.

You also get detailed recommendations for each issue, which is helpful for developers. However, the tool itself is a bit intimidating for beginners and non-developer users.

4. GTmetrix

GTmetrix is another powerful website speed testing tool. It allows you to test your website using popular tools like PageSpeed and YSlow. You can change the geographic location and browser by creating an account.

It shows detailed reports with a brief summary of the results. You can switch between the two tools and view recommendations. Clicking on each recommendation will provide you with more details.

5. WebPageTest

WebPageTest is another free online speed test tool you can use. It is a bit more advanced than some of the other tools on our list, but it does allow you to choose a browser and geographic location for your tests.

By default, it runs the test 3 times to get your website speed test results. It shows a detailed view of each result, which you can click to expand into the full report.

6. Load Impact

Load Impact is slightly different from the other website speed test tools on this list. It allows you to see how your website slows down when more visitors arrive at the same time.

It is a paid service with a limited free test, which allows you to send 25 virtual users within 3 minutes. The paid version allows you to test larger traffic loads. This helps you test website speed while also seeing how increased traffic affects your website.

7. Uptrends

Uptrends is another free website speed test tool. It allows you to select a geographic region and browser, and to switch between mobile and desktop tests.

Results are simple and easy to understand, and the summary also shows your Google PageSpeed score. You can scroll down for details and comb through your resources to understand the performance issues.

8. Byte Check

Byte Check is another free website response-time checker. It is made specifically to check TTFB (time to first byte), which is the time your website takes to deliver the first byte of data back to the user's browser. It is a highly effective way to test how fast your WordPress hosting server is.

You can use any of the tools mentioned above to check your website speed and performance. However, simply running the tests alone would not help you much.

You’ll need to learn how to run these tests properly and use the data to optimize your website.

How to Properly Run a Website Speed Test

Running website speed tests is not guaranteed to tell you exactly how your website performs.

You see, the internet is like a highway. Sometimes there is more traffic or congestion which may slow you down. Other times, everything is clear and you can run through it much quicker.

There are several other factors involved which would affect the quality and accuracy of your results. It is important to run these tests thoroughly before you start analyzing the data.

Let’s see how to properly run a website speed test to get more accurate results.

1. Run Multiple Tests

There are multiple factors that can affect your test. Even though most website speed test tools run over the cloud at the fastest internet speeds, each test would show you slightly different results.

The most important difference you will notice is the time it took to download the complete webpage. We recommend running at least 3 tests to get a more accurate picture.

You can then take an average of the results and use it to decide whether or not your website needs improvement.
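For instance, a few lines of TypeScript are enough to average your readings; the median is often more robust than the mean when one run hits a congested route (the numbers below are made up):

const runs = [412, 388, 431]; // e.g., three TTFB readings in ms

const mean = runs.reduce((a, b) => a + b, 0) / runs.length;
const median = [...runs].sort((a, b) => a - b)[Math.floor(runs.length / 2)];

console.log({ mean: Math.round(mean), median }); // { mean: 410, median: 412 }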

2. Test from Different Geographic Locations

If most of your customers visit your website from Asia, then testing your website speed using servers located in the USA would not be ideal.

The test results will show you a different user experience than what your actual users are feeling when they visit your website.

This is why you need to use Google Analytics to see where your users are coming from. After that, use that information to select a geographic region for your tests.

For example, if you learned that most of your website users are coming from Europe, then choosing a test server in Germany will give you the closest results.

If your website visitors are from all over the world, then you can run multiple tests to find out how your website performance varies for different regions.

3. Make Sure Your Website Caching is Turned On

Make sure that your website caching is turned on before running the tests. This would allow you to test website caching and how effective it is in improving the performance.

Now, the problem is that some caching solutions only store a cached copy when a user requests the page. This means the cache takes some time to build and may expire by the time you run the tests.

This is why we recommend WP Rocket. It is the best WordPress caching plugin and lets you set up your WordPress cache with a few clicks, without learning technical stuff.

The best part is that it proactively builds your website cache, which significantly improves your website performance. See our guide on how to set up WordPress cache using WP Rocket for more details.

4. Check the Performance of Your Website Firewall / CDN Service

While WordPress caching plugins can do a lot, they definitely have their limitations. For example, they cannot block DDoS attacks or brute-force attempts, and they do nothing against spambots, which means a lot of your server resources get wasted.

This is where you need Sucuri. It is the best WordPress firewall plugin which improves your server performance by blocking malicious requests.

Now, normally all your website files are served from the same server. You can improve this by adding a CDN service to your website. We recommend using MaxCDN (by StackPath), which is the best CDN solution for beginners.

A CDN service allows you to serve static website files like images, stylesheets, and scripts through a network of servers spread around the globe. This reduces the server load on your website, makes it load faster, and improves user experience for all your users.

Turning on your CDN service and the firewall will improve your test results significantly.

Understanding Website Speed Test Results

The most important parameter that you should look into is the time it takes your website to load.

This is the parameter that affects your users the most. If your website takes longer to load, then users may decide to hit the back button, have a bad impression of your brand, and consider your website of low quality.

If your website is taking longer than 2 seconds to load, then look at the drill-down reports. Find out which resources are taking longer to load.

Usually, these are images, stylesheets, scripts loading from third-party websites, video embeds, and so on. You would want to make sure that those images are served from the cache or your CDN service.

You would also want to pay attention to how long your server takes to respond to each request and how much time it takes to deliver the first byte.

You would also want to make sure that browser compression (also called gzip compression) is working. This reduces the size of the files transferred between your server and the user's browser by compressing them.
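If you run your own Node server, one common way to switch this on is the compression middleware for Express (an assumption about your stack; most servers and hosts expose an equivalent setting):

import express from "express";
import compression from "compression";

const app = express();
app.use(compression()); // gzip/deflate responses for clients that accept it

app.get("/", (_req, res) => res.send("<html>…</html>"));
app.listen(3000);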

If your page has lots of images and videos, then you may want to consider deferred loading techniques, also called lazy loading. This defers content until a user scrolls down, loading up front only the content that is visible on the user's screen.
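A hand-rolled version of this uses IntersectionObserver; the data-src convention in this sketch is ours, not a standard:

// Images carry their real source in data-src and are only
// loaded once they approach the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? "";
    obs.unobserve(img);
  }
});

document
  .querySelectorAll<HTMLImageElement>("img[data-src]")
  .forEach((img) => observer.observe(img));

Modern browsers also support a native loading="lazy" attribute on images, which achieves much the same thing without any JavaScript.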

As always, you definitely want to make sure your images are optimized for web by using an image compression tool.

The second important parameter you would want to test is the TTFB (time to first byte). If your web server is continuously showing a slower time to the first byte, then you may need to talk with your web hosting company.

All top WordPress hosting companies like Bluehost, SiteGround, and WP Engine have their own caching solutions. Turning on your host’s caching solution may significantly improve TTFB results.

We hope this article helped you learn how to properly run a website speed test and the best tools to run your tests. You may also want to follow our step by step WordPress speed and performance guide to boost your website speed.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

A Look at JAMstack’s Speed, By the Numbers

People say JAMstack sites are fast — let's find out why by looking at real performance metrics! We'll cover common metrics, like Time to First Byte (TTFB) among others, then compare data across a wide selection of sites to see how different slices of those sites stack up.

First, I’d like to present a small analysis to provide some background. According to the HTTPArchive metrics report on page loading, users wait an average of 6.7 seconds to see primary content.

First Contentful Paint (FCP) - measures the point at which text or graphics are first rendered to the screen.

The FCP distribution for the 10th, 50th and 90th percentile values as reported on August 1, 2019.

If we are talking about engagement with a page (Time to Interactive), users wait even longer. The average time to interactive is 9.3 seconds.

Time to Interactive (TTI) - the time at which a user can interact with a page without delay.

TTI distribution for the 10th, 50th and 90th percentile values as reported on August 1, 2019.

State of the real user web performance

The data above is from lab monitoring and doesn't fully represent the real user experience. Real user data taken from the Chrome User Experience Report (CrUX) paints an even broader picture.

I'll use data aggregated from users on mobile devices. Specifically, we will look at the following metrics:


Time To First Byte

TTFB represents the time the browser waits to receive the first bytes of the response from the server. TTFB takes from 200 ms to 1 second for users around the world. That's a pretty long time to receive the first chunks of the page.

TTFB mobile speed distribution (CrUX, July 2019)

First Contentful Paint

FCP happens after 2.5 seconds for 23% of page views around the world.

FCP mobile speed distribution (CrUX, July 2019)

First Input Delay

FID metrics show how fast web pages respond to user input (e.g. click, scroll, etc.).

CrUX doesn't have TTI data due to various restrictions, but it has FID, which can reflect page interactivity even better. Over 75% of mobile user experiences have an input delay under 50 ms, meaning those users didn't experience any jank.

FID mobile speed distribution (CrUX, July 2019)

You can use the queries below and play with them on this site.

Data from July 2019
[
    {
      "date": "2019_07_01",
      "timestamp": "1561939200000",
      "client": "desktop",
      "fastTTFB": "27.33",
      "avgTTFB": "46.24",
      "slowTTFB": "26.43",
      "fastFCP": "48.99",
      "avgFCP": "33.17",
      "slowFCP": "17.84",
      "fastFID": "95.78",
      "avgFID": "2.79",
      "slowFID": "1.43"
    },
    {
      "date": "2019_07_01",
      "timestamp": "1561939200000",
      "client": "mobile",
      "fastTTFB": "23.61",
      "avgTTFB": "46.49",
      "slowTTFB": "29.89",
      "fastFCP": "38.58",
      "avgFCP": "38.28",
      "slowFCP": "23.14",
      "fastFID": "75.13",
      "avgFID": "17.95",
      "slowFID": "6.92"
    }
  ]
BigQuery
#standardSQL
  SELECT
    REGEXP_REPLACE(yyyymm, '(\\d{4})(\\d{2})', '\\1_\\2_01') AS date,
    UNIX_DATE(CAST(REGEXP_REPLACE(yyyymm, '(\\d{4})(\\d{2})', '\\1-\\2-01') AS DATE)) * 1000 * 60 * 60 * 24 AS timestamp,
    IF(device = 'desktop', 'desktop', 'mobile') AS client,
    ROUND(SUM(fast_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS fastFCP,
    ROUND(SUM(avg_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS avgFCP,
    ROUND(SUM(slow_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS slowFCP,
    ROUND(SUM(fast_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS fastFID,
    ROUND(SUM(avg_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS avgFID,
    ROUND(SUM(slow_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS slowFID
  FROM
    `chrome-ux-report.materialized.device_summary`
  WHERE
    yyyymm = '201907'
  GROUP BY
    date,
    timestamp,
    client
  ORDER BY
    date DESC,
    client

State of Content Management Systems (CMS) performance

CMSs should have become our saviors, helping us build faster sites. But looking at the data, that is not the case. The current state of CMS performance around the world is not so great.

TTFB mobile speed distribution comparison between all web and CMS (CrUX, July 2019)
Data from July 2019
[
    {
      "freq": "1548851",
      "fast": "0.1951",
      "avg": "0.4062",
      "slow": "0.3987"
    }
  ]
BigQuery
#standardSQL
  SELECT
    COUNT(DISTINCT origin) AS freq,
      
    ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS fastTTFB,
    ROUND(SUM(IF(ttfb.start >= 200 AND ttfb.start < 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS avgTTFB,
    ROUND(SUM(IF(ttfb.start >= 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS slowTTFB
  
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
  JOIN (
    SELECT
      url,
      app
    FROM
      `httparchive.technologies.2019_07_01_mobile`
    WHERE
      category = 'CMS'
    )
  ON CONCAT(origin, '/') = url
  ORDER BY
    freq DESC

And here are the FCP results:

FCP mobile speed distribution comparison between all web and CMS (CrUX, July 2019)

At least the FID results are a bit better:

FID mobile speed distribution comparison between all web and CMS (CrUX, July 2019)
Data from July 2019
[
    {
      "freq": "546415",
      "fastFCP": "0.2873",
      "avgFCP": "0.4187",
      "slowFCP": "0.2941",
      "fastFID": "0.8275",
      "avgFID": "0.1183",
      "slowFID": "0.0543"
    }
  ]
BigQuery
#standardSQL
  SELECT
    COUNT(DISTINCT origin) AS freq,
    ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fastFCP,
    ROUND(SUM(IF(fcp.start >= 1000 AND fcp.start < 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS avgFCP,
    ROUND(SUM(IF(fcp.start >= 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS slowFCP,
    ROUND(SUM(IF(fid.start < 50, fid.density, 0)) / SUM(fid.density), 4) AS fastFID,
    ROUND(SUM(IF(fid.start >= 50 AND fid.start < 250, fid.density, 0)) / SUM(fid.density), 4) AS avgFID,
    ROUND(SUM(IF(fid.start >= 250, fid.density, 0)) / SUM(fid.density), 4) AS slowFID
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(first_contentful_paint.histogram.bin) AS fcp,
    UNNEST(experimental.first_input_delay.histogram.bin) AS fid
  JOIN (
    SELECT
      url,
      app
    FROM
      `httparchive.technologies.2019_07_01_mobile`
    WHERE
      category = 'CMS'
    )
  ON CONCAT(origin, '/') = url
  ORDER BY
    freq DESC

As you can see, sites built with a CMS don’t perform much better than the web overall.

You can find the performance distribution across different CMSs in this HTTPArchive forum discussion.
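
That breakdown is essentially the same query grouped by the app column from the httparchive.technologies subquery. Here’s a sketch of the TTFB version:

BigQuery
#standardSQL
  SELECT
    app,
    COUNT(DISTINCT origin) AS freq,
    -- share of fast (sub-200ms) TTFB experiences per CMS
    ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS fastTTFB
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
  JOIN (
    SELECT
      url,
      app
    FROM
      `httparchive.technologies.2019_07_01_mobile`
    WHERE
      category = 'CMS'
    )
  ON CONCAT(origin, '/') = url
  GROUP BY
    app
  ORDER BY
    freq DESC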

E-commerce websites, a good example of sites that are typically built on a CMS, have really bad stats for page views (a query sketch for pulling numbers like these follows the list):

  • ~40% of page views wait around 1 second or more for TTFB;
  • ~30% wait more than 1.5 seconds for FCP;
  • ~12% experience lag on page interaction.
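
Numbers like these can be pulled by mirroring the CMS queries above. One caveat: I’m assuming the technology category is labeled 'Ecommerce', so double-check the actual category names in the httparchive.technologies table. A sketch for the TTFB share:

BigQuery
#standardSQL
  SELECT
    COUNT(DISTINCT origin) AS freq,
    -- share of experiences waiting 1 second or longer for TTFB
    ROUND(SUM(IF(ttfb.start >= 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS slowTTFB
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
  JOIN (
    SELECT
      url,
      app
    FROM
      `httparchive.technologies.2019_07_01_mobile`
    WHERE
      -- 'Ecommerce' is an assumed category label
      category = 'Ecommerce'
    )
  ON CONCAT(origin, '/') = url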

I’ve had clients who requested support for IE10 and IE11 because traffic from those users represented 1% of the total, which equalled millions of dollars in revenue. Now calculate your losses if 1% of your users leave immediately and never come back because of bad performance. If users aren’t happy, the business will be unhappy, too.

To get more details about how web performance correlates with revenue, check out WPO Stats. It’s a list of case studies from real companies and the success they saw after improving performance.

JAMstack helps improve web performance

Credit: Snipcart

With JAMstack, developers do as little rendering on the client as possible, instead relying on server infrastructure for most things. Not to mention, most JAMstack workflows are great at handling deployments and helping with scalability, among other benefits. Content is stored statically on static file hosts and served to users via a CDN.

Mathieu Dionne’s "New to JAMstack? Everything You Need to Know to Get Started" is a great place to become more familiar with JAMstack.

I spent two years working with one of the popular e-commerce CMSs, and we had a lot of problems with deployments, performance, and scalability. The team would spend days fixing them. That’s not what customers want. These are the sorts of big issues JAMstack solves.

Looking at the CrUX data, JAMstack site performance looks really solid. The following values are based on sites served by Netlify and GitHub. There is some discussion on the HTTPArchive forum where you can participate to make the data more accurate.

Here are the results for TTFB:

TTFB mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)
Data from July 2019
[
  {
    "n": "7627",
    "fastTTFB": "0.377",
    "avgTTFB": "0.5032",
    "slowTTFB": "0.1198"
  }
]
BigQuery
#standardSQL
SELECT
  COUNT(DISTINCT origin) AS n,
  ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS fastTTFB,
  ROUND(SUM(IF(ttfb.start >= 200 AND ttfb.start < 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS avgTTFB,
  ROUND(SUM(IF(ttfb.start >= 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS slowTTFB
FROM
  `chrome-ux-report.all.201907`,
  UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
JOIN
  (SELECT url, REGEXP_EXTRACT(LOWER(CONCAT(respOtherHeaders, resp_x_powered_by, resp_via, resp_server)),
      '(netlify|x-github-request)')
    AS platform
  FROM `httparchive.summary_requests.2019_07_01_mobile`)
ON
  CONCAT(origin, '/') = url
WHERE
  platform IS NOT NULL
ORDER BY
  n DESC

Here's how FCP shook out:

FCP mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

Now let's look at FID:

FID mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)
Data from July 2019
[
    {
      "n": "4136",
      "fastFCP": "0.5552",
      "avgFCP": "0.3126",
      "slowFCP": "0.1323",
      "fastFID": "0.9263",
      "avgFID": "0.0497",
      "slowFID": "0.024"
    }
  ]
BigQuery
#standardSQL
  SELECT
    COUNT(DISTINCT origin) AS n,
    ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fastFCP,
    ROUND(SUM(IF(fcp.start >= 1000 AND fcp.start < 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS avgFCP,
    ROUND(SUM(IF(fcp.start >= 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS slowFCP,
    ROUND(SUM(IF(fid.start < 50, fid.density, 0)) / SUM(fid.density), 4) AS fastFID,
    ROUND(SUM(IF(fid.start >= 50 AND fid.start < 250, fid.density, 0)) / SUM(fid.density), 4) AS avgFID,
    ROUND(SUM(IF(fid.start >= 250, fid.density, 0)) / SUM(fid.density), 4) AS slowFID
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(first_contentful_paint.histogram.bin) AS fcp,
    UNNEST(experimental.first_input_delay.histogram.bin) AS fid
  JOIN
    (SELECT url, REGEXP_EXTRACT(LOWER(CONCAT(respOtherHeaders, resp_x_powered_by, resp_via, resp_server)),
        '(netlify|x-github-request)')
      AS platform
    FROM `httparchive.summary_requests.2019_07_01_mobile`)
  ON
    CONCAT(origin, '/') = url
  WHERE
    platform IS NOT NULL
  ORDER BY
    n DESC

The numbers show that JAMstack sites perform the best of the bunch. And the numbers are pretty much the same for mobile and desktop, which is even more impressive!

Some highlights from engineering leaders

Let me show you a couple of examples from some prominent folks in the industry:

JAMstack sites are generally CDN-hosted and mitigate TTFB. Since the file hosting is handled by infrastructure like Amazon Web Services or similar, all sites’ performance can be improved in one fix.

One more real-world investigation shows that delivering static HTML results in better FCP.

Here’s a comparison of all of the results shown above, together:

Mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

JAMstack brings better performance to the web by statically serving pages with CDNs. This is important because a fast back-end that takes a long time to reach users will be slow, and likewise, a slow back-end that is quick to reach users will also be slow.

JAMstack hasn’t won the performance race yet, since the number of sites built with it is nowhere near as large as, say, the number built on a CMS, but the intent to win it is clearly there.

Adding these metrics to a performance budget can be one way to make sure you are building good performance into your workflow (there’s a sketch for checking your origin against the budget after the list). Something like:

  • TTFB: 200ms
  • FCP: 1s
  • FID: 50ms
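
Here’s a sketch of how you might check your own origin against that budget, following the same CrUX query pattern used throughout this article. The origin https://example.com is a placeholder for your own:

BigQuery
#standardSQL
  SELECT
    -- share of real-user experiences that fit each budget line above
    ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS ttfbWithinBudget,
    ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fcpWithinBudget,
    ROUND(SUM(IF(fid.start < 50, fid.density, 0)) / SUM(fid.density), 4) AS fidWithinBudget
  FROM
    `chrome-ux-report.all.201907`,
    UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb,
    UNNEST(first_contentful_paint.histogram.bin) AS fcp,
    UNNEST(experimental.first_input_delay.histogram.bin) AS fid
  WHERE
    -- placeholder origin; swap in yours
    origin = 'https://example.com'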

Spend it wisely 🙂


Editor’s note: Artem Denysov is from Stackbit, which is a service that helps tremendously with spinning up JAMstack sites and more upcoming tooling to smooth out some of the workflow edges with JAMstack sites and content. Artem told me he’d like to thank Rick Viscomi, Rob Austin, and Aleksey Kulikov for their help in reviewing the article.
