Improving The Performance Of Wix Websites (Case Study)

A website’s performance can make or break its success, yet in August 2020, despite many improvements we had previously made, such as implementing Server-Side Rendering (SSR), the percentage of Wix websites with good Google Core Web Vitals (CWV) scores stood at only 4%. It was at this point that we realized we needed to make a significant change in our approach to performance, and that we must embrace performance as part of our culture.

Implementing this change enabled us to take major steps such as updating our infrastructure along with completely rewriting our core functionality from the ground up. We deployed these enhancements gradually over time to ensure that our users didn’t experience any disruptions, but instead only a consistent improvement of their site speed.

Since implementing these changes, we have seen a dramatic improvement in the performance of websites built and hosted on our platform. In particular, the worldwide percentage of Wix websites that receive a good (green) CWV score has increased from 4% to over 33%, an increase of more than 700%. We also expect this upward trend to continue as we roll out additional improvements to our platform.

You can see the impact of these efforts in the Core Web Vitals Technology Report, which is based on data from the Chrome User Experience Report (CrUX) and the HTTP Archive:

These performance improvements provide a lot of value to our users because sites that have good Google CWV scores are eligible for the maximum performance ranking boost in the Google search results (SERP). They also likely have increased conversion rates and lower bounce rates due to the improved visitor experience.

Now, let’s take a deeper look into the actions and processes we put in place in order to achieve these significant results.

The Wix Challenge

Let’s begin by describing who we are, what our use cases are, and what challenges we face.

Wix is a SaaS platform providing products and services for any type of user to create an online presence. This includes building websites, hosting websites, managing campaigns, SEO, analytics, CRM, and much more. It was founded in 2006 and has since grown to have over 210 million users in 190 countries, and hosts over five million domains. In addition to content websites, Wix also supports e-commerce, blogs, forums, bookings and events, and membership and authentication. And Wix has its own app store with apps and themes for restaurants, fitness, hotels, and much more. To support all this, we have over 5,000 employees spread around the globe.

This high rate of growth, coupled with the current scale and diversity of offerings, presents a huge challenge when setting out to improve performance. It’s one thing to identify bottlenecks and implement optimizations for a specific website or a few similar websites, and quite another when dealing with many millions of websites that have such a wide variety of functionality and almost total freedom of design. As a result, we cannot optimize for a specific layout or set of features that are known in advance. Instead, we have to accommodate all of this variability, mostly on-demand. On the positive side, since there are so many users and websites on Wix, improvements that we make benefit millions of websites, and can have a positive impact on the Web as a whole.

There are more challenges for us in addition to scale and diversity:

  • Retaining existing design and behavior
    A key requirement we set for ourselves was to improve the performance of all existing websites built on Wix without altering any aspect of their look and feel. So essentially, they need to continue to look and work exactly the same, only operate faster.
  • Maintaining development velocity
    Improving performance requires a significant amount of resources and effort. And the last thing we want is to negatively impact our developers' momentum, or our ability to release new features at a high rate. So once a certain level of performance is achieved, we want to be able to preserve it without being constantly required to invest additional effort, or slow down the development process. In other words, we needed to find a way to automate the process of preventing performance degradations.
  • Education
    In order to create change across our entire organization, we needed to get all the relevant employees, partners, and even customers up to speed about performance quickly and efficiently. This required a lot of planning and forethought, and quite a bit of trial and error.

Creating A Performance Culture

Initially, at Wix, performance was a task assigned to a relatively small dedicated group within the company. This team was tasked with identifying and addressing specific performance bottlenecks, while others throughout the organization were only brought in on a case-by-case basis. While some noticeable progress was made, it was challenging to implement significant changes just for the sake of speed.

This was because the amount of effort required often exceeded the capacity of the performance team, and also because ongoing work on various features and capabilities often got in the way. Another limiting factor was the lack of data and insight into exactly what the bottlenecks were so that we could know exactly where to focus our efforts for maximum effect.

About two years ago, we came to the conclusion that we could not continue with this approach: in order to provide the level of performance that our users require and expect, we needed to operate at the organizational level, and failing to provide this level of performance would be detrimental to our business and future success. There were several catalysts for this understanding, some due to changes in the Web ecosystem in general, and others to our own market segment in particular:

  • Changes in device landscape
    Six years ago, over 70% of sessions for Wix websites originated from desktops, with under 30% coming from mobile devices. Since then the situation has flipped, and now over 70% of sessions originate on mobile. While mobile devices have come a long way in terms of network and CPU speed, many of them are still significantly underpowered when compared to desktops, especially in countries where mobile connectivity is still poor. As a result, unless performance improves, many visitors experience a decline in the quality of experience they receive over time.
  • Customer expectations
    Over the past few years, we’ve seen a significant shift in customer expectations regarding performance. Thanks to activities by Google and others, website owners now understand that having good loading speed is a major factor in the success of their sites. As a result, customers prefer platforms that provide good performance — and avoid or leave those that don’t.
  • Google search ranking
    Back in 2018 Google announced that sites with especially slow pages on mobile would be penalized. But starting in 2021, Google shifted its approach to instead boost the ranking of mobile sites that have good performance. This has increased the motivation of site owners and SEOs to use platforms that can create fast sites.
  • Heavier websites
    As the demand for faster websites increases, so does the expectation that websites provide a richer and more engaging experience. This includes features like videos and animations, sophisticated interactions, and greater customization. As websites become heavier and more complex, the task of maintaining performance becomes ever more challenging.
  • Better tooling and metrics standardization
    Measuring website performance used to be challenging and required specific expertise. But in recent years the ability to gauge the speed and responsiveness of websites has improved significantly and has become much simpler, thanks to tools like Google Lighthouse and PageSpeed Insights. Moreover, the industry has primarily standardized on Google’s Core Web Vitals (CWV) performance metrics, and monitoring them is now integrated into services such as the Google Search Console.

These changes dramatically shifted our perception of website performance from being just a part of our offerings to being an imperative company focus and a strategic priority, and made it clear that implementing a culture of performance throughout the organization was a must. In order to accomplish this, we took a two-pronged approach. First, at an “all hands” company update, our CEO announced that going forward, ensuring good performance for websites built on our platform would be a strategic priority for the company as a whole, and that the various units within the company would be measured on their ability to deliver on this goal.

At the same time, the performance team underwent a huge transformation in order to support the company-wide prioritization of performance. It went from working on specific speed enhancements to interfacing with all levels of the organization in order to support their performance efforts. The first task was providing education on what website performance actually means and how it can be measured. Once the teams started working off of this knowledge, the work shifted to organizing performance-focused design and code reviews, providing training and education, and supplying tools and assets to support these ongoing efforts.

To this end, the team built on the expertise that it had already gained while working on specific performance projects. And it also engaged with the performance community as a whole, for example by attending conferences, bringing in domain experts, and studying up on modern architectures such as the Jamstack.

Measuring And Monitoring

Peter Drucker, one of the best-known management consultants, famously stated:

“If you can’t measure it, you can’t improve it.”

This statement is true for management, and it’s undoubtedly true for website performance.

But which metrics should be measured in order to determine website performance? Over the years many metrics have been proposed and used, which made it difficult to compare results taken from different tools. In other words, the field lacked standardization. This changed approximately two years ago when Google introduced three primary metrics for measuring website performance, known collectively as Google Core Web Vitals (CWV).

The three metrics are:

  1. LCP: Largest Contentful Paint (measures loading speed)
  2. FID: First Input Delay (measures responsiveness)
  3. CLS: Cumulative Layout Shift (measures visual stability)

CWV have enabled the industry to focus on a small number of metrics that cover the main aspects of the website loading experience. And the fact that Google is now using CWV as a search ranking signal provides additional motivation for people to improve them.

Recommended Reading: “An In-Depth Guide To Measuring Core Web Vitals” by Barry Pollard

At Wix, we focus on CWV when analyzing field data, but also use lab measurements during the development process. In particular, lab tests are critical for implementing performance budgets in order to prevent performance degradations. The best implementations of performance budgets integrate their enforcement into the CI/CD process, so they are applied automatically, and prevent deployment to production when a regression is detected. When such a regression does occur it breaks the build, forcing the team to fix it before deployment can proceed.

There are various performance budgeting products and open-source tools available, but we decided to create our own custom budgeting service called Perfer. This is because we operate at a much larger scale than most web development operations, and at any given moment hundreds of different components are being developed at Wix and are used in thousands of different combinations in millions of different websites.

This requires the ability to test a very large number of configurations. Moreover, in order to avoid breaking builds with random fluctuations, tests that measure performance metrics or scores are run multiple times and an aggregate of the results is used for the budget. In order to accommodate such a high number of test runs without negatively impacting build time, Perfer executes the performance measurements in parallel on a cluster of dedicated servers called WatchTower. Currently, WatchTower is able to execute up to 1,000 Lighthouse tests per minute.
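
Perfer and WatchTower are internal systems, but the underlying approach can be illustrated with Lighthouse’s public Node API. Here is a minimal sketch, assuming an ESM script run as a CI step (the URL and the 2.5-second budget are illustrative, not Wix’s actual values):

import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

// Run Lighthouse several times and use the median LCP, so that a single
// noisy run can't fail the build on its own.
async function medianLcp(url: string, runs = 5): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
    const result = await lighthouse(url, { port: chrome.port, onlyCategories: ["performance"] });
    samples.push(result!.lhr.audits["largest-contentful-paint"].numericValue!);
    await chrome.kill();
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];
}

const LCP_BUDGET_MS = 2500; // illustrative budget
const lcp = await medianLcp("https://example.com/");
if (lcp > LCP_BUDGET_MS) {
  console.error(`Median LCP ${Math.round(lcp)}ms exceeds the ${LCP_BUDGET_MS}ms budget; failing the build.`);
  process.exit(1); // a non-zero exit code blocks deployment in most CI systems
}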

After deployment, performance data is collected anonymously from all Wix sessions in the field. This is especially important in our case because the huge variety of Wix websites makes it effectively impossible to test all relevant configurations and scenarios “in the lab.” By collecting and analyzing RUM data, we ensure that we have the best possible insight into the experiences of actual visitors to the websites. If we identify that a certain deployment degrades performance and harms that experience, even though this degradation was not identified by our lab tests, we can quickly roll it back.

Another advantage of field measurements is that they match the approach taken by Google in order to collect performance data into the CrUX database. Since it is the CrUX data that is used as an input for Google’s performance ranking signal, utilizing the same approach for performance analysis is very important.

All Wix sessions contain custom instrumentation code that gathers performance metrics and transmits this information anonymously back to our telemetry servers. In addition to the three CWV, this code also reports Time To First Byte (TTFB), First Contentful Paint (FCP), Total Blocking Time (TBT), and Time To Interactive (TTI), as well as low-level metrics such as DNS lookup time and SSL handshake time. Collecting all this information makes it possible for us to not only quickly identify performance issues in production, but also to analyze the root causes of such issues. For example, we can determine if an issue was caused by changes in our own software, by changes in our infrastructure configuration, or even by issues affecting third-party services that we utilize (such as CDNs).
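
Wix’s instrumentation is custom, but the same field-collection pattern can be sketched with the open-source web-vitals library and the sendBeacon API (the /telemetry endpoint name here is illustrative):

import { onCLS, onFCP, onFID, onLCP, onTTFB } from "web-vitals";

// Beacons survive page unload, so metrics that finalize late (like CLS)
// still reach the telemetry servers.
function report(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({ ...metric, page: location.pathname });
  if (!navigator.sendBeacon("/telemetry", body)) {
    fetch("/telemetry", { method: "POST", body, keepalive: true }); // fallback
  }
}

onTTFB(report);
onFCP(report);
onLCP(report);
onFID(report);
onCLS(report);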

Upgrading Our Services And Infrastructure

Back when I joined Wix seven years ago, we only had a single data center (along with a fallback data center) in the USA which was used to serve users from all around the world. Since then we’ve expanded the number of data centers significantly, and have multiple such centers spread around the globe. This ensures that wherever our users connect from, they’ll be serviced both quickly and reliably. In addition, we use CDNs from multiple providers to ensure rapid content delivery regardless of location. This is especially important given that we now have users in 190 countries.

In order to make the best possible use of this enhanced infrastructure, we completely redesigned and rewrote significant portions of our front-end code. The goal was to shift as much of the computation as possible off of the browsers and onto fast servers. This is especially beneficial in the case of mobile devices, which are often less powerful and slower. In addition, this significantly reduced the amount of JavaScript code that needs to be downloaded by the browser.

Reducing JavaScript size almost always benefits performance because it decreases the overhead of the actual download as well as parsing and execution. Our measurements showed a direct correlation between the JavaScript size reduction and performance improvements:

Another benefit of moving computations from browsers to servers is that the results of these computations can often be cached and reused between sessions even for unrelated visitors, thus reducing per-session execution time dramatically. In particular, when a visitor navigates to a Wix site for the first time, the HTML of the landing page is generated on the server by Server-Side Rendering (SSR) and the resulting HTML can then be propagated to a CDN.

Navigations to the same site — even by unrelated visitors — can then be served directly from the CDN, without even accessing our servers. If this workflow sounds familiar that’s because it’s essentially the same as the on-demand mechanism provided by some advanced Jamstack services.

Note: “On-demand” means that instead of Static Site Generation performed at build time, the HTML is generated in response to the first visitor request, and propagated to a CDN at runtime.

Similarly to Jamstack, client-side code can enhance the user interface, making it more dynamic by invoking backend services using APIs. The results of some of these APIs are also cached in a CDN as appropriate. For example, in the case of a shopping cart checkout icon, the HTML for the button is generated on the server, but the actual number of items in the cart is determined on the client-side and then rendered into that icon. This way, the page HTML can be cached even though each visitor is able to see a different item count value. If the HTML of the page does need to change, for example, if the site owner publishes a new version, then the copy in the CDN is immediately purged.
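
As a sketch of that shopping cart example, the cached HTML ships with a placeholder badge, and the per-visitor value is filled in on the client (the endpoint and element id are hypothetical):

// The page HTML, including the cart icon, is served from the CDN cache;
// only the item count is personal, so only it is fetched per visitor.
async function hydrateCartCount(): Promise<void> {
  const badge = document.getElementById("cart-count");
  if (!badge) return;
  const res = await fetch("/api/cart/item-count", { credentials: "include" });
  const { count } = (await res.json()) as { count: number };
  badge.textContent = String(count);
}

hydrateCartCount();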

In order to reduce the impact of computations on end-point devices, we moved business logic that does need to run in the browser, for example, business logic that is invoked in response to user interactions, into Web Workers. The code that runs in the browser’s main thread is mostly dedicated to the actual rendering operations. Because Web Workers execute their JavaScript code off of the main thread, they don’t block event handling, enabling the browser to quickly respond to user interactions and other events.

Examples of code that runs in Web Workers include the business logic of various vertical solutions such as e-commerce and bookings. Sending requests to backend services is mostly done from Web Workers, and the responses are parsed, stored, and managed in the Web Workers as well. As a result, using Web Workers can reduce blocking and improve the FID metric significantly, providing better responsiveness in general. In lab measurements, this showed up as improved TBT.
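
A minimal sketch of this split, assuming a bundler that understands the new URL(..., import.meta.url) worker idiom (the file names and endpoint are illustrative):

// worker.ts — business logic runs off the main thread.
self.onmessage = async (e: MessageEvent<{ productId: string }>) => {
  const res = await fetch(`/api/products/${e.data.productId}`); // network and parsing happen here
  const product = await res.json();
  self.postMessage({ title: product.title, price: product.price });
};

// main.ts — the main thread only renders what the worker sends back.
const worker = new Worker(new URL("./worker.ts", import.meta.url));
worker.onmessage = (e) => {
  document.querySelector("#price")!.textContent = e.data.price;
};
worker.postMessage({ productId: "sku-123" });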

Enhanced Media Delivery

Modern websites often provide a richer user experience by downloading and presenting many more media resources, such as images and videos, than ever before. Over the past decade, the median number of image bytes downloaded by websites has, according to the Google CrUX database, increased more than eightfold! This outpaces the median improvement in network speeds during the same period, which results in slower loading times. Additionally, our RUM data (field measurements) shows that for almost ¾ of Wix sessions, the LCP element is an image. All of this highlights the need to deliver images to the browsers as efficiently as possible and to quickly display the images that are in a webpage’s initially visible viewport area.

At the same time, it is crucial to deliver the highest quality of images possible in order to provide an engaging and delightful user experience. This means that improving performance by noticeably degrading visual experience is almost always out of the question. The performance enhancements we implement need to preserve the original quality of images used, unless explicitly specified otherwise by the user.

One technique for improving media-related performance is optimizing the delivery process. This means downloading required media resources as quickly as possible. In order to achieve this for Wix websites, we use a CDN to deliver the media content, as we do with other resources such as the HTML itself. And by specifying a lengthy caching duration in the HTTP response header, we allow images to be cached by browsers as well. This can improve the loading speed for repeat visits to the same page significantly by completely avoiding downloading the images over the network again.
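
For illustration, a long-lived caching policy like the one described comes down to a single response header; here is a minimal Node sketch (the one-year max-age is a typical value, not necessarily what Wix uses):

import http from "node:http";

http.createServer((req, res) => {
  // One year, and "immutable" tells browsers not to revalidate at all:
  // the CDN and the browser can both reuse the bytes on repeat visits.
  res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  res.setHeader("Content-Type", "image/webp");
  res.end(/* image bytes would be streamed here */);
}).listen(8080);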

Another technique for improving performance is to deliver the required image information more efficiently by reducing the number of bytes that need to be downloaded while preserving the desired image quality. One method to achieve this is to use a modern image format such as WebP. Images encoded as WebP are generally 25% to 35% smaller than equivalent images encoded as PNG or JPG. Images uploaded to Wix are automatically converted to WebP before being delivered to browsers that support this format.

Very often, images need to be resized, cropped, or otherwise manipulated when displayed within a webpage. This manipulation can be performed inside the browser using CSS, but this usually means that more data needs to be downloaded than is actually used. For example, all the pixels of an image that have been cropped out aren’t actually needed but are still delivered. We also take into account viewport size, resolution, and display pixel depth to optimize the image size. For Wix sites, we perform these manipulations on the server side before the images are downloaded; this way, we can ensure that only the pixels that are actually required are transmitted over the network. On the servers, we employ AI and ML models to generate resized images at the best quality possible.
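
A sketch of the client side of such a scheme, assuming a hypothetical image endpoint that accepts width and cropping parameters via the query string:

// Request exactly the pixels needed for the slot the image will occupy.
function sizedImageUrl(src: string, cssWidth: number): string {
  const dpr = Math.min(window.devicePixelRatio || 1, 3); // cap DPR so 4x screens don't fetch enormous files
  const width = Math.ceil(cssWidth * dpr);
  return `${src}?w=${width}&fit=crop&format=auto`; // query parameters are illustrative
}

const img = document.querySelector<HTMLImageElement>("#hero")!;
img.src = sizedImageUrl(img.dataset.src!, img.clientWidth); // data-src holds the original asset URL in this sketch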

Yet another technique for reducing the amount of image data that needs to be downloaded upfront is lazy loading images. This means not loading images that are wholly outside the visible viewport until they are about to scroll into view. Deferring image downloads in this way, and even avoiding them completely (if a visitor never scrolls to that part of the page), reduces network contention for resources that are already required as soon as the page loads, such as an LCP image. Wix websites automatically utilize lazy loading for images, and for various other types of resources as well.
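
Native lazy loading makes this nearly declarative. A small sketch, assuming below-the-fold images are tagged with a (hypothetical) data attribute so that the LCP image stays eager:

document.querySelectorAll<HTMLImageElement>("img[data-below-fold]").forEach((img) => {
  img.loading = "lazy";   // the browser defers the download until the image nears the viewport
  img.decoding = "async"; // keep decoding off the critical path
});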

Looking Forward

Over the past two years, we have deployed numerous enhancements to our platform intended to improve performance. The result of all these enhancements is a dramatic increase in the percentage of Wix websites that get a good score for all three CWV compared to a year ago. But performance is a journey, not a destination, and we still have many more action items and future plans for improving websites’ speed. To that end, we are investigating new browser capabilities as well as additional changes to our own infrastructure. The performance budgets and monitoring that we have implemented act as safeguards, ensuring that these changes deliver actual benefits.

New media formats are being introduced that have the potential to reduce download sizes even more while retaining image quality. We are currently investigating AVIF, which looks to be especially promising for photographic images that can use lossy compression. In such scenarios, AVIF can provide significantly reduced download sizes even compared to WebP, while retaining image quality. AVIF also supports progressive rendering which may improve perceived performance and user experience, especially on slower connections, but currently won’t provide any benefits for CWV.

Another promising browser innovation that we are researching is the content-visibility CSS property. This property enables the browser to skip the effort of rendering an HTML element until it’s actually needed. In particular, when the content-visibility: auto setting is applied to an element that is off-screen, its descendants are not rendered. This enables the browser to skip most of the rendering work, such as styling and layout of the element’s subtree. This is especially desirable for many Wix pages, which tend to be lengthy and content-rich. In particular, Wix’s new EditorX responsive sites editor supports sophisticated grid and flexbox layouts that can be expensive for the browser to render, making it especially important to avoid unnecessary rendering operations. Unfortunately, this property is currently only supported in Chromium-based browsers. It’s also challenging to implement this functionality in such a way that no Wix website is ever adversely affected in terms of its visual appearance or behavior.
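
A sketch of how this could be applied defensively, assuming long pages tag their off-screen sections (the class name and the 600px size estimate are illustrative):

if (CSS.supports("content-visibility", "auto")) {
  document.querySelectorAll<HTMLElement>("section.below-fold").forEach((el) => {
    el.style.setProperty("content-visibility", "auto");
    // Reserve an estimated size so the scrollbar doesn't jump as sections render in.
    el.style.setProperty("contain-intrinsic-size", "auto 600px");
  });
}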

Priority Hints is an upcoming browser feature that we are also investigating, which promises to improve performance by providing greater control over when and how browsers download resources. This feature will inform browsers about which resources are more urgent and should be downloaded ahead of other resources. For example, a foreground image could be assigned a higher priority than a background image since it’s more likely to contain significant content. On the other hand, if applied incorrectly, priority hints can actually degrade download speed, and hence also CWV scores. Priority hints are currently undergoing Origin Trial in Chrome.
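
A sketch using the attribute form of the hint (the attribute name may still change while the feature is in Origin Trial; browsers that don’t support it simply ignore the attribute):

// Boost the hero image, deprioritize decorative ones; the class names are illustrative.
document.querySelector("img.hero")?.setAttribute("fetchpriority", "high");
document.querySelectorAll("img.decorative").forEach((img) => {
  img.setAttribute("fetchpriority", "low");
});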

In addition to enhancing Wix’s own infrastructure, we’re also working on providing better tooling for our users so that they can design and implement faster websites. Since Wix is highly customizable, users have the freedom and flexibility to create both fast and slow websites on our platform, depending on the decisions they make while building these sites. Our goal is to inform users about the performance of their decisions so that they can make appropriate choices. This is similar to the SEO Wiz tool that we already provide.

Summary

Implementing a performance culture at Wix enabled us to apply performance enhancements to almost every part of our technological stack — from infrastructure to software architecture and media formats. While some of these enhancements have had a greater impact than others, it’s the cumulative effect that provides the overall benefit. And these benefits aren’t just measurable at a large scale; they’re also apparent to our users, thanks to tools like WebPageTest and Google PageSpeed Insights, and to the actual feedback they receive from their own users.

The feedback we ourselves receive, from our users and the industry at large, and the tangible benefits we experience, drive us forward to continue improving our speed. The performance culture that we’ve implemented is here to stay.

Related Resources

Solving CLS Issues In A Next.js-Powered E-Commerce Website (Case Study)

Fairprice is one of the largest online grocery stores in Singapore. We are continuously looking out for areas of opportunities to improve the user’s online shopping experience. Performance is one of the core aspects to ensure our users are having a delightful user experience irrespective of their devices or network connection.

There are many key performance indicators (KPIs) that measure different points during the lifecycle of a web page (such as TTFB, domInteractive, and onload), but these metrics don’t reflect how the end-user experiences the page.

We wanted to use a few KPIs that correspond closely to the actual experience of end-users, so that if any of those KPIs perform poorly, we know it is directly impacting the end-user experience. We found user-centric performance metrics to be the perfect fit for this purpose.

There are many user-centric performance metrics to measure different points in a page’s life cycle such as FCP, LCP, FID, CLS, and so on. For this case study, we are mainly going to focus on CLS.

CLS measures the cumulative score of all unexpected layout shifts that happen between when the page starts loading and when it is unloaded.

Therefore, a low CLS value for a page ensures there are no random layout shifts causing user frustration. Barry Pollard has written an excellent in-depth article about CLS.

How We Discovered CLS Issue In Our Product Page

We use Lighthouse and WebPageTest as our synthetic testing tools to measure CLS. We also use the web-vitals library to measure CLS for real users. Apart from that, we check the Google Search Console Core Web Vitals report section to get an idea of potential CLS issues on any of our pages. While exploring the report section, we found that many URLs from the product detail page had a CLS value of more than 0.1, hinting that some major layout shift event was happening there.

Debugging CLS Issue Using Different Tools

Now that we know that there is a CLS issue on the product detail page, the next step was to identify which element was causing it. At first, we decided to run some tests using synthetic testing tools.

So we ran Lighthouse to check if it could find any element that could be triggering a major layout shift; it reported a CLS of 0.004, which is quite low.

The Lighthouse report page has a diagnostic section. That also did not show any element causing a high CLS value.

Then we ran WebPageTest and decided to check the filmstrip view:

We found this feature very helpful, since we could find out which element caused the layout to shift, and at which point in time. But when we ran the test to see if any layout shifts were highlighted, there wasn’t anything contributing to the huge CLS:

The quirk with CLS is that it records individual layout shift scores throughout the entire lifespan of the page and adds them up.

Note: How CLS is measured has been changed since June 2021.

Since Lighthouse and WebPageTest couldn’t detect any element that triggered a major layout shift, it had to be happening after the initial page load, possibly due to some user action. So we decided to use the Web Vitals Google Chrome extension, since it can record CLS on a page while the user is interacting with it. After performing different actions, we found that the layout shift score increased when the user used the image magnify feature.

I have also created a PR against the original repo so that other developers using this library can avoid this CLS issue.

The Impact Of The Change

After the code was deployed to production, the CLS issue was fixed on the product details page, and the number of pages impacted by CLS was reduced by 98%:

Since we used transform, it also helped make the image magnify feature a smoother experience for users.

Note: Paul Irish has written an excellent article on this topic.
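
As an illustration of why transform helps, here is a simplified sketch of a magnifier that zooms by scaling instead of resizing the image (a real implementation would also move the transform-origin with the pointer; the class name is hypothetical):

const img = document.querySelector<HTMLImageElement>(".product-image")!;

// transform runs on the compositor and doesn't change layout, so the
// surrounding elements never move and no layout shift is recorded.
img.addEventListener("mouseenter", () => {
  img.style.transform = "scale(2)";
});
img.addEventListener("mouseleave", () => {
  img.style.transform = "";
});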

Other Key Changes We Made For CLS

There were also some other issues, affecting many pages on our website, that contributed to CLS. Let’s go through those elements and components and see how we tried to mitigate the layout shifts arising from them.

  • Web fonts:
    We noticed that late loading of fonts causes user frustration, since the content flashes and it also causes some amount of layout shift. To minimize this, we made a few changes:

    • We self-host the fonts instead of loading them from a third-party CDN.
    • We preload the fonts.
    • We use font-display: optional.
  • Images:
    Missing height or width values on images cause the elements after the image to shift once the image loads. This ends up being a major contributor to CLS. Since we are using Next.js, we took advantage of its built-in image component, next/image. This component incorporates several image-related best practices. It is built on top of the <img> HTML tag and can help to improve LCP and CLS. I highly recommend reading this RFC to find out the key features and advantages of using it.

  • Infinite Scroll:
    On our website, product listing pages have infinite scrolling. So initially, when users scrolled to the bottom of the page, they would see the footer for a fraction of a second before the next set of data was loaded, which caused layout shifts. To solve this we took a few steps (see the sketch below):

    • We call the API to load data even before the user reaches the absolute bottom of the list.
    • We reserve enough space for the loading state and show product skeletons while the data loads. So now when the user scrolls, they don’t see the footer for a fraction of a second while the products are being loaded.

Addy Osmani has written a detailed article on this approach, which I highly recommend checking out.
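
Here is a minimal sketch of how those two steps can fit together, assuming an IntersectionObserver watching a sentinel element placed near the end of the list; the element ids and the loadNextProductPage() helper are hypothetical:

declare function loadNextProductPage(): Promise<void>; // hypothetical data-fetching helper

// The sentinel sits a few product rows above the end of the list, and the
// skeleton container already occupies the height of one page of products,
// so nothing below it (such as the footer) shifts while data loads.
const sentinel = document.querySelector("#load-more-sentinel")!;
const skeletons = document.querySelector<HTMLElement>("#product-skeletons")!;

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  skeletons.style.visibility = "visible";
  await loadNextProductPage();
  skeletons.style.visibility = "hidden";
}, { rootMargin: "600px 0px" }); // start fetching ~600px before the sentinel enters the viewport

observer.observe(sentinel);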

Key Takeaways
  • While Lighthouse and WebPageTest help to discover performance issues that happen up to page load, they can’t detect performance issues after page load.
  • The Web Vitals extension can detect CLS changes triggered by user interactions, so if a page has a high CLS value but Lighthouse or WebPageTest reports a low CLS, the Web Vitals extension can help to pinpoint the issue.
  • Google Search Console data is based on data from real users, so it can also point to potential performance issues happening at any point in a page’s life cycle. Once an issue is detected and fixed, checking the report section again can help verify the effectiveness of the fix. The changes are reflected within days in the Core Web Vitals report section.

Final Thoughts

While CLS issues are comparatively harder to debug, using a combination of different tools up to page load (Lighthouse, WebPageTest) and the Web Vitals extension (after page load) can help us pinpoint the issue. It is also a metric that is going through lots of active development to cover a wide range of scenarios, which means that how it is measured is going to change in the future. We are following https://web.dev/evolving-cls/ to know about any upcoming changes.

As for us, we are continuously working to improve the other Core Web Vitals too. Recently, we implemented responsive image preloading and started serving images in WebP format, which helped us reduce our image payload by 75%, LCP by 62%, and Speed Index by 24%. You can read more details about the optimizations for improving LCP and Speed Index, or follow our engineering blog to learn about the other exciting work we are doing.

We would like to thank Alex Castle for helping us debug the CLS issue on the product page and solve the quirks in the next/images implementation.

How We Improved Our Core Web Vitals (Case Study)

Last year, Google started emphasizing the importance of Core Web Vitals and how they reflect a person’s real experience when visiting sites around the web. Performance is a core feature of our company, Instant Domain Search—it’s in the name. Imagine our surprise when we found that our vitals scores were not great for a lot of people. Our fast computers and fiber internet masked the experience real people have on our site. It wasn’t long before a sea of red “poor” and yellow “needs improvement” notices in our Google Search Console needed our attention. Entropy had won, and we had to figure out how to clean up the jank—and make our site faster.

I founded Instant Domain Search in 2005 and kept it as a side-hustle while I worked on a Y Combinator company (Snipshot, W06), before working as a software engineer at Facebook. We’ve recently grown to a small group based mostly in Victoria, Canada and we are working through a long backlog of new features and performance improvements. Our poor web vitals scores, and the looming Google Update, brought our focus to finding and fixing these issues.

When the first version of the site was launched, I’d built it with PHP, MySQL, and XMLHttpRequest. Internet Explorer 6 was fully supported, Firefox was gaining share, and Chrome was still years from launch. Over time, we’ve evolved through a variety of static site generators, JavaScript frameworks, and server technologies. Our current front-end stack is React served with Next.js and a backend service built in Rust to answer our domain name searches. We try to follow best practice by serving as much as we can over a CDN, avoiding as many third-party scripts as possible, and using simple SVG graphics instead of bitmap PNGs. It wasn’t enough.

Next.js lets us build our pages and components in React and TypeScript. When paired with VS Code, the development experience is amazing. Next.js generally works by transforming React components into static HTML and CSS. This way, the initial content can be served from a CDN, and then Next can “hydrate” the page to make elements dynamic. Once the page is hydrated, our site turns into a single-page app where people can search for and generate domain names. We do not rely on Next.js to do much server-side work; the majority of our content is statically exported as HTML, CSS, and JavaScript to be served from a CDN.

When someone starts searching for a domain name, we replace the page content with search results. To make the searches as fast as possible, the front-end directly queries our Rust backend which is heavily optimized for domain lookups and suggestions. Many queries we can answer instantly, but for some TLDs we need to do slower DNS queries which can take a second or two to resolve. When some of these slower queries resolve, we will update the UI with whatever new information comes in. The results pages are different for everyone, and it can be hard for us to predict exactly how each person experiences the site.

The Chrome DevTools are excellent, and a good place to start when chasing performance issues. The Performance view shows exactly when HTTP requests go out, where the browser spends time evaluating JavaScript, and more:

There are three Core Web Vitals metrics that Google will use to help rank sites in their upcoming search algorithm update. Google bins experiences into “Good”, “Needs Improvement”, and “Poor” based on the LCP, FID, and CLS scores real people have on the site:

  • LCP, or Largest Contentful Paint, defines the time it takes for the largest content element to become visible.
  • FID, or First Input Delay, relates to a site’s responsiveness to interaction—the time between a tap, click, or keypress in the interface and the response from the page.
  • CLS, or Cumulative Layout Shift, tracks how elements move or shift on the page absent of actions like a keyboard or click event.

Chrome is set up to track these metrics across all logged-in Chrome users, and sends anonymous statistics summarizing a customer’s experience on a site back to Google for evaluation. These scores are accessible via the Chrome User Experience Report, and are shown when you inspect a URL with the PageSpeed Insights tool. The scores represent the 75th percentile experience for people visiting that URL over the previous 28 days. This is the number they will use to help rank sites in the update.

A 75th percentile (p75) metric strikes a reasonable balance for performance goals. Taking an average, for example, would hide a lot of bad experiences people have. The median, or 50th percentile (p50), would mean that half of the people using our product were having a worse experience. The 95th percentile (p95), on the other hand, is hard to build for as it captures too many extreme outliers on old devices with spotty connections. We feel that scoring based on the 75th percentile is a fair standard to meet.

To get our scores under control, we first turned to Lighthouse for some excellent tooling built into Chrome and hosted at web.dev/measure/, and at PageSpeed Insights. These tools helped us find some broad technical issues with our site. We saw that the way Next.js was bundling our CSS slowed our initial rendering time, which affected our FID. The first easy win came from an experimental Next.js feature, optimizeCss, which helped improve our general performance score significantly.

Lighthouse also caught a cache misconfiguration that prevented some of our static assets from being served from our CDN. We are hosted on Google Cloud Platform, and the Google Cloud CDN requires that the Cache-Control header contains “public”. Next.js does not allow you to configure all of the headers it emits, so we had to override them by placing the Next.js server behind Caddy, a lightweight HTTP proxy server implemented in Go. We also took the opportunity to make sure we were serving what we could with the relatively new stale-while-revalidate support in modern browsers, which allows the CDN to fetch content from the origin (our Next.js server) asynchronously in the background.

It’s easy—maybe too easy—to add almost anything you need to your product from npm. It doesn’t take long for bundle sizes to grow. Big bundles take longer to download on slow networks, and the 75th percentile mobile phone will spend a lot of time blocking the main UI thread while it tries to make sense of all the code it just downloaded. We liked BundlePhobia, a free tool that shows how many dependencies and bytes an npm package will add to your bundle. This led us to eliminate or replace a number of react-spring-powered animations with simpler CSS transitions:

Through the use of BundlePhobia and Lighthouse, we found that third-party error logging and analytics software contributed significantly to our bundle size and load time. We removed and replaced these tools with our own client-side logging that takes advantage of modern browser APIs like sendBeacon and ping. We send logging and analytics to our own Google BigQuery infrastructure, where we can answer the questions we care about in more detail than any of the off-the-shelf tools could provide. This also eliminates a number of third-party cookies and gives us far more control over how and when we send logging data from clients.

Our CLS score still had the most room for improvement. The way Google calculates CLS is complicated—you’re given a maximum “session window” with a 1-second gap, capped at 5 seconds from the initial page load, or from a keyboard or click interaction, to finish moving things around the site. This penalizes many types of overlays and popups that appear just after you land on a site, for instance ads that shift content around, or upsells that might appear when you start scrolling past ads to reach content. If you’re interested in reading more deeply into this topic, this article provides an excellent explanation of how the CLS score is calculated and the reasoning behind it.

We are fundamentally opposed to this kind of digital clutter, so we were surprised to see how much room for improvement Google insisted we make. Chrome has a built-in Web Vitals overlay that you can access by using the Command Menu to “Show Core Web Vitals overlay”. To see exactly which elements Chrome considers in its CLS calculation, we found the Chrome Web Vitals extension’s “Console Logging” option in settings more helpful. Once enabled, this plugin shows your LCP, FID, and CLS scores for the current page, and from the console, you can see exactly which elements on the page are connected to these scores.

Of the three metrics, CLS is the only one that accumulates as you interact with a page. The Web Vitals extension has a logging option that will show exactly which elements cause CLS while you are interacting with a product. Watch how the CLS metric adds up when we scroll on Smashing Magazine’s home page:

The best way to track progress from one deploy to the next is to measure page experiences the same way Google does. If you have Google Analytics set up, an easy way to do this is to install Google’s web-vitals module and hook it up to Google Analytics. This provides a rough measure of your progress and makes it visible in a Google Analytics dashboard.
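
For reference, the web-vitals-to-Analytics hookup can be as small as this sketch (gtag.js flavor; the CLS value is scaled because Analytics event values must be integers):

import { onCLS, onFID, onLCP } from "web-vitals";

declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet on the page

function sendToGoogleAnalytics({ name, delta, id }: { name: string; delta: number; id: string }) {
  gtag("event", name, {
    value: Math.round(name === "CLS" ? delta * 1000 : delta),
    metric_id: id,         // groups the deltas reported from one page load
    non_interaction: true, // don't let these events affect bounce rate
  });
}

onCLS(sendToGoogleAnalytics);
onFID(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);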

This is where we hit a wall. We could see our CLS score, and while we’d improved it significantly, we still had work to do. Our CLS score was roughly 0.23 and we needed to get this below 0.1—and preferably down to 0. At this point, though, we couldn’t find something that told us exactly which components on which pages were still affecting the score. We could see that Chrome exposed a lot of detail in their Core Web Vitals tools, but that the logging aggregators threw away the most important part: exactly which page element caused the problem.

To capture all of the detail we need, we built a serverless function to capture web vitals data from browsers. Since we don’t need to run real-time queries on the data, we stream it into Google BigQuery’s streaming API for storage. This architecture means we can inexpensively capture about as many data points as we can generate.

After learning some lessons while working with Web Vitals and BigQuery, we decided to bundle up this functionality and release these tools as open-source at vitals.dev.

Using Instant Vitals is a quick way to get started tracking your Web Vitals scores in BigQuery. Here’s an example of a BigQuery table schema that we create:

Integrating with Instant Vitals is easy. You can get started by integrating with the client library to send data to your backend or serverless function:

import { init } from "@instantdomain/vitals-client";

init({ endpoint: "/api/web-vitals" });

Then, on your server, you can integrate with the server library to complete the circuit:

import fs from "fs";

import { init, streamVitals } from "@instantdomain/vitals-server";

// Google libraries require service key as path to file
const GOOGLE_SERVICE_KEY = process.env.GOOGLE_SERVICE_KEY;
process.env.GOOGLE_APPLICATION_CREDENTIALS = "/tmp/goog_creds";
fs.writeFileSync(
  process.env.GOOGLE_APPLICATION_CREDENTIALS,
  GOOGLE_SERVICE_KEY
);

const DATASET_ID = "web_vitals";
init({ datasetId: DATASET_ID }).then().catch(console.error);

// Request handler
export default async (req, res) => {
  const body = JSON.parse(req.body);
  await streamVitals(body, body.name);
  res.status(200).end();
};

Simply call streamVitals with the body of the request and the name of the metric to send the metric to BigQuery. The library will handle creating the dataset and tables for you.

After collecting a day’s worth of data, we ran a query like this one:

SELECT
  `<project_name>.web_vitals.CLS`.Value,
  Node
FROM
  `<project_name>.web_vitals.CLS`
JOIN
  UNNEST(Entries) AS Entry
JOIN
  UNNEST(Entry.Sources)
WHERE
  Node != ""
ORDER BY
  value
LIMIT
  10

This query produces results like this:

Value Node
4.6045324800736724E-4 /html/body/div[1]/main/div/div/div[2]/div/div/blockquote
7.183070668914928E-4 /html/body/div[1]/header/div/div/header/div
0.031002668277977697 /html/body/div[1]/footer
0.035830703317463526 /html/body/div[1]/main/div/div/div[2]
0.035830703317463526 /html/body/div[1]/footer
0.035830703317463526 /html/body/div[1]/main/div/div/div[2]
0.035830703317463526 /html/body/div[1]/main/div/div/div[2]
0.035830703317463526 /html/body/div[1]/footer
0.035830703317463526 /html/body/div[1]/footer
0.03988482067913317 /html/body/div[1]/footer

This shows us which elements on which pages have the most impact on CLS. It created a punch list for our team to investigate and fix. On Instant Domain Search, it turns out that slow or bad mobile connections will take more than 500ms to load some of our search results. One of the worst contributors to CLS for these users was actually our footer.

The layout shift score is calculated as a function of the size of the element that moves, and how far it moves. In our search results view, if a device took more than a certain amount of time to receive and render search results, the results view would collapse to zero height, bringing the footer into view. When the results came in, they would push the footer back to the bottom of the page. A big DOM element moving this far added a lot to our CLS score. To work through this properly, we need to restructure the way the search results are collected and rendered. We decided to just remove the footer in the search results view as a quick hack that would stop it from bouncing around on slow connections.
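
For intuition, here is a worked example of that formula: the score of a single shift is the impact fraction multiplied by the distance fraction. If an element covering 50% of the viewport moves down by 25% of the viewport height, the impact fraction (the union of the areas it occupied before and after the shift) is 0.75 and the distance fraction is 0.25, so that one shift scores 0.75 × 0.25 ≈ 0.19, already well past the 0.1 threshold for a “good” CLS.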

We now review this report regularly to track how we are improving — and use it to fight declining results as we move forward. We have witnessed the value of extra attention to newly launched features and products on our site and have operationalized consistent checks to be sure core vitals are acting in favor of our ranking. We hope that by sharing Instant Vitals we can help other developers tackle their Core Web Vitals scores too.

Google provides excellent performance tools built into Chrome, and we used them to find and fix a number of performance issues. We learned that the field data provided by Google offered a good summary of our p75 progress, but did not have actionable detail. We needed to find out exactly which DOM elements were causing layout shifts and input delays. Once we started collecting our own field data—with XPath queries—we were able to identify specific opportunities to improve everyone’s experience on our site. With some effort, we brought our real-world Core Web Vitals field scores down into an acceptable range in preparation for June’s Page Experience Update. We’re happy to see these numbers go down and to the right!

Speed Report replaced by Core Web Vitals

As of this morning, the Speed Report (Experimental) in Google Search Console has been replaced by a more robust Core Web Vitals report.

It still is broken up into a Mobile and Desktop version. However, instead of just monitoring FCP (First Contentful Paint) and FID (First Input Delay), it now seems to monitor CLS and LCP ... or at least those are the issues it's flagging for my site.

Issue type is the status of the various measures:

LCP (largest contentful paint): How long it takes the page to render the largest visible element
FID (first input delay): How long it takes the page to start responding to user actions
CLS (cumulative layout shift): How much the page UI shifts during page loading.

Speed issues are assigned separately for desktop and mobile users.

As you may recall, the experimental Speed Report was disallowing any revalidation because of an alert message saying that it would be changing soon. Well, it looks like that change is here!

Is anyone else seeing this yet?

I want to know the errors in my code so that I can compile and run it.

#include<iostream>
#include<limits>   // for numeric_limits, used below to clear the input buffer
#include<conio.h>  // non-standard, Windows-only header that provides getch()
using namespace std; // this line was commented out, so every unqualified cout/cin failed to compile
int main()
{
char name[30], pizza1[]="Chicken Fajita" ,pizza2[]="Chicken Bar BQ" ,pizza3[]="Peri Peri" ,pizza4[]="Creamy Max", roll1[]="Chicken Chatni Roll", roll2[]="Chicken Mayo Roll", roll3[]="Veg Roll With Fries",bur1[]="Zinger Burger",bur2[]="Chicken Burger",bur3[]="Beef Burger";
char sand1[]="Club Sandwich", sand2[]="Chicken Crispy Sandwich", sand3[]="Extreme Veg Sandwich";
char bir1[]="Chicken Biryani", bir2[]="Prawn Biryani", bir3[]="Beef Biryani", gotostart;
int choice=0,pchoice,pchoice1, quantity; // note: 'choice' is later reused to hold the bill total
beginning:
system("CLS");
cout<<"\t\t\t----------Carl's Jr. Fast Food-----------\n\n";
cout<<"Please Enter Your Name: ";
cin.getline(name, 30); // use the full buffer size of name[30]
cout<<"Hello "<<name<<"\n\nWhat would you like to order?\n\n";

cout<<"\t\t\t\t--------Menu--------\n\n";

cout<<"1) Pizzas\n";
cout<<"2) Burgers\n";
cout<<"3) Sandwich\n";
cout<<"4) Rolls\n";
cout<<"5) Biryani\n\n";
cout<<"\nPlease Enter your Choice: ";
cin>>choice;

if(choice==1)
{
cout<<"\n1) "<<pizza1<<"\n";
cout<<"2) "<<pizza2<<"\n";
cout<<"3) "<<pizza3<<"\n";
cout<<"4) "<<pizza4<<"\n";
cout<<"\nPlease Enter which Flavour would you like to have?:";
cin>>pchoice;
if(pchoice>=1 && pchoice<=4) // there are only 4 pizza flavours, not 5
{
cout<<"\n1) Small Rs.250\n"<<"2) Regular Rs.500\n"<<"3) Large Rs.900\n";
cout<<"\nChoose Size Please:";
cin>>pchoice1;
if(pchoice1>=1 && pchoice1<=3)
{ // these braces were missing, so the quantity prompt ran even when the size was invalid
cout<<"\nPlease Enter Quantity: ";
cin>>quantity;
switch(pchoice1) // compute the bill from the chosen size
{
case 1: choice = 250*quantity;
break;

case 2: choice = 500*quantity;
break;

case 3: choice = 900*quantity;
break;

     }
system("CLS");
switch (pchoice) // bug fix: print the chosen flavour (pchoice), not the size (pchoice1)
{
case 1:
cout<<"\t\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<pizza1;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\n\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 2:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<"  "<<pizza2;
cout<<"\nYour Total Bill is "<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 3:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<pizza3;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 4:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<pizza4;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;

}
cout<<"Would you like to order anything else? Y / N:";
cin>>gotostart;
if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n'); // discard the leftover newline so getline works after the jump
 goto beginning;
}

} // closes if(pchoice1>=1 && pchoice1<=3)
}

}

else if(choice==2)
{
cout<<"\n1 "<<bur1<<" Rs.180"<<"\n";
cout<<"2 "<<bur2<<" Rs.150"<<"\n";
cout<<"3 "<<bur3<<" Rs.160"<<"\n";
//cout<<"4 "<<pizza4<<"\n";
cout<<"\nPlease Enter which Burger you would like to have?: ";
cin>>pchoice1;
if(pchoice1>=1 && pchoice1<=3)
{
cout<<"\nPlease Enter Quantity: ";
cin>>quantity;
switch(pchoice1)
{
case 1: choice = 180*quantity;
break;

case 2: choice = 150*quantity;
break;

case 3: choice = 160*quantity;
break;

}
system("CLS");
switch (pchoice1)
{
case 1:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<bur1;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food \n";
break;
case 2:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<"  "<<bur2;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Pizza\n";
break;
case 3:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<bur3;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;

}
cout<<"\nWould you like to order anything else? Y / N:";
cin>>gotostart;
if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n'); // clear the buffer before looping back
 goto beginning;
}

 }
}
else if(choice==3)
{
cout<<"\n1  "<<sand1<<" Rs.240"<<"\n";
cout<<"2  "<<sand2<<" Rs.160"<<"\n";
cout<<"3  "<<sand3<<" Rs.100"<<"\n";
//cout<<"4 "<<pizza4<<"\n";
cout<<"\nPlease Enter which Sandwich you would like to have?:";
cin>>pchoice1;
if(pchoice1>=1 && pchoice1<=3)
{
cout<<"\nPlease Enter Quantity: ";
cin>>quantity;
switch(pchoice1)
{
case 1: choice = 240*quantity;
break;

case 2: choice = 160*quantity;
break;

case 3: choice = 100*quantity;
break;

}
system("CLS");
switch (pchoice1)
{
case 1:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<sand1;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 2:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<"  "<<sand2;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 3:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<sand2;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;

}
cout<<"Would you like to order anything else? Y / N:";
cin>>gotostart;
if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n'); // clear the buffer before looping back
 goto beginning;
}
}
}

else if(choice==4)
{
cout<<"\n1 "<<roll1<<" Rs.150"<<"\n";
cout<<"2 "<<roll2<<" Rs.100"<<"\n";
cout<<"3 "<<roll3<<" Rs.120"<<"\n";
//cout<<"4 "<<pizza4<<"\n";
cout<<"\nPlease Enter which you would like to have?: ";
cin>>pchoice1;
if(pchoice1>=1 && pchoice1<=3)
{
cout<<"\nHow Much Rolls Do you want: ";
cin>>quantity;
switch(pchoice1)
{
case 1: choice = 150*quantity;
break;

case 2: choice = 100*quantity;
break;

case 3: choice = 120*quantity;
break;

}
system("CLS");
switch (pchoice1)
{
case 1:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<roll1;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 2:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<"  "<<roll2;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 3:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<roll3;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;

}
cout<<"Would you like to order anything else? Y / N:"; // this prompt was missing in the rolls branch
cin>>gotostart;
if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n');
 goto beginning;
}
 }
}
else if(choice==5)
{
cout<<"\n1 "<<bir1<<" Rs.160"<<"\n";
cout<<"2 "<<bir2<<" Rs.220"<<"\n";
cout<<"3 "<<bir3<<" Rs.140"<<"\n";
//cout<<"4 "<<pizza4<<"\n";
cout<<"\nPlease Enter which Biryani you would like to have?:";
cin>>pchoice1;
if(pchoice1>=1 && pchoice1<=3)
{
cout<<"\nPlease Enter Quantity: ";
cin>>quantity;
switch(pchoice1)
{
case 1: choice = 160*quantity;
break;

case 2: choice = 220*quantity;
break;

case 3: choice = 140*quantity;
break;

}
system("CLS");
switch (pchoice1)
{
case 1:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<bir1;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food \n";
break;
case 2:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<"  "<<bir2;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;
case 3:
cout<<"\t\t--------Your Order---------\n";
cout<<""<<quantity<<" "<<bir3;
cout<<"\nYour Total Bill is"<<choice<<"\nYour Order Will be delivered in 40 Minutes";
cout<<"\nThank you For Ordering From Carl's Jr. Fast Food\n";
break;

}
cout<<"Would you like to order anything else? Y / N:";
cin>>gotostart;
if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n'); // clear the buffer before looping back
 goto beginning;
}
}
}

else
{
system("CLS");
cout<<"Please Select Right Option: \n";
cout<<"Would You like to Start the program again? Y / N: " ;
cin>>gotostart;

if(gotostart=='Y' || gotostart=='y')
{
 cin.ignore(numeric_limits<streamsize>::max(), '\n'); // clear the buffer before looping back
 goto beginning;
}
}

     getch(); // wait for a keypress before the console window closes (conio.h, Windows-only)
}