Scaling Success: Key Insights And Practical Takeaways

Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In Success at Scale, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.

Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build a roadmap for teams striving to achieve scalable success in the ever-evolving digital landscape.

Cultivating A Mindset For Scaling Success

The foundation of scaling success lies in fostering the right mindset within your team. The case studies in Success at Scale highlight several critical mindsets that permeate the culture of successful organizations.

User-Centricity

Successful teams prioritize the user experience above all else.

They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.

By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.

Data-Driven Decision Making

Scaling success relies on data, not assumptions.

Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. Shopify’s UI performance improvements showcase the power of data-driven optimization, using detailed profiling and user data to prioritize efforts and drive meaningful results.

By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.

Continuous Improvement

Scaling is an ongoing process, not a one-time achievement.

Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. Smashing Magazine’s case study on enhancing Core Web Vitals demonstrates the impact of iterative enhancements, leading to significant performance gains and improved user satisfaction.

By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.

Collaboration And Inclusivity

Silos hinder scalability.

High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. The Understood’s accessibility journey highlights the power of cross-functional collaboration, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.

By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed inclusive design practices throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.

Making Strategic Decisions for Scalability

Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.

Technology Choices

Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies the importance of choosing a technology stack that aligns with long-term scalability goals.

By adopting Next.js, Notion was able to leverage its performance optimizations, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.

Ship Only The Code A User Needs, When They Need It

This best practice is essential when we want pages to load fast without over-eagerly delivering JavaScript a user may not need yet. For example, Instagram made a concerted effort to improve the web performance of instagram.com, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus was shipping less JavaScript code to users, particularly on the critical rendering path.

The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.
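To make the idea concrete, here is a rough sketch of what lazy, on-first-use evaluation looks like in application code. This is not Instagram’s implementation (they enabled the behavior at the bundler level), and the module name and button selector below are made up for illustration:

```js
// A lazy, on-first-use require: the heavy module is only evaluated when the
// user actually triggers the feature, keeping its cost off the initial load.
// './heavy-insights' and '#show-insights' are placeholder names.
let insightsModule = null;

function getInsights() {
  if (!insightsModule) {
    insightsModule = require('./heavy-insights'); // executes on first use only
  }
  return insightsModule;
}

document.querySelector('#show-insights')?.addEventListener('click', () => {
  getInsights().render();
});
```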

While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore could include the following:

  • Improved code-splitting, moving more logic off the critical path,
  • Optimizing scrolling performance,
  • Adapting to varying network conditions,
  • Modularizing their Redux state management.

Continued efforts will be needed to keep instagram.com performing well as new features are added and the product grows in complexity.

Accessibility Integration

Accessibility should be an integral part of the product development process, not an afterthought.

Wix’s comprehensive approach to accessibility, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.

By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, Wix was able to create a platform that empowered its users to build accessible websites. This holistic approach to accessibility not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.

Developer Experience Investment

Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.

Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.

By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck’s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.

Leveraging Performance Optimization Techniques

Achieving optimal performance is a critical aspect of scaling success. The case studies in Success at Scale showcase various performance optimization techniques that have proven effective.

Progressive Enhancement and Graceful Degradation

Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in Success at Scale highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.

By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.
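As a rough illustration of the pattern (not Pinafore’s actual code), the script below assumes a standard server-handled form that works with no JavaScript at all and only takes over when JavaScript is available; the #compose-form selector is a placeholder:

```js
// Progressive enhancement sketch: the form posts to the server and works with
// no JavaScript. When JS is available, we upgrade it to an in-page submission.
const form = document.querySelector('#compose-form');

if (form && 'fetch' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();  // only intercept when we can genuinely do better
    await fetch(form.action, { method: 'POST', body: new FormData(form) });
    form.reset();            // no full page reload needed
  });
}
// With JavaScript disabled or failed, the browser still submits the form normally.
```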

Adaptive Loading Strategies

The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.

By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.
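The broad strokes of adaptive loading can be sketched with standard browser APIs. This is not Tinder’s code, and the module names are invented, but it shows the shape of the technique: check the network and device signals that are available, then decide how much to ship:

```js
// Adaptive loading sketch: pick a lighter or richer experience based on
// network quality and device memory. './feed-lite.js' and './feed-full.js'
// are placeholder modules.
const connection = navigator.connection || {};
const saveData = connection.saveData === true;
const slowNetwork = ['slow-2g', '2g'].includes(connection.effectiveType);
const lowMemory = (navigator.deviceMemory || 8) <= 2;

if (saveData || slowNetwork || lowMemory) {
  import('./feed-lite.js').then((mod) => mod.render());   // constrained context
} else {
  import('./feed-full.js').then((mod) => mod.render());   // capable device, decent connection
}
```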

Efficient Resource Management

Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like lazy loading and responsive images to reduce page weight and improve load times.

By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.
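While eBay’s exact setup isn’t public in this level of detail, the core lazy-loading pattern is simple enough to sketch. Modern browsers can handle much of this natively with the loading="lazy" attribute; the IntersectionObserver version below is the common hand-rolled approach and gives more control over when loading starts:

```js
// Lazy-loading sketch: keep the real URL in a data attribute and only set
// `src` once the image approaches the viewport.
const lazyImages = document.querySelectorAll('img[data-src]');

const imageObserver = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // kicks off the actual download
      observer.unobserve(img);     // each image only needs to be handled once
    }
  }
}, { rootMargin: '200px' });       // start a little before the image is visible

lazyImages.forEach((img) => imageObserver.observe(img));
```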

Continuous Performance Monitoring

Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.

By establishing a performance monitoring infrastructure, Yahoo! Japan News gains visibility into the real-world performance experienced by their users. This data-driven approach allows them to identify performance regressions, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.

Embracing Accessibility and Inclusive Design

Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in Success at Scale emphasize the importance of accessibility and inclusive design.

Comprehensive Accessibility Testing

Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.

By leveraging tools like Deque’s axe and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues before they reach production. This proactive approach to accessibility testing not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes the importance of manual testing and user feedback in uncovering complex accessibility issues that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.
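LinkedIn’s internal tooling isn’t something I can reproduce here, but a minimal sketch of the same idea (run axe against a page and fail loudly when violations are found) looks something like this, assuming the axe-core script is already loaded on the page under test:

```js
// A generic automated accessibility check with axe-core. In a CI pipeline,
// a non-empty violations list would fail the build.
axe.run(document).then((results) => {
  if (results.violations.length > 0) {
    results.violations.forEach((violation) => {
      console.error(`${violation.id}: ${violation.help} (${violation.nodes.length} nodes)`);
    });
    throw new Error(`${results.violations.length} accessibility violations found`);
  }
});
```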

Inclusive Design Practices

Designing with accessibility in mind from the outset leads to more inclusive and usable products. Success at Scale’s case study on Intercom about creating an accessible messenger highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.

By embracing inclusive design principles, Intercom ensures that their messenger is usable by a wide range of users, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By designing with empathy and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.

User Research And Feedback

Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts to identify and address accessibility barriers effectively.

By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.

By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.

Accessibility As A Shared Responsibility

Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of educating and empowering teams to prioritize accessibility, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.

By providing accessibility training, guidelines, and resources to designers, developers, and content creators, Shopify ensures that accessibility is considered at every stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters a culture of inclusivity and empathy. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also sets an example for the wider industry on the importance of inclusive design.

Fostering A Culture of Collaboration And Knowledge Sharing

Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in Success at Scale highlight the impact of effective collaboration and knowledge management practices.

Cross-Functional Collaboration

Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.

By establishing a shared language and a set of reusable components, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also accelerates the development process by reducing duplication of effort and promoting code reuse.

Knowledge Sharing And Documentation

Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. Stripe’s investment in internal frameworks and documentation exemplifies the value of creating a shared understanding and facilitating knowledge transfer.

By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only reduces the learning curve for new hires but also promotes consistency and adherence to established patterns and practices. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.

Communities Of Practice

Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward.

By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters a sense of community and collective ownership. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.

Leveraging Open Source And External Expertise

Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. Pinafore’s journey highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.

By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.

The Path To Sustainable Success

Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The Success at Scale book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.

By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.

Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.

Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.

Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.

The case studies featured in Success at Scale serve as powerful examples of how these principles and strategies can be applied in real-world contexts. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.

As you embark on your path to scaling success, remember that it is an ongoing process of iteration, learning, and adaptation. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the Success at Scale book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.

Conclusion

Scaling successful web products requires a holistic approach that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the Success at Scale book, teams can gain valuable insights and practical guidance on their journey towards sustainable success.

Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability. By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.

Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.

Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Get Print + eBook

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Get Print + eBook

Interface Design Checklists

100 practical cards for common interface design challenges.

Get Print + eBook

How To Monitor And Optimize Google Core Web Vitals

This article is sponsored by DebugBear

Google’s Core Web Vitals initiative has increased the attention website owners need to pay to user experience. You can now more easily see when users have poor experiences on your website, and poor UX also has a bigger impact on SEO.

That means you need to test your website to identify optimizations. Beyond that, monitoring ensures that you can stay ahead of your Core Web Vitals scores for the long term.

Let’s find out how to work with different types of Core Web Vitals data and how monitoring can help you gain a deeper insight into user experiences and help you optimize them.

What Are Core Web Vitals?

There are three web vitals metrics Google uses to measure different aspects of website performance:

  • Largest Contentful Paint (LCP),
  • Cumulative Layout Shift (CLS),
  • Interaction to Next Paint (INP).

Largest Contentful Paint (LCP)

The Largest Contentful Paint metric is the closest thing to a traditional load time measurement. However, LCP doesn’t track a purely technical page load milestone like the JavaScript Load Event. Instead, it focuses on what the user can see by measuring how soon after opening a page the largest content element on the page appears.

The faster the LCP happens, the better, and Google considers an LCP below 2.5 seconds a passing score.

Cumulative Layout Shift (CLS)

Cumulative Layout Shift is a bit of an odd metric, as it doesn’t measure how fast something happens. Instead, it looks at how stable the page layout is once the page starts loading. Layout shifts mean that content moves around, disorienting the user and potentially causing accidental clicks on the wrong UI element.

The CLS score is calculated by looking at how far an element moved and how big the element is. Aim for a score below 0.1 to get a good rating from Google.
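A quick worked example helps: each layout shift is scored as the impact fraction (how much of the viewport the unstable content affects) multiplied by the distance fraction (how far it moved), and those per-shift scores add up to the page’s CLS. The numbers below are made up:

```js
// Worked example of a single layout shift score.
const impactFraction = 0.5;    // shifting content affected 50% of the viewport
const distanceFraction = 0.1;  // it moved by 10% of the viewport height

const shiftScore = impactFraction * distanceFraction;
console.log(shiftScore);       // 0.05 -- shifts like this are summed into the CLS
```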

Interaction to Next Paint (INP)

Even websites that load quickly often frustrate users when interactions with the page feel sluggish. That’s why Interaction to Next Paint measures how long the page remains frozen after a user interaction before the next visual update is painted.

Page interactions should feel practically instant, so Google recommends an INP score below 200 milliseconds.
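If you want to see these three values for your own visitors, Google’s open-source web-vitals library is a common starting point; RUM products such as DebugBear ship their own collection snippet, so treat this as an illustrative sketch (the /analytics endpoint is a placeholder):

```js
// Collect LCP, CLS, and INP from real users and beacon them to an endpoint.
import { onLCP, onCLS, onINP } from 'web-vitals';

function sendToAnalytics(metric) {
  // metric.name is 'LCP', 'CLS', or 'INP'; metric.value is the measured score
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  navigator.sendBeacon('/analytics', body);  // '/analytics' is a placeholder endpoint
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
```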

What Are The Different Types Of Core Web Vitals Data?

You’ll often see different page speed metrics reported by different tools and data sources, so it’s important to understand the differences. We’ve published a whole article just about that, but here’s the high-level breakdown along with the pros and cons of each one:

  • Synthetic Tests
    These tests are run on-demand in a controlled lab environment in a fixed location with a fixed network and device speed. They can produce very detailed reports and recommendations.
  • Real User Monitoring (RUM)
    This data tells you how fast your website is for your actual visitors. That means you need to install an analytics script to collect it, and the reporting that’s available is less detailed than for lab tests.
  • CrUX Data
    This is data that Google collects from Chrome users as part of the Chrome User Experience Report (CrUX) and uses as a ranking signal. It’s available for every website with enough traffic, but since it covers a 28-day rolling window, it takes a while for changes on your website to be reflected here. It also doesn’t include any debug data to help you optimize your metrics.

Start By Running A One-Off Page Speed Test

Before signing up for a monitoring service, it’s best to run a one-off lab test with a free tool like Google’s PageSpeed Insights or the DebugBear Website Speed Test. Both of these tools report Google CrUX data that reflects whether real users are facing issues on your website.

Note: The lab data you get from some Lighthouse-based tools — like PageSpeed Insights — can be unreliable.

INP is best measured for real users, where you can see the elements that users interact with most often and where the problems lie. But a free tool like the INP Debugger can be a good starting point if you don’t have RUM set up yet.

How To Monitor Core Web Vitals Continuously With Scheduled Lab-Based Testing

Running tests continuously has a few advantages over ad-hoc tests. Most importantly, continuous testing triggers alerts whenever a new issue appears on your website, allowing you to start fixing it right away. You’ll also have access to historical data, allowing you to see exactly when a regression occurred and letting you compare test results before and after to see what changed.

Scheduled lab tests are easy to set up using a website monitoring tool like DebugBear. Enter a list of website URLs and pick a device type, test location, and test frequency to get things running.

As this process runs, it feeds data into the detailed dashboard with historical Core Web Vitals data. You can monitor a number of pages on your website or track the speed of your competition to make sure you stay ahead.

When a regression occurs, you can dive deep into the results using DebugBear’s Compare mode. This mode lets you see before-and-after test results side-by-side, giving you context for identifying causes. You see exactly what changed. For example, in the following case, we can see that HTTP compression stopped working for a file, leading to an increase in page weight and longer download times.

How To Monitor Real User Core Web Vitals

Synthetic tests are great for super-detailed reporting of your page load time. However, other aspects of user experience, like layout shifts and slow interactions, heavily depend on how real users use your website. So, it’s worth setting up real user monitoring with a tool like DebugBear.

To monitor real user web vitals, you’ll need to install an analytics snippet that collects this data on your website. Once that’s done, you’ll be able to see data for all three Core Web Vitals metrics across your entire website.

To optimize your scores, you can go into the dashboard for each individual metric, select a specific page you’re interested in, and then dive deeper into the data.

For example, you can see whether a slow LCP score is caused by a slow server response, render blocking resources, or by the LCP content element itself.

You’ll also find that the LCP element varies between visitors. Lab test results are always the same, as they rely on a single fixed screen size. However, in the real world, visitors use a wide range of devices and will see different content when they open your website.

INP is tricky to debug without real user data. Yet an analytics tool like DebugBear can tell you exactly what page elements users are interacting with most often and which of these interactions are slow to respond.

Thanks to the new Long Animation Frames API, we can also see specific scripts that contribute to slow interactions. We can then decide to optimize these scripts, remove them from the page, or run them in a way that does not block interactions for as long.
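Under the hood, this kind of attribution comes from a PerformanceObserver watching long-animation-frame entries. The API is still relatively new and Chromium-only at the time of writing, but a bare-bones sketch of reading it yourself looks like this:

```js
// Log the scripts that contributed to slow animation frames (where supported).
const loafObserver = new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    if (frame.duration > 100) {                       // focus on clearly slow frames
      for (const script of frame.scripts) {
        console.log(
          `${Math.round(script.duration)}ms of script work`,
          `from ${script.sourceURL || 'inline script'}, invoked via ${script.invoker}`
        );
      }
    }
  }
});

loafObserver.observe({ type: 'long-animation-frame', buffered: true });
```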

Conclusion

Continuously monitoring Core Web Vitals lets you see how website changes impact user experience and ensures you get alerted when something goes wrong. While it’s possible to measure Core Web Vitals using a wide range of tools, those tools are limited by the type of data they use to evaluate performance, not to mention they only provide a single snapshot of performance at a specific point in time.

A tool like DebugBear gives you access to several different types of data that you can use to troubleshoot performance and optimize your website, complete with RUM capabilities that offer a historical record of performance for identifying issues where and when they occur. Sign up for a free DebugBear trial here.

The Fight For The Main Thread

This article is sponsored by SpeedCurve

Performance work is one of those things, as they say, that ought to happen in development. You know, have a plan for it and write code that’s mindful about adding extra weight to the page.

But not everything about performance happens directly at the code level, right? I’d say many — if not most — sites and apps rely on some number of third-party scripts where we might not have any influence over the code. Analytics is a good example. Writing a hand-spun analytics tracking dashboard isn’t what my clients really want to pay me for, so I’ll drop in the ol’ Google Analytics script and maybe never think of it again.

That’s one example and a common one at that. But what’s also common is managing multiple third-party scripts on a single page. One of my clients is big into user tracking, so in addition to a script for analytics, they’re also running third-party scripts for heatmaps, cart abandonments, and personalized recommendations — typical e-commerce stuff. All of that is dumped on any given page in one fell swoop courtesy of Google Tag Manager (GTM), which allows us to deploy and run scripts without having to go through the pain of re-deploying the entire site.

As a result, adding and executing scripts is a fairly trivial task. It is so effortless, in fact, that even non-developers on the team have contributed their own fair share of scripts, many of which I have no clue what they do. The boss wants something, and it’s going to happen one way or another, and GTM facilitates that work without friction between teams.

All of this adds up to what I often hear described as a “fight for the main thread.” That’s when I started hearing more performance-related jargon, like web workers, Core Web Vitals, deferring scripts, and using pre-connect, among others. But what I’ve started learning is that these technical terms for performance make up an arsenal of tools to combat performance bottlenecks.

The real fight, it seems, is evaluating our needs as developers and stakeholders against a user’s needs, namely, the need for a fast and frictionless page load.

Fighting For The Main Thread

We’re talking about performance in the context of JavaScript, but there are lots of things that happen during a page load. The HTML is parsed. Same deal with CSS. Elements are rendered. JavaScript is loaded, and scripts are executed.

All of this happens on the main thread. I’ve heard the main thread described as a highway that gets cars from Point A to Point B; the more cars that are added to the road, the more crowded it gets and the more time it takes for cars to complete their trip. That’s accurate, I think, but we can take it a little further because this particular highway has just one lane, and it only goes in one direction. My mind thinks of San Francisco’s Lombard Street, a twisty one-way path of a tourist trap on a steep decline.

The main thread may not be that curvy, but you get the point: there’s only one way to go, and everything that enters it must go through it.

JavaScript operates in much the same way. It’s “single-threaded,” which is how we get the one-way street comparison. I like how Brian Barbour explains it:

“This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It's synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meantime.”

— Brian Barbour

So, there we have it: a fight for the main thread. Each resource on a page is a contender vying for a spot on the thread and wants to run first. If one contender takes its sweet time doing its job, then the contenders behind it in line just have to wait.

Monitoring The Main Thread

If you’re like me, I immediately reach for DevTools and open the Lighthouse tab when I need to look into a site’s performance. It covers a lot of ground, like reporting stats about a page’s load time that include Time to First Byte (TTFB), First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and so on.

I love this stuff! But I also am scared to death of it. I mean, this is stuff for back-end engineers, right? A measly front-end designer like me can be blissfully ignorant of all this mumbo-jumbo.

Meh, untrue. Like accessibility, performance is everyone’s job because everyone’s work contributes to it. Even the choice to use a particular CSS framework influences performance.

Total Blocking Time

One thing I know would be more helpful than a set of Core Web Vitals scores from Lighthouse is knowing how much of the time between the First Contentful Paint (FCP) and the Time to Interactive (TTI) the main thread spends blocked by long tasks, a metric known as the Total Blocking Time (TBT). You can see that Lighthouse does indeed provide that metric. Let’s look at it for a site that’s much “heavier” than Smashing Magazine.

There we go. The problem with the Lighthouse report, though, is that I have no idea what is causing that TBT. We can get a better view if we run the same test in another service, like SpeedCurve, which digs deeper into the metric. We can expand the metric to glean insights into what exactly is causing traffic on the main thread.

That’s a nice big view and is a good illustration of TBT’s impact on page speed. The user is forced to wait a whopping 4.1 seconds between the time the first significant piece of content loads and the time the page becomes interactive. That’s a lifetime in web seconds, particularly considering that this test is based on a desktop experience on a high-speed connection.

One of my favorite charts in SpeedCurve is this one showing the distribution of Core Web Vitals metrics during render. You can see the delta between contentful paints and interaction!

Spotting Long Tasks

What I really want to see is JavaScript that takes more than 50ms to run. These are called long tasks, and they put the most strain on the main thread. If I scroll down further into the report, all of the long tasks are highlighted in red.
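The 50ms threshold comes straight from the browser’s Long Tasks API, so you can also watch for long tasks in your own console while a RUM tool like SpeedCurve collects the same data from real users. A quick sketch:

```js
// Log every main-thread task longer than 50ms as the page runs.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.log(
      `Long task: ${Math.round(task.duration)}ms`,
      `starting at ${Math.round(task.startTime)}ms`
    );
  }
});

longTaskObserver.observe({ entryTypes: ['longtask'] });
```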

Another way I can evaluate scripts is by opening up the Waterfall View. The default view is helpful to see where a particular event happens in the timeline.

But wait! This report can be expanded to see not only what is loaded at the various points in time but whether they are blocking the thread and by how much. Most important are the assets that come before the FCP.

First & Third Party Scripts

I can see right off the bat that Optimizely is serving a render-blocking script. SpeedCurve can go even deeper by distinguishing between first- and third-party scripts.

That way, I can see more detail about what’s happening on the Optimizely side of things.

Monitoring Blocking Scripts

With that in place, SpeedCurve actually lets me track all the resources from a specific third-party source in a custom graph that offers me many more data points to evaluate. For example, I can dive into scripts that come from Optimizely with a set of custom filters to compare them with overall requests and sizes.

This provides a nice way to compare the impact of different third-party scripts that represent blocking and long tasks, like how much time those long tasks represent.

Or perhaps you can single out which of these sources are actually render-blocking.

These are the kinds of tools that allow us to identify bottlenecks and make a case for optimizing them or removing them altogether. SpeedCurve allows me to monitor this over time, giving me better insight into the performance of those assets.

Monitoring Interaction to Next Paint

There’s going to be a new way to gain insights into main thread traffic when Interaction to Next Paint (INP) becomes a stable Core Web Vitals metric in March 2024. It replaces the First Input Delay (FID) metric.

What’s so important about that? Well, FID has been used to measure load responsiveness, which is a fancy way of saying it looks at how quickly the browser responds to the first user interaction on the page. And by interaction, we mean some action the user takes that triggers an event, such as a click, mousedown, keydown, or pointerdown event. FID measures the time between the moment the user sparks an interaction and the moment the browser begins to process — or respond to — that input.

FID might easily be overlooked when trying to diagnose long tasks on the main thread because it looks at the amount of time a user spends waiting after interacting with the page rather than the time it takes to render the page itself. It can’t be replicated with lab data because it’s based on a real user interaction. That said, FID is correlated to TBT in that the higher the FID, the higher the TBT, and vice versa. So, TBT is often the go-to metric for identifying long tasks because it can be measured with lab data as well as real-user monitoring (RUM).

But FID is fraught with limitations, the most significant perhaps being that it’s only a measure of the first interaction. That’s where INP comes into play. Instead of measuring the first interaction and only the first interaction, it measures all interactions on a page. Jeremy Wagner has a more articulate explanation:

“The goal of INP is to ensure the time from when a user initiates an interaction until the next frame is painted is as short as possible for all or most interactions the user makes.”
— Jeremy Wagner

Some interactions are naturally going to take longer to respond than others. So, we might think of FID as merely a first impression of responsiveness, whereas INP is a more complete picture. And like FID, the INP score is closely correlated with TBT but even more so, as Annie Sullivan has reported.

Thankfully, performance tools are already beginning to bake INP into their reports. SpeedCurve is indeed one of them, and its report shows how its RUM capabilities can be used to illustrate the correlation between INP and long tasks on the main thread. This correlation chart illustrates how INP gets worse as the total long tasks’ time increases.

What’s cool about this report is that it is always collecting data, providing a way to monitor INP and its relationship to long tasks over time.

Not All Scripts Are Created Equal

There is such a thing as a “good” script. It’s not like I’m some anti-JavaScript bloke intent on getting scripts off the web. But what constitutes a “good” one is nuanced.

Who’s It Serving?

Some scripts benefit the organization, and others benefit the user (or both). The challenge is balancing business needs with user needs.

I think web fonts are a good example that serves both needs. A font is a branding consideration as well as a design asset that can enhance the legibility of a site’s content. Something like that might make loading a font script or file worth its cost to page performance. That’s a tough one. So, rather than fully eliminating a font, maybe it can be optimized instead, perhaps by self-hosting the files rather than connecting to a third-party domain or only loading a subset of characters.

Analytics is another difficult choice. I removed analytics from my personal site long ago because I rarely, if ever, looked at them. And even if I did, the stats were more of an ego booster than insightful details that helped me improve the user experience. It’s an easy decision for me, but not so easy for a site that lives and dies by reports that are used to identify and scope improvements.

If the script is really being used to benefit the user at the end of the day, then yeah, it’s worth keeping around.

When Is It Served?

A script may very well serve a valid purpose and benefit both the organization and the end user. But does it need to load first before anything else? That’s the sort of question to ask when a script might be useful, but can certainly jump out of line to let others run first.

I think of chat widgets for customer support. Yes, having a persistent and convenient way for customers to get in touch with support is going to be important, particularly for e-commerce and SaaS-based services. But does it need to be available immediately? Probably not. There’s a stronger case for getting the site into a state the user can interact with than for getting a third-party widget front and center. There’s little point in rendering the widget if the rest of the site is inaccessible anyway. It is better to get things moving first by prioritizing some scripts ahead of others.

Where Is It Served From?

Just because a script comes from a third party doesn’t mean it has to be hosted by a third party. The web fonts example from earlier applies. Can the font files be self-hosted rather than requiring yet another outside connection? It’s worth asking. There are self-hosted alternatives to Google Analytics, after all. And even GTM can be self-hosted! That’s why grouping first and third-party scripts in SpeedCurve’s reporting is so useful: spot what is being served and where it is coming from and identify possible opportunities.

What Is It Serving?

Loading one script can bring unexpected visitors along for the ride. I think the classic case is a third-party script that loads its own assets, like a stylesheet. Even if you think you’re only loading one stylesheet (your own), it’s very possible that a script loads additional external stylesheets, all of which need to be downloaded and rendered.

Getting JavaScript Off The Main Thread

That’s the goal! We want fewer cars on the road to alleviate traffic on the main thread. There are a bunch of technical ways to go about it. I’m not here to write up a definitive guide of technical approaches for optimizing the main thread, but there is a wealth of material on the topic.

I’ll break down several different approaches and fill them in with resources that do a great job explaining them in full.

Use Web Workers

A web worker, at its most basic, allows us to establish separate threads that handle tasks off the main thread. Web workers run parallel to the main thread. There are limitations to them, of course, most notably not having direct access to the DOM and being unable to share variables with other threads. But using them can be an effective way to re-route traffic from the main thread to other streets, so to speak.
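Here’s a bare-bones sketch of the hand-off. The file name and message shape are made up, but the pattern is always the same: post data to the worker, let it crunch away in parallel, and handle the result back on the main thread:

```js
// main.js -- keep the main thread free by delegating the heavy work.
const worker = new Worker('heavy-work.js');   // 'heavy-work.js' is a placeholder file

worker.postMessage({ numbers: [3, 1, 4, 1, 5, 9, 2, 6] });

worker.onmessage = (event) => {
  console.log('Sum computed off the main thread:', event.data.sum);
};

// heavy-work.js -- runs in its own thread, with no access to the DOM.
// self.onmessage = (event) => {
//   const sum = event.data.numbers.reduce((total, n) => total + n, 0);
//   self.postMessage({ sum });
// };
```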

Split JavaScript Bundles Into Individual Pieces

The basic idea is to avoid bundling JavaScript as a monolithic concatenated file in favor of “code splitting” or splitting the bundle up into separate, smaller payloads to send only the code that’s needed. This reduces the amount of JavaScript that needs to be parsed, which improves traffic along the main thread.
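In practice, that usually means leaning on dynamic import(), which bundlers like webpack, Rollup, and Vite turn into a separate chunk that is only fetched when the code actually runs. A small sketch (the module and selector are placeholders):

```js
// The checkout bundle is only downloaded, parsed, and executed on demand.
document.querySelector('#checkout-button')?.addEventListener('click', async () => {
  const { startCheckout } = await import('./checkout.js'); // './checkout.js' is a placeholder
  startCheckout();
});
```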

Async or Defer Scripts

Both are ways to load JavaScript without blocking the DOM. But they are different! Adding the async attribute to a <script> tag will load the script asynchronously, executing it as soon as it’s downloaded. That’s different from the defer attribute, which is also asynchronous but waits until the DOM is fully loaded before it executes.

Preconnect Network Connections

I guess I could have filed this with async and defer. That’s because preconnect is a value on the rel attribute that’s used on a <link> tag. It gives the browser a hint that you plan to connect to another domain. It establishes the connection as soon as possible prior to actually downloading the resource. The connection is done in advance, allowing the full script to download later.
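The hint normally lives as a link tag with rel="preconnect" in the document head, but the same hint can also be added from JavaScript, which is handy when the third-party domain is only known at runtime. A small sketch with a placeholder domain:

```js
// Warm up the connection (DNS, TCP, TLS) before the resource is requested.
const hint = document.createElement('link');
hint.rel = 'preconnect';
hint.href = 'https://third-party.example.com';  // placeholder domain
hint.crossOrigin = 'anonymous';                 // needed when the resource is fetched with CORS (e.g., fonts)
document.head.appendChild(hint);
```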

While it sounds excellent — and it is — pre-connecting comes with an unfortunate downside in that it exposes a user’s IP address to third-party resources used on the page, which can raise GDPR compliance issues. There was a bit of an uproar over that when it came to light that loading Google Fonts from Google’s servers carries the same risk.

Non-Technical Approaches

I often think of a Yiddish proverb I first saw in Malcolm Gladwell’s Outliers, however many years ago that came out:

To a worm in horseradish, the whole world is horseradish.

It’s a more pleasing and articulate version of the saying that goes, “To a carpenter, every problem looks like a nail.” So, too, it is for developers working on performance. To us, every problem is code that needs a technical solution. But there are indeed ways to reduce the amount of work happening on the main thread without having to touch code directly.

We discussed earlier that performance is not only a developer’s job; it’s everyone’s responsibility. So, think of these as strategies that encourage a “culture” of good performance in an organization.

Nuke Scripts That Lack Purpose

As I said at the start of this article, there are some scripts on the projects I work on that I have no idea what they do. It’s not because I don’t care. It’s because GTM makes it ridiculously easy to inject scripts on a page, and more than one person can access it across multiple teams.

So, maybe compile a list of all the third-party and render-blocking scripts and figure out who owns them. Is it Dave in DevOps? Marcia in Marketing? Is it someone else entirely? You gotta make friends with them. That way, there can be an honest evaluation of which scripts are actually helping and which ones are critical enough to keep.

Bend Google Tag Manager To Your Will

Or any tag manager, for that matter. Tag managers have a pretty bad reputation for adding bloat to a page. It’s true; they can definitely make the page size balloon as more and more scripts are injected.

But that reputation is not totally warranted because, like most tools, you have to use them responsibly. Sure, the beauty of something like GTM is how easy it makes adding scripts to a page. That’s the “Tag” in Google Tag Manager. But the real beauty is that convenience, plus the features it provides to manage the scripts. You know, the “Manage” in Google Tag Manager. It’s spelled out right on the tin!

Wrapping Up

Phew! Performance is not exactly a straightforward science. There are objective ways to measure performance, of course, but if I’ve learned anything about it, it’s that subjectivity is a big part of the process. Different scripts are of different sizes and consist of different resources serving different needs that have different priorities for different organizations and their users.

Having access to a free reporting tool like Lighthouse in DevTools is a great start for diagnosing performance issues by identifying bottlenecks on the main thread. Even better are paid tools like SpeedCurve to dig deeper into the data for more targeted insights and to produce visual reports to help make a case for performance improvements for your team and other stakeholders.

While I wish there were some sort of silver bullet to guarantee good performance, I’ll gladly take these and similar tools as a starting point. Most important, though, is having a performance game plan that is served by the tools. And Vitaly’s front-end performance checklist is an excellent place to start.

What Is Google’s INP Score and How to Improve It in WordPress

Are you wondering what Google’s INP score is and how to improve it on your WordPress website?

Interaction to Next Paint (INP) is a Core Web Vitals metric from Google. Improving this score will make your website feel more responsive to your users.

In this article, we will explain what Google’s INP score is and show you how to improve it in WordPress.

Here is a quick overview of the topics we will cover in this guide:

What Are Google Core Web Vitals?

Google Core Web Vitals are website performance metrics that Google considers important for overall user experience. These web vital scores are part of Google’s overall page experience score, which will impact your SEO rankings.

These metrics are useful because, even if your WordPress website loads fast, it may not be fully functional for users. Even if a page has loaded, a visitor might not be able to do what they want or access the information they need.

Core Web Vitals are designed to help with this. They let you measure how quickly your website loads, becomes visible, and is ready for your visitors to use.

To do that, Google uses three quality tests:

  • Largest Contentful Paint (LCP)
  • First Input Delay (FID)
  • Cumulative Layout Shift (CLS)

You can learn more about these tests in our ultimate guide on how to optimize Core Web Vitals for WordPress.

However, Google is replacing FID with a new test called INP (Interaction to Next Paint).

This change currently has the status of ‘Pending’ and will be finalized in March 2024. This gives you time to prepare so that your SEO rankings are not impacted, and we will show you how later in this article.

What Is Google INP?

INP stands for ‘Interaction to Next Paint’. It is a new Google Core Web Vital metric that measures the user interactions that cause delays on your website.

The INP test measures how long it takes between a user interacting with your website, like clicking on something, and your content visually updating in response. This visual update is called the ‘next paint’.

For example, a user might submit a contact form on your site, click on a button, or select an image that opens in a lightbox. The INP test will measure the time taken between the user performing these interactions and actually seeing the updated content on your website.

The Google test then comes up with a single INP score based on the duration of most user interactions on your website. The score will either be ‘Good’, ‘Needs Improvement’, or ‘Poor’, depending on how long your website takes to update visually.

Why Is Google Changing the FID Metric to INP?

The current FID test measures how quickly your website responds to the first user input after the page loads, such as a mouse click or keyboard press. It does this by measuring the time between the first input from the user and when your website starts to act on that input.

In other words, it measures how responsive your website is when it first loads and the first impression that it gives to real users.

However, this metric isn’t as helpful as it could be. There are two limitations to the FID test:

  1. It only measures the first user interaction, not all of them.
  2. It only measures until the website starts to process the interaction, not when the user can actually see the visual feedback on the screen.

So Google is changing the test to give a more complete picture of the overall responsiveness of a web page. INP will consider all of the interactions a user makes during the entire time they spend on the page, not just the first one.

How to Measure Google INP Score in WordPress

The easiest way to test your Google Core Web Vitals score is by using the PageSpeed Insights tool. Simply enter the URL you want to test and click the ‘Analyze’ button.

Analyzing a Web Page for Page Speed Insights

The tool will analyze the web page for a few seconds and then show you the test results.

Note: You can also view Core Web Vitals using DebugBear’s Free Website Speed Test or Site Speed Chrome Extension, which are preferred by some developers.

Now, along with other Google Core Web Vitals, you will also see the page’s Interaction to Next Paint (INP) score.

There will be different scores for mobile and desktop users.

Page Insights Results

In the screenshot above, you can see the INP score for desktop users viewing this web page on WPBeginner is 47 ms. The green dot means that this is a good score.

Once you can see the score for your own site, you will probably be wondering how it compares with other websites and whether it needs to be improved.

Google has provided some guidelines for interpreting your INP score:

  • Faster than 200 milliseconds – good responsiveness
  • 200-500 milliseconds – needs improvement
  • Slower than 500 milliseconds – poor responsiveness

Interpreting Your INP Score

Make sure you check your score for both mobile and desktop users and aim for good responsiveness.

You can then improve your INP score by following the guidelines in the sections below.

Case Study: Finding Slow Interactions on Awesome Motive’s Websites

But first, it may be helpful to look at a case study. We have started measuring the INP scores on our brand sites, including All in One SEO, MonsterInsights, and WPForms.

When our team checked our website’s INP scores, the initial results showed that our most popular pages needed improvement.

Using the Chrome User Experience Report (CrUX) dashboard, we could see that:

  • 80% of our sessions were rated ‘good’
  • 12% of our sessions were rated ‘needs improvement’
  • 8% of our sessions were rated ‘poor’

Now, we don’t yet know which specific interactions on our pages are slow and need to be optimized. This information isn’t provided by Google while testing.

That means that next, we will need to run our own tests to find slow interactions on pages with lower INP scores. This is a detailed and advanced task that is best performed by a developer.

It is done by going to each page that needs improvement and then testing each interaction with actual clicks, taps, and key presses. These need to be timed and evaluated using tools.

The Chrome Developers Blog lists a number of tools that can be used for testing, such as the Chrome Web Vitals extension and the new timespan mode in the Lighthouse Panel in DevTools. You can also see Google’s article on how to debug using the Web Vitals extension.
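
If you also want to collect this data from real visitors rather than relying on manual testing alone, one lightweight option is Google's open-source web-vitals JavaScript library. The snippet below is only a minimal sketch (it assumes you can add a small script to your theme or via a snippets plugin) that logs each visitor's INP value to the console; in practice, you would send it to your analytics instead.

// Minimal sketch: log INP from real visits using Google's web-vitals library.
import { onINP } from 'web-vitals';

onINP((metric) => {
  // metric.value is the interaction latency in milliseconds,
  // metric.rating is 'good', 'needs-improvement', or 'poor'.
  console.log('INP:', metric.value, metric.rating);
});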

It’s important to note that the sessions with lower ratings most likely took place on slower devices or connections. That means that while testing, it is recommended to throttle your browser’s speed, or you may not spot the slow interactions.

You can do that using Chrome’s Inspect Element feature by going to View » Developer » Inspect Elements. You can switch to the ‘Network’ tab and select a throttling option from the dropdown menu.

Using Chrome Inspect Elements to Throttle Your Browser

Once you have found the INP scores for your pages, you can use the tips in the next section of this tutorial to improve them.

How to Improve Google INP Score in WordPress

Most of the INP score optimization work will need to be done by developers. That includes the authors of the theme and plugins you use on your website, plus the developers of any custom JavaScript you are running.

That’s because the INP score is mostly related to the time required to perform JavaScript interactions on your website.

For example, when a user clicks a button, some JavaScript code is run to perform the function expected by clicking the button. This code is downloaded to the user’s computer and runs in their web browser.

To optimize your INP score, the delays that happen during JavaScript user interactions must be reduced. There are three components to this delay:

  1. Input delay, which happens when background tasks on the page prevent the event handler from running right away.
  2. Processing time, which is the time required to run event handlers in JavaScript.
  3. Presentation delay, which is the time required to recalculate the page and paint the page content on the screen.
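
For developers who want to see these three components on a live page, the Event Timing API that INP builds on exposes the relevant timestamps. The following is a rough diagnostic sketch (Chromium-based browsers only), not something required by this tutorial:

// Log a breakdown of each slow interaction into the three delay components.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const inputDelay = entry.processingStart - entry.startTime;
    const processingTime = entry.processingEnd - entry.processingStart;
    const presentationDelay = entry.startTime + entry.duration - entry.processingEnd;
    console.log(entry.name, { inputDelay, processingTime, presentationDelay });
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });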

As a website owner, there are steps you can take to improve the first and third delays. We will show you how in the next section.

However, to make real improvements to your INP score, you will need to improve the second delay, which is the processing time of the code itself. That’s not something that you can do yourself.

The developers of your WordPress theme, plugins, and custom JavaScript may need to optimize their code to give feedback to your users immediately. The good news is they are probably already working on this to meet the March 2024 deadline.

We offer some specific tips for developers with examples later in this article.

How Website Owners Can Optimize Their Sites for INP

While the most significant impact on your website’s INP score will come from developers optimizing their code, there are a few things that website owners can do.

In particular, you can make sure that your users’ mouse clicks and keystrokes are recognized as soon as possible by optimizing background processes on your site. Also, you can make sure the response to their input is displayed on the screen as quickly as possible.

Here are some steps you can take to achieve that.

1. Make Sure You Are Running the Latest Version of WordPress

The first thing you should do is make sure you are running the latest version of WordPress.

That’s because WordPress versions 6.2 and 6.3 introduced significant performance improvements. These will improve your website’s performance on the server side and client side, which will improve your INP score.

For detailed instructions, you can see our guide on how to safely update WordPress.

2. Optimize Background Processes in WordPress

Background processes are scheduled tasks in WordPress that run in the background. They might include checking for WordPress updates, publishing scheduled posts, and backing up your website.

If your website gets too busy running these background tasks, then it may not realize right away that the user has clicked the mouse or pressed a key, resulting in a poor INP score.

You may be able to configure your background scripts and plugins to reduce the amount of work they are doing, placing less strain on your website. Otherwise, you might be able to run them only when they are needed instead of leaving them running in the background.

For detailed instructions, you can see the Optimize Background Processes section of our ultimate guide on how to boost WordPress speed and performance.

3. Check the PageSpeed Insights Performance Recommendations

After you run the PageSpeed Insights test on your website, you can scroll down to the Performance section of the test results.

Here, you will find some opportunities to improve your site’s performance along with the estimated time savings if you follow the advice.

PageSpeed Insights Performance Opportunities and Diagnostics

For example, you may see recommendations to eliminate render-blocking resources. You can do this by following our guide on how to fix render-blocking JavaScript and CSS in WordPress.

You may also see a recommendation to reduce unused JavaScript. You will find a setting to do this in many of the best WordPress caching plugins, such as WP Rocket.

4. Minify JavaScript in WordPress

JavaScript needs to be downloaded to the user’s computer before it can be run. By making your JavaScript files as small as possible, you can make some small gains in performance.

Minifying your JavaScript makes the files smaller by removing white spaces, lines, and unnecessary characters from the source code.

This won’t have a dramatic effect on your performance, but if you are looking to shave a few extra milliseconds off your INP score, then you may find it worthwhile.

WP Rocket minify JavaScript files

To learn how, you can see our guide on how to minify CSS and JavaScript files in WordPress.

How Developers Can Optimize Their Code for INP

If you are a developer, then the biggest INP score gains will come from optimizing your code. Here are a few things you can do.

1. Visually Acknowledge User Input Immediately

Here’s the one thing that will make the most difference when optimizing your code’s INP score: You need to give visual feedback to all user input immediately.

The user should see right away that their input has been recognized and that you are acting on it. This will make your code feel more responsive to the user and result in a great INP score.

Here are a few examples:

  • If a user clicks on an element, then you should display something that shows that the element was clicked.
  • If a user submits a form, then you need to immediately display something to acknowledge that, such as a message or spinner.
  • If a user clicks on an image to open it in a lightbox, then don’t just wait for the image to load. Instead, you should show a demo image or spinner immediately. Then, when the image is loaded, you can display it in the lightbox.

More than anything else, this will improve your INP score, especially if you need to do heavy JavaScript processing in response to user input.

Just make sure you update the UI before starting the task. After that, you can do the CPU-heavy work in a setTimeout callback or on a separate thread using a web worker, and then finally present the results to the user.
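
As a rough illustration of that pattern, here is a sketch that acknowledges the click right away and then hands the heavy work to a web worker. The file name heavy-task.js and the message format are hypothetical and only serve the example:

// Main thread: acknowledge the input immediately, then offload the heavy work.
button.addEventListener('click', () => {
  button.textContent = 'Processing...';

  const worker = new Worker('heavy-task.js'); // hypothetical worker file
  worker.postMessage({ iterations: 10000000 });

  worker.onmessage = (event) => {
    // Present the result once the worker finishes.
    button.textContent = 'Done! Result: ' + event.data;
    worker.terminate();
  };
});

// Inside heavy-task.js (hypothetical):
// self.onmessage = (event) => {
//   let sum = 0;
//   for (let i = 0; i < event.data.iterations; i++) {
//     sum += i;
//   }
//   self.postMessage(sum);
// };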

Once you get that right, there are a few more things you can do to optimize your code.

2. Optimize Where the Browser Spends Most of Its Time

The next thing you should do is investigate where the browser is spending most of its time and then optimize those parts.

In Google Chrome, when you navigate to View » Developer » Developer Tools » Performance, it is possible to inspect the JavaScript functions and event handlers that are blocking the next paint.

With that knowledge, you can see what can be optimized in order to reduce the time until the next paint after user interaction.

3. Reduce Your Layouts

Sometimes, a lot of CPU activity consists of layout work.

When that happens, you should check to see if you can reduce the number of relayout functions in your code.

4. Show Above-the-Fold Content First

If rendering the page contents is slow, then your INP score may be affected.

You can consider showing only important ‘above-the-fold’ content first to deliver the next frame more quickly.

Examples of Good JavaScript Coding Practices for Developers

It may be helpful to show you some examples of how bad code can result in a poor INP score.

We put together an example project on CodePen that you can experiment with. You can examine our sample code, read our short explanations, and see the difference it makes by clicking the buttons.

Here’s an animation from that CodePen project. You can see that the unoptimized sample code results in a poor INP score of 965 milliseconds. The button press will feel laggy to users.

By contrast, the optimized code updates the button text immediately, resulting in the best possible INP score.

Animation of CodePen Example Project for Optimizing INP Score

Read on to see four examples of how you can improve your code to optimize the INP score.

Example 1: Update the Screen Before Running a Heavy CPU Task

CPU-heavy tasks take time, and this can lead to poor INP scores unless you write good code. In this case, it’s best to update the screen before running that task.

Here is a bad example where the user interface is updated after a heavy CPU task. This results in a high INP:

// Bad example
button.addEventListener('click', () => {
  // Heavy CPU task
  for (let i = 0; i < 10000000; i++) {
    console.log(i);
  }
  // UI update
  button.textContent = 'Clicked!';
});

In this improved example, the user interface is updated immediately when the button is clicked.

After that, the heavy CPU task is moved to a setTimeout callback:

// Better example
button.addEventListener('click', () => {
  // UI update
  button.textContent = 'Processing...';

  // Heavy CPU task
  setTimeout(() => {
    for (let i = 0; i < 10000000; i++) {
      console.log(i);
    }
    // Final UI update
    button.textContent = 'Done!';
  }, 0);
});

This allows the browser to update the screen before starting the slow task, resulting in a good INP score.

Example 2: Schedule Non-Urgent Processing

You should also make sure that you don’t run non-urgent or non-essential work in a script immediately when it may delay the response the user is expecting.

You should start by updating the page immediately to acknowledge the user’s input. After that, you can use requestIdleCallback to schedule the rest of the script when there is free time at the end of a frame or when the user is inactive.

Here is an example:

button.addEventListener('click', () => {
  // Immediate UI update
  button.textContent = 'Processing...';

  // Non-essential processing
  window.requestIdleCallback(() => {
    // Perform non-essential processing here...
    button.textContent = 'Done!';
  });
});

This will make the web page feel more responsive to the user and get you a better INP score.

Example 3: Schedule a Function to Run Before the Next Paint

You can also use requestAnimationFrame to schedule a function to run before the next repaint:

button.addEventListener('click', () => {
  // Immediate UI update
  button.textContent = 'Processing...';

  // Visual update
  window.requestAnimationFrame(() => {
    // Perform visual update here...
    button.style.backgroundColor = 'green';
    button.textContent = 'Done!';
  });
});

This can be useful for animations or visual updates in response to user interactions.

Again, you should give the user feedback by acknowledging their input immediately.

Example 4: Avoid Layout Thrashing

Layout thrashing occurs when you repeatedly read and write to the DOM (Document Object Model), causing the browser to recalculate the layout multiple times.

Here is an example of layout thrashing:

// Bad example
elements.forEach(element => {
  const height = element.offsetHeight; // read
  element.style.height = height + 'px'; // write
});

This can be avoided by batching your reads and writes.

This is a better example:

// Good example
const heights = elements.map(element => element.offsetHeight); // batched read
elements.forEach((element, index) => {
  element.style.height = heights[index] + 'px'; // batched write
});

We hope this tutorial helped you learn how to improve your Google INP score in WordPress. You may also want to see our ultimate guide to WordPress SEO or our expert picks for the best WordPress SEO plugins and tools.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post What Is Google’s INP Score and How to Improve It in WordPress first appeared on WPBeginner.

Collective #747

Conditional CSS

Ahmad Shadeed goes over a few CSS features that we use every day, and shows how conditional they are.

Read it


Why Not document.write()?

This article by Harry Roberts provides a comprehensive examination of the reasons why using the document.write() JavaScript method should be avoided.

Check it out

Loops and Repetition

In the latest edition of Offscreen Canvas, Daniel Velasquez examines the technique of using sin/cosine for looping.

Read it


3D in CSS

In this piece from Brad Woods’ Digital Garden collection, readers will learn the proper techniques for creating a 3D space and manipulating the translation and rotation of an element using CSS.

Read it

Stylized Low Poly

Bruno Simon utilizes 3D Coat to create stunning, low poly “stylized” and “hand-painted” models reminiscent of the popular game World of Warcraft.

Check it out

Core Web Vitals Tools To Boost Your Web Performance Scores

The success of your website depends on the impression it leaves on its users. By optimizing your Core Web Vitals scores, you can gauge and improve user experience. Essentially, a web vital is a quality standard for UX and web performance set by Google. Each web vital represents a discrete aspect of a user’s experience. It can be measured based on real data from users visiting your sites (field metric) or in a lab environment (lab metric).

In fact, several user-centric metrics are used to quantify web vitals, and they keep evolving: there have been conversations around adding accessibility and responsiveness as web vitals as well. Core Web Vitals are just a part of this larger set of vitals.

It’s worth mentioning that good Core Web Vitals scores don’t necessarily mean that your website scores in the high 90s on Lighthouse. You might have a pretty suboptimal Lighthouse score while having green Core Web Vitals scores. Ultimately, for now, it seems that only the latter contribute to SEO ranking — both on mobile and on desktop.

While most of the tools covered below rely only on field metrics, others use a mix of both field and lab metrics.

PageSpeed Compare

PageSpeed Compare is a page speed evaluation and benchmarking tool. It measures the web performance of a single page using Google PageSpeed Insights. It can also compare the performance of multiple pages of your site or those of your competitors’ websites. It evaluates lab metrics, field metrics, page resources, DOM size, CPU time, and potential savings for a website. PageSpeed Compare measures vitals like FCP, LCP, FID, CLS, and others using lab and field data.

The report it generates lists the resources loaded by a page, the overall size for each resource type category, and the number of requests made for each type. Additionally, it examines the number of third-party requests and resources a page makes. It also lists cached resources and identifies unused JavaScript. PageSpeed Compare checks the DOM of the page and breaks down its size, complexity, and children. It also identifies unused images and layout shifts in a graph.

When it comes to CPU time, the tool breaks down the CPU time spent on various tasks, JavaScript execution time, and CPU blocking. Lastly, it recommends optimizations you can make to improve your page. It graphs server, network, CSS, JavaScript, critical content, and image optimizations to show the potential savings you could gain by incorporating fixes into your site. It gives resource-specific suggestions you could follow to optimize the performance of your page. For example, it could recommend that you remove unused CSS and show you the savings this would give in a graph.

PageSpeed Compare provides web performance reports in a dashboard-like overview with a set of graphs. You can compare up to 12 pages at once, and the tool presents the reports in a simple and readable way since it uses PageSpeed Insights to generate them. Network and CPU are throttled for lab data tests to create more realistic conditions.

Bulk Core Web Vitals Check

Experte's Bulk Core Web Vitals Check is a free tool that crawls up to 500 pages of an entire domain and provides an overview of their Core Web Vitals scores. Once the tool has crawled all the pages, it performs a Core Web Vitals check for each page and returns the results in a table. Running the test takes a while, as each web page is tested one at a time, so it’s a good idea to let it run for 15-30 minutes to get your results.

What’s the benefit? You get a full overview of the pages that perform best and the pages that perform worst — and you can compare the values over time. Under the hood, the tool uses PageSpeed Insights to measure Core Web Vitals.

You can export the results as a CSV file for Excel, Google Sheets or Apple Pages. The table format in which the results are returned makes it easy to compare web vitals across different pages. The tests can be run for both mobile and desktop.

Alternatively, you can also check David Gossage's article on How to review Core Web Vitals scores in bulk, in which he shares the scripts and how to get an API key to run the script manually without any external tools or services.

Treo

If you’re looking for a slightly more advanced option for bulk Core Web Vitals check, this tool will cover your needs well. Treo Site Speed also performs site speed audits using data from the Chrome UX Report, Lighthouse and PageSpeed Insights.

The audits can be performed across various devices and network conditions. With Treo, you can also track the performance of all the pages in your sitemap and even set up alerts for performance regressions. Additionally, you can receive monthly updates on your website’s performance.

With Treo Site Speed, you can also benchmark a website against competitors. The reports Treo generates are comprehensive, broken down by devices and geography. They are granular and available at domain and page levels. You can export the reports or access their data using an API. They are also shareable.

WebPageTest Core Web Vitals Test

WebPageTest is, of course, a performance testing suite on its own. Yet one of the useful features it provides is a detailed breakdown of Core Web Vitals metrics and pointers to problematic areas and how to fix them.

There are also plenty of Core Web Vitals-related details in the actual performance audit, along with suggestions for improvements which you can turn on without changing a line of code. For some, you will need a pro account though.

Cumulative Layout Shift Debuggers

Basically, the CLS Debugger helps you visualize CLS. It uses the Layout Instability API in Chromium to load pages and calculate their CLS. The CLS is calculated for both mobile and desktop devices and takes a few minutes to complete. The network and CPU are throttled during the test, and the pages are requested from the US.

The CLS debugger generates a GIF image with animations showing how the viewport elements shift. The generated GIF makes layout shifts easy to visualize in practice. The elements that contribute most to CLS are marked with squares so that you can see their size and movement, and they are also listed in a table together with their CLS scores.


CLS debugger in action: highlighting the shifts frame by frame.

Although the CLS is calculated as a lab metric initially, the CLS debugger receives CLS measurements from the Chrome UX Report as well. The CLS, then, is a rolling average of the past 28 days. The CLS debugger allows you to ignore cookie interstitials — plus, you can generate reports for specific countries, too.
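
If you just want a quick look at the raw data these tools build on, the Layout Instability API can be observed directly in a Chromium-based browser. This is only a small diagnostic sketch, and note that the official CLS metric groups shifts into session windows, so the running total here is an approximation:

// Log individual layout shifts and a rough running total.
let runningTotal = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Ignore shifts that happen right after user input, as CLS does.
    if (!entry.hadRecentInput) {
      runningTotal += entry.value;
      console.log('Layout shift:', entry.value, 'running total:', runningTotal);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });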

Alternatively, you can also use the Layout Shift GIF Generator. The tool is available on its webpage or as a command line tool. With the CLI tool, you can specify additional options, such as the viewport width and height, cookies to supply to the page, the GIF output options, and the CLS calculation method.

Polypane Web Vitals

If you want to keep your Core Web Vitals scores nearby during development, Polypane Web Vitals is a fantastic feature worth looking into. Polypane is a standalone browser for web development, that includes tools for accessibility, responsive design and, most recently, performance and Core Web Vitals, too.

You can automatically gather Web Vitals scores for each page, and these are then shown at the bottom of your page. The tool also provides LCP visualization, and shows layout shifts as well.

Notable Mentions
  • Calibre’s Core Web Vitals Checker allows you to check Core Web Vitals for your page with one click. It uses data from the Chrome UX Report and measures LCP, CLS, FID, TTFB, INP and FCP.

Web Performance Testing — What, Why, How of Core Web Vitals

A website needs to be constantly tested and optimized to stay in line with Google's web and SEO guidelines. As a result, it gains an advantage over others in terms of visibility, brand image, and driving traffic. However, to assess a website's performance tactically, it needs to be measured in a well-thought-out manner. Core Web Vitals is a key set of performance metrics that analyzes a website's performance by investigating real data and provides a strategic platform for scaling up the website's user experience. In this article, you will learn about web performance testing and how Core Web Vitals plays a crucial and strategic part in it.

What Is Web Performance Testing?

Web performance testing is executed to provide accurate information about an application's readiness by monitoring the server-side application and testing the website. This is done by simulating load that matches real-world conditions, so that you can confirm the application under evaluation can support the expected load.

Core Web Vitals only shows a sampling of URLs?

The Coverage section of Google Search Console shows 206K valid URLs, of which 174K are submitted and indexed, nearly all of which are Q&A pages. The remaining 32K are indexed but not submitted in the sitemap.

However, the Core Web Vitals section only shows data on 28K URLs. In the Enhancements section, it says there are only 27K valid Q&A items.

What happened to the other 150K URLs?

Signals For Customizing Website User Experience

In my last article, I suggested using the SaveData API to deliver a different, more performant, experience to users that expressed that desire. This hopefully leads to a greater experience for all users. In this article, I want to spend a bit more time on this, and also look at other signals we can similarly use to help us make decisions on what to load on our websites.

That’s not to say the extraneous content is pointless — enhanced design and user interfaces can have an important impact on the brand of a website, and delightful little extras can really impact your users' relationship with your site. It’s when the cost of those “extras” starts to negatively impact your users’ experience of the site that you should consider how essential they are, and whether they can be turned off for some users.

Save Data API

Let’s have a quick recap on the Save Data API. That user preference is available in two (hopefully soon to be three!) ways:

  1. A Save-Data header is sent on each HTTP request.
    This allows dynamic backends to change the HTML returned.
  2. The NetworkInformation.saveData JavaScript API.
    This allows client-side JavaScript to check this and act accordingly.
  3. The upcoming prefers-reduced-data media query, which allows CSS to set different options depending on this setting.
    This is available behind a flag in Chrome, but not yet on by default while it finishes standardization.
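
As a small sketch of the first two options, the client-side check is a one-liner; the save-data class name used here is just an illustrative convention, not a standard:

// Client-side check (Chromium-based browsers only; see the note below).
const saveData = navigator.connection && navigator.connection.saveData;

if (saveData) {
  // Example: flag the document so scripts and styles can skip optional extras.
  document.documentElement.classList.add('save-data');
}

// On the server, the same preference arrives as an HTTP request header:
//   Save-Data: on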

Note: At the time of writing, the Save Data API, and in fact all the options we’ll talk about in this article, are only available in Chromium-based browsers (Chrome, Edge, Opera…etc.). This is a bit of a shame, as I believe they are useful for websites. If you believe the same, then let the other browsers know you want them to support this too. All of these are on various standard tracks rather than being proprietary Chrome APIs, so they can be implemented by other browsers (Safari and Firefox) if the demand is there. However, later in this article, I’ll explain why it’s perhaps more important that they are supported in Chromium-based browsers — and Chrome in particular.

Perhaps confusingly, iOS does have a Low Data mode, though that is used by iOS itself to reduce background tasks using data, and it is not exposed to the browser to allow websites to take advantage of that (even for Chrome on iOS which is more a skin on top of Safari than the full Chrome browser).

Websites can act on the Save Data preference to give a lighter website to… well… save the user’s data! This is helpful for those on poor or expensive networks, so they don’t have to pay an exorbitant cost just to visit your website. This setting is used by users in poorer countries, but it is also used by those with a capped data plan that might be running out just before the monthly renewal, or by those traveling, where roaming charges can be a lot more expensive than at home.

And Is It Used?

Again, I talked about this in that previous article, and the answer is a resounding yes! Approximately two-thirds of Indian mobile Chrome users of Smashing Magazine have this setting turned on, for example. Expanding that to look at the top-10 countries by volume of mobile users that support Save Data on this site, we see the following:

Country % Data Saver
India 63%
USA 10%
Philippines 49%
China 0%
UK 35%
Nigeria 55%
Russia 55%
Canada 38%
Germany 35%
Pakistan 51%

Now, there are a few things to note about this. First of all, it’s perhaps no surprise to see high usage of this setting in what are often considered “poorer” countries — over 50% of mobile users having this setting turned on seems common. What’s perhaps more surprising is the relatively high usage, around a third of users, in the likes of the UK, Germany, and France. In short, this is not a niche setting.

I’d love to know why China is so reluctant to use this, if any readers know. Weirdly, these users report as a range of browsers in our analytics, including the Android WebView, Chrome, and Safari (despite it not supporting this!). Perhaps these are imitation phones or other customized builds that do not expose this setting to the end users. If you have any other theories or information on this, I’d love to know, so please drop a message in the comments below.

However, the above table is not actually representative of total traffic, and that’s another point to note about this data. If we compare the top-10 countries that visit SmashingMagazine.com by number of users across four different segments, we see the following:

Rank  All users    Mobile users  Mobile SaveData support  Mobile SaveData on
1     USA          USA           India                    India
2     India        India         USA                      Philippines
3     UK           UK            Philippines              Nigeria
4     Canada       Germany       China                    UK
5     Germany      Philippines   UK                       Russia
6     France       Canada        Nigeria                  USA
7     Russia       China         Russia                   Indonesia
8     Australia    France        Canada                   Pakistan
9     Philippines  Nigeria       Germany                  Brazil
10    Netherlands  Russia        Pakistan                 Canada

All users and mobile users are not too dissimilar, though some of the “poorer” countries like the Philippines and Nigeria are higher up in the table on mobile (desktop usage of this site seems higher in Western countries).

However, looking at those with Save Data support (the same as the first table I showed), it is a completely different view; with India overtaking the USA at the top spot, and the Philippines shooting right up to number three. And finally looking at those with Save Data actually turned on, it’s an unrecognizable ordering compared to the first column.

Using signals like Save Data allows you to help those users that need help the most, compared to traditional analytics of looking at all users or even segmenting by device type.

I mentioned earlier that Save Data is only available in Chromium-based browsers, meaning we’re ignoring Safari users (a sizable proportion of mobile users) and Firefox. However, plenty of research (including the stats for our own site here, and others by the likes of Alex Russell) has shown that Android devices are the platform of choice in poorer countries with slower networks. This is hardly surprising given the cost difference between Android and iOS devices, so using the signals offered only on those devices doesn't mean neglecting half of your user base; it means concentrating on the users that need the most help.

Additionally, as I mentioned in the previous article, the Core Web Vitals initiative being measured only in Chrome browsers (and not other Chromium browsers like Edge or Opera) is putting a spotlight on these users, while at the same time those are the users supporting this API and others to allow you to address them.

So, while I wish there wasn’t this inequality in this world, and while I wish all browsers would support these options better, I still believe that using these options to customize the delivery better is the right thing to do, and the fact that they are only available in Chromium-based browsers at the moment is not a reason to ignore these options.

How To Act Upon Save Data

How exactly websites use this information is entirely up to the website. In the past, Chrome used to perform changes to websites by proxying requests via its servers (similar to how Opera Mini works), but doing that is usually frowned upon these days. With the increase in the use of HTTPS, site content is better secured, in part to avoid such interference (Chrome never performed these automatic optimizations on HTTPS sites, though as the browser it could in theory). Chrome will soon also be sunsetting this automatic altering of content on HTTP sites. So now it’s down to websites to make changes as they see fit if they want to act upon this user signal.

Websites should still deliver the core experience of the website, but drop optional extras. For Smashing Magazine, that involved dropping some of our web fonts. For others, it might involve using smaller images or not loading videos by default. Of course, for web performance reasons, you should always use the smallest images you can, but in these days of high-density mobile screens, many prefer to serve high-quality images to take advantage of those beautiful screens. If a user has indicated that their preference is to save data, you could use that as a signal to drop down a quality level: even if it’s not quite as nice a picture, it still gets the message across.
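
As a small hypothetical sketch of that last point (the data attributes and file naming are made up for illustration), the idea is simply to pick the lighter asset when the preference is set:

// Choose a lower-quality image source for users who asked to save data.
const saveData = navigator.connection && navigator.connection.saveData;

document.querySelectorAll('img[data-src-high][data-src-low]').forEach((img) => {
  // data-src-high / data-src-low are hypothetical attributes for this example.
  img.src = saveData ? img.dataset.srcLow : img.dataset.srcHigh;
});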

Tim Vereecke gave a fantastic talk on some of the Data S(h)aver strategies he uses on his site for users with this Save Data preference, including showing fewer articles by default, loading fewer items when reaching the bottom of infinite scroll pages, removing icon fonts, reducing the number of ads, not auto-playing videos, and loads more tips and tricks, some of which he’s summarised in an accompanying article.

One important point that Tim noted is using Save Data might not always improve performance. Some of the techniques he uses like loading less or turning off prefetching of likely future pages will result in data saving, but with the downside of loading taking a bit longer if users do want to see that content. In general, however, reducing data usually results in web performance gains.

Is Save Data The Only Option?

Save Data is a great API in my opinion, and I wish more sites used it, and more browsers supported it! The fact that the user has explicitly asked sites to send less data means doing that is acting upon their wishes.

The downside of Save Data, however, is that users have to know to enable this. While many Smashing Magazine readers may be more technical and may know about this option or may be comfortable delving into the settings of their browsers, others may not. Additionally, with the aforementioned change of Chrome removing the Save Data browser option, and perhaps switching to using the OS-level option, we may see some changes in its usage (for better or worse!).

So, what can we do to try to help users who don’t have this set? Well, there are a few more signals we can use, as they also might indicate users who might struggle with the full experience of the website. However, as we are making that decision for them (unlike Save Data which is an explicit user choice), we should tread carefully with any assumptions we make. We may even want to highlight to users that they are getting a different experience, and offer them a way of opting out of this. Perhaps this is a best practice even for those using Save Data, as perhaps they’re unaware or had forgotten that they turned this setting on, and so are getting a different experience.

In a similar vein, it’s also possible to offer a Save Data-like experience to all users (even in browsers and operating systems that don’t support it) with a front-end setting, and then perhaps saving this value to a cookie and acting upon that (another trick Tim mentioned in his talk).

For the remainder of this article, I’d like to look at alternatives to Save Data that you can also act upon to customize your sites. In my opinion, these should be used in addition to Save Data, to squeeze a little more on top.

Other User Preference Signals

First up, we will look at preferences that, like Save Data, a user can turn on and off. A new breed of user preference CSS media queries has been launched recently; they are being standardized in the Media Queries Level 5 draft specification, and many are already available in browsers. These allow web developers to change their websites based on various user preferences:

  • prefers-reduced-motion
    This indicates the user would prefer fewer motions, perhaps due to vestibular motion disorders. Adam Argyle has made a point of highlighting that reduced motion != no motion. Just tone it down a bit. If you were acting on the save data option, you wouldn’t hold back all data!
  • prefers-reduced-transparency
    To aid readability for those that find it difficult to distinguish content with translucent backgrounds.
  • prefers-contrast
    Similar to the above, this can be used as a request to increase the contrast between elements.
  • forced-colors
    This indicates the user agent is using a reduced color palette, typically for accessibility reasons, such as Windows High Contrast mode.
  • prefers-color-scheme
    This can be set to light or dark to indicate the user's preferred color scheme.
  • prefers-reduced-data
    The CSS media query version of Save Data mentioned above.

Only some of these have an impact on web performance, which is my area of expertise and the original starting point for this article with Save Data. However, they are all important user preferences — particularly when considering the accessibility implications of motion sensitivity, and the vision issues covered by the transparency, contrast, and even color scheme options. For more information, check out a previous Smashing Magazine article deep-diving into prefers-reduced-motion — the oldest and most well supported of these options.

Network Signals

Returning more to items to optimize web performance, the Effective Connection Type API is a property of the Network Information API and can be queried in JavaScript with the following code (again only in Chromium browsers for now):

navigator.connection.effectiveType;

This then returns one of four string values (4g, 3g, 2g, or slow-2g) — the theory being that you can reduce the network needs when the connection is slower and so give a faster experience even to those on slower networks. There are a few downsides to ECT. The main one is that the definitions of the 4 types are all fixed, and based on quite old network data. The result is that nearly all users now fall into the 4g category, a few into the 3g, and very few into the 2g or slow-2g categories.
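
As a quick sketch of acting on that value (the URL is a placeholder and the cut-off is just an example), you might skip speculative prefetching for the slower categories:

// Skip non-essential prefetching on slower connections (Chromium-based browsers only).
const connection = navigator.connection;
const isSlow = connection &&
  ['slow-2g', '2g', '3g'].includes(connection.effectiveType);

if (!isSlow) {
  // Only hint at the likely next page on faster connections.
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = '/next-article/'; // hypothetical URL
  document.head.appendChild(link);
}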

Returning to our Indian mobile users, who we saw in the last article were getting much worse experiences, 84.2% are reported as 4g, 15.1% 3g, 0.4% 2g, and 0.3% slow-2g. It’s great that technology has advanced so that this is the case, but our dependency on it has grown too, and it does mean that its use as a differentiator of “slower” users is already limited and becoming more so as time goes on. Being able to identify the 16% of slowest users is not to be sniffed at, but it’s a far cry from the 63% of users asking us to Save Data in that region!

There are other options available in the navigator.connection API, but without the simplicity of a small number of categories:

navigator.connection.rtt;
navigator.connection.downlink;

Note: For privacy reasons, these return a rounded number, rather than a precise number, to avoid them being used as a fingerprinting vector. This is why we can’t have nice things. Still, for non-tracking purposes, an imprecise number is all we need anyway.

The other downside of these APIs is that they are only available as a JavaScript API (where it’s thankfully very easy to use), or as a Client Hint HTTP Header (where it’s not as easy to use).

Client Hints HTTP Headers

The Save-Data HTTP header is a simple HTTP Header sent for all requests when a user has this turned on. This makes it nice and easy for backends to use this. However, we can’t get other information like ECT in similar HTTP headers without severely bulking up all requests for web browsing when the vast majority of websites will not use it. It also introduces privacy risks by making available more than we strictly need about our users.

Client Hints are a way to work around those limitations by not sending any of this extra information by default, and instead having websites “opt in” to this information when they will make use of it. They do this by letting browsers know, with the Accept-CH HTTP header, which Client Hint headers the page will make use of. For example, in the response to the initial request, we could include this HTTP header:

accept-ch: ect, rtt, downlink

This can also be included in a meta element in the page contents:

<meta http-equiv="Accept-CH" content="ECT, RTT, Downlink">

This then means that any future requests to this website will include those Client Hint HTTP headers, as well as the usual HTTP headers:

downlink: 10
ect: 4g
rtt: 50

Important! If making use of Client Hints and returning different results for the same URL based on these, do remember to include the client hint headers you are altering content based upon, in your Vary header, so any caches are aware of this and won’t serve the cached page for future visits unless they also have the same client hint headers set.
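
Pulling those pieces together on the server might look something like the sketch below. It uses Node's built-in http module purely as an example, since no particular backend is prescribed here:

// Opt into the ECT hint and vary the cached response on it (Node.js sketch).
const http = require('http');

http.createServer((req, res) => {
  // Ask the browser to send these hints on subsequent requests.
  res.setHeader('Accept-CH', 'ECT, RTT, Downlink');
  // Tell caches that the response depends on the ECT hint.
  res.setHeader('Vary', 'ECT');

  // The hint is absent on the very first request, so fall back to assuming 4g.
  const ect = req.headers['ect'] || '4g';
  const sendLiteVersion = ect === '2g' || ect === 'slow-2g';

  res.setHeader('Content-Type', 'text/html');
  res.end(sendLiteVersion ? '<!-- lightweight page -->' : '<!-- full page -->');
}).listen(3000);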

You can view all the client hints available for your browser at https://browserleaks.com/client-hints (hint: use a Chromium-based browser to view this website or you won’t see much!). This website opts into all the known Client Hints to show the potential information leaked by your browser, but each site should only enable the hints it will use. Client Hints are also, by default, only sent on requests to the original origin and not to third-party requests loaded by the page (though this can be enabled through the use of the Permissions-Policy header).

The main downside of this two-step process, which I agree is absolutely necessary for the reasons given above, is the very first request to a website does not get these client hints and this is in all likelihood the one that would benefit most from savings based on these client hints.

The BrowserLeaks demo above actually cheats by loading that data in an iframe rather than in the main document, to get around this. I wouldn’t recommend that option for most sites, meaning you are either left with using the JavaScript APIs instead, only optimizing for non-first page visits, or using the Client Hint information on subsequent requests (media, CSS, or JavaScript files). That’s not to say using them on those requests is not powerful (it is particularly useful for image CDNs), but the fastest website is one that can start rendering all the critical content from the first response.

Device Capability Signals

Moving on from User and Network signals, we have the final category of device signals. These APIs explain the capabilities of the device, rather than the connection, including:

API                   JavaScript API                 Client Hint                          Example Output
Number of processors  navigator.hardwareConcurrency  N/A                                  4
Device Pixel Ratio    devicePixelRatio               Sec-CH-DPR, DPR                      3
Device Memory         navigator.deviceMemory         Sec-CH-Device-Memory, Device-Memory  8

I’m not entirely convinced the first is that useful, as nearly every device has multiple processors now, and it’s usually the power of those cores that matters more than their number. The next two, however, have a lot of potential to optimize for.

DPR has long been used to serve responsive images, usually through srcset or media queries rather than the above APIs, but the JavaScript and Client Hint header options have been utilized less by websites (though many image CDNs support sending different images based on Client Hints). Utilizing them more could lead to valuable optimizations for sites, beyond the static media use cases we’ve typically seen up until now.

The one that I think could really be used as a performance indicator is Device Memory. Unlike the number of processors, or DPR, the amount of RAM a device has is often a great indicator as to whether it’s a “high end” device, or a cheaper, more limited device. I was encouraged to investigate how this correlated to Core Web Vitals by Gilberto Cocchi after my last article and the results are very interesting as shown in the graphs below. These were created with a customized version of the Web Vitals Report, altered to allow reporting on 4 dimensions.

Largest Contentful Paint (LCP) showed a clear correlation between poor LCP and low RAM, with the 1 GB and 2 GB RAM p75 scores being red and amber. Even though the higher-RAM categories both had green scores, there was still a clear and noticeable difference between them, particularly visible on the graph.

Whether this is directly caused by the lack of RAM, or that RAM is just a proxy measure of other factors (high end, versus low-end devices, device age, networks those devices are run on…etc.), doesn’t really matter at the end of the day. If it’s a good proxy that the experience is likely poorer for those users, then we can use that as a signal to optimize our site for them.

Cumulative Layout Shift (CLS) has some correlation, but even at the lowest memory is still showing green:

This is perhaps not so surprising since CLS can’t really be countered by the power of devices or networks. If a shift is going to happen, the browser will notice — even if it happens so fast that the user barely noticed.

Interestingly, there’s much less correlation for First Input Delay (FID). Note also that FID is often not measured, so can result in breaks in the chart when there are few users in that category — as shown by the 1GB devices series which has few data points.

To be honest, I would have expected Device Memory to have a much bigger impact on FID (whether directly, or indirectly for the reasons discussed in the LCP section above). Perhaps this again reflects that this metric isn’t actually that difficult to pass for many sites, something the Chrome team is well aware of and is working on.

For privacy reasons, device memory is only reported as one of a capped, fixed set of floating-point numbers: 0.25, 0.5, 1, 2, 4, 8, so even if you have 32 GB of RAM, that will be reported as 8. But again, that lack of precision is fine, as we’re probably only interested in devices with 2 GB of RAM or less, based on the above stats — though the best advice would be to measure your own web visitors and base your decisions on that. I just hope that over time, as technology advances, we’re not put into a similar situation as ECT, where everything migrates to the top category, making the signal less useful. On the plus side, this should be easier to change just by increasing the upper capping amount.
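
As a small sketch of using this signal (the 2 GB cut-off follows the reasoning above, and the lite-mode class name is just an illustrative convention), you could flag low-memory devices and serve them a lighter experience:

// Treat low-memory devices as candidates for a lighter experience.
const deviceMemory = navigator.deviceMemory || 8; // undefined outside Chromium; assume high-end

if (deviceMemory <= 2) {
  // Example: skip heavy embeds, animations, or third-party widgets for these users.
  document.documentElement.classList.add('lite-mode');
}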

Measure Your Users

The last section, on correlating Device Memory to Core Web Vitals, brings about an important topic: do not just take for granted that any of these options will prove useful for your site. Instead, measure your user population to see which of these options will be useful for your users.

This could be as simple as logging the values for these options in a Google Analytics Custom Dimension. That’s what we did here at Smashing for a number of them, and it’s how we were able to create the graphs above: we could then slice and dice these measures against other data in Google Analytics (including the Core Web Vitals, which we already logged in Google Analytics using the web-vitals library).
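
A rough sketch of that kind of logging is shown below. It is not the exact setup used here at Smashing; the /analytics endpoint is a placeholder, and you would adapt the payload to whatever analytics or RUM backend you use:

// Send Core Web Vitals together with the signals discussed in this article.
import { onLCP, onCLS, onFID } from 'web-vitals';

function report(metric) {
  const connection = navigator.connection || {};
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    saveData: connection.saveData || false,
    effectiveType: connection.effectiveType || 'unknown',
    deviceMemory: navigator.deviceMemory || null,
  });
  navigator.sendBeacon('/analytics', body); // '/analytics' is a hypothetical endpoint
}

onLCP(report);
onCLS(report);
onFID(report);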

Alternatively, if you already use one of the many RUM solutions out there, some or all of these may already be being measured, and you may already have the data to help you start making decisions as to whether to use these options or not. And if your RUM library of choice is not tracking these metrics, then maybe suggest that they do, to benefit you and their other users.

Conclusion

I hope this article will convince you to consider these options for your own sites. Many of these options are easy to use if you know about them and can make a real difference to the users struggling the most. They also are not just for complicated web applications but can be used even on static article websites.

I’ve already mentioned that this site, smashingmagazine.com makes use of the Save Data API to avoid loading web fonts. Additionally, it uses the instant.page library to prefetch articles on mouse hover — except for slow ECTs or when a user has specified the Save Data option.

The Web Almanac (another site I work on) is another seemingly simple article website, where each chapter makes use of lots of graphs and other figures. These are initially loaded as lazy-loaded images and then upgraded to Google Sheet embeds, which have a handy hover effect to see the data points. The Google Sheet embeds are actually a little slow and resource-intensive, so this upgrade only happens for users that are likely to benefit from it: those on desktop viewport widths, when Save Data is not turned on, when we’re on a fast connection according to ECT, and when a high-resolution canvas is supported (not covered in this article, but old iPads did not support this but claimed to).

I encourage you to consider what parts of your website you should consider limiting to some users. Let us know in the comments how you’re using them.

How To Benchmark And Improve Web Vitals With Real User Metrics

How would you measure performance? Sometimes it’s the amount of time an application takes from initial request to fully rendered. Other times it’s about how fast a task is performed. It may also be how long it takes for the user to get feedback on an action. Rest assured, all these definitions (and others) would be correct, provided the right context.

Unfortunately, there is no silver bullet for measuring performance. Different products will have different benchmarks, and two apps may perform differently against the same metrics but still rank quite similarly to our subjective “good” and “bad” verdicts.

In an effort to streamline language and to promote collaboration and standardization, our industry has come up with widespread concepts. This way developers are capable of sharing solutions, defining priorities, and focusing on getting work done effectively.

Performance vs Perceived Performance

Take this snippet as an example:

const sum = new Array(1000)
  .fill(0)
  .map((el, idx) => el + idx)
  .reduce((sum, el) => sum + el, 0)
console.log(sum)

The purpose of this is unimportant, and it doesn’t really do anything except take a considerable amount of time to output a number to the console. Facing this code, one would (rightfully) say it does not perform well. It’s not fast code to run, and it could be optimized with different kinds of loops, or perform those tasks in a single loop.

Another important thing is that it has the potential to block the rendering of a web page. It freezes (or maybe even crashes) your browser tab. So in this case, the performance perceived by the user is hand in hand with the performance of the task itself.

However, we can execute this task in a web worker. By preventing render block, our task will not perform any faster — so one could say performance is still the same — but the user will still be able to interact with our app, and be provided with proper feedback. That impacts how fast our end-user will perceive our application. It is not faster, but it has better Perceived Performance.

Note: Feel free to explore my react-web-workers proof-of-concept on GitHub if you want to know more about Web-Workers and React.

Web Vitals

Web performance is a broad topic with thousands of metrics that you could potentially monitor and improve. Web Vitals are Google’s answer to standardizing web performance. This standardization empowers developers to focus on the metrics that have the greatest impact on the end-user experience.

  • First Contentful Paint (FCP)
    The time from when loading starts to when content is rendered on the screen.
  • Largest Contentful Paint (LCP)
    The render time of the largest image or text block visible within the viewport. A good score is under 2.5s for 75% of page loads.
  • First Input Delay (FID)
    The time from when the user interacts with the page to the time the browser is able to process the request.
    A good score is under 100ms for 75% of page loads.
  • Cumulative Layout Shift (CLS)
    The total sum of all individual layout shifts for every unexpected shift that occurs in the page’s lifespan. A good score is under 0.1 for 75% of page loads.
  • Time to Interactive (TTI)
    The time from when the page starts loading to when its main sub-resources have loaded and it is capable of reliably responding to user input quickly.
  • Total Blocking Time (TBT)
    The time between First Contentful Paint and Time to Interactive where the main thread was blocked (no responsiveness to user input).

Which one of these is the most important?

Core Web Vitals are the subset of Web Vitals that Google has identified as having the greatest impact on the end-user experience. As of 2022, there are three Core Web Vitals — Largest Contentful Paint (speed), Cumulative Layout Shift (stability) and First Input Delay (interactivity).

Recommended Reading: The Developer’s Guide to Core Web Vitals

Chrome User Experience Report vs Real User Metrics

There are multiple ways of testing Web Vitals on your application. The easiest one is to open your Chrome Devtools, go to the Lighthouse tab, check your preferences, and generate a report. This is called a Chrome User Experience Report (CrUX), and it is based on a 28-day average of samples from Chrome users who meet certain requirements:

  • browsing history sync;
  • no Sync passphrase setup;
  • usage statistic reporting enabled.

But it’s quite hard to define how representative of your own users the Chrome UX Report is. The report serves as a ballpark range and can offer a good indicator of things to improve on an ad-hoc basis. This is why it’s a very good idea to use a Real User Monitoring (RUM) tool, like Raygun. This will report on people actually interacting with your app, across all browsers, within an allocated timeframe.

Monitoring real user metrics yourself is not a simple task, though, and there are a plethora of hurdles to be aware of. However, it doesn’t have to be complicated: it’s easy to get set up with RUM metrics using performance monitoring tools. One of the options worth considering is Raygun — it can be set up in a few quick steps and is GDPR-friendly. In addition, you also get plenty of error reporting features.

Application Monitoring

Developers often treat observability and performance monitoring as an afterthought. However, monitoring is a crucial aspect of the development lifecycle that helps software teams move faster, prioritize efforts, and avoid serious issues down the road.

Setting up monitoring can be straightforward, and building features that account for observability will help the team do basic maintenance and code hygiene to avoid those dreadful refactoring sprints. Application monitoring can help you sleep peacefully at night and guides your team towards crafting better user experiences.

Monitor Trends And Avoid Regressions

In the same way that we (ideally) have tests running on our Continuous Integration pipeline to avoid feature regressions and bugs, we ought to have a way to identify performance regressions for our users immediately after a new deployment. Raygun can help developers automate this work with its Deployment Tracking feature.

Adhering to the performance budget becomes more sustainable. With this information, your team can quickly spot performance regressions (or improvements) across all Web Vitals, identify problematic deployments, and zero in on impacted users.

Drill In And Take Action

When using RUM, it is possible to narrow down results on a per-user basis. For example, in Raygun, you can click on a score or bar on the histogram to see a list of impacted users. This makes it possible to start drilling further into sessions on an individual basis, with instance-level information. It helps you take action directly targeted at the issue instead of simply trusting general best practices, and later on, to diagnose the repercussions of the change.

Wrapping Up

To summarize, Web Vitals are the new gold standard in performance due to their direct correlation with the user’s experience. Development teams that actively monitor and optimize their Web Vitals based on real user insights will deliver faster and more resilient digital experiences.

We’ve only just scratched the surface of what monitoring can do and of the solutions that help sustain performance while scaling your app. Let me know in the comments how you employ a performance budget, better observability, or other solutions to get a relaxed night of sleep!

Collective #684

maku.js

A useful library to make things easier when working with HTML and Three.js.

Check it out

Vizzu

Vizzu is a free, open-source JavaScript/C++ library utilizing a generic dataviz engine that generates many types of charts and seamlessly animates between them.

Check it out

Syncthing

Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely.

Check it out

Alpha Paintlet

Another great resource by Dave Rupert: a little paint worklet that makes it easy to apply alpha transparency to any color in CSS.

Check it out

Rebuilding A Large E-Commerce Website With Next.js (Case Study)

At our company, Unplatform, we have been building e-commerce sites for decades now. Over those years, we have seen the technology stack evolve from server-rendered pages with some minor JavaScript and CSS to full-blown JavaScript applications.

The platform we used for our e-commerce sites was based on ASP.NET, and when visitors started to expect more interaction, we added React for the front end. Although mixing the concepts of a server web framework like ASP.NET with a client-side web framework like React made things more complicated, we were quite happy with the solution. That was until we went to production with our highest-traffic customer. From the moment we went live, we experienced performance issues. Core Web Vitals are important, even more so in e-commerce: in the Deloitte study Milliseconds Make Millions, investigators analyzed mobile site data from 37 different brands and found that a 0.1s performance improvement can lead to a 10% increase in conversion.

To mitigate the performance issues, we had to add a lot of (unbudgeted) extra servers and had to aggressively cache pages on a reverse proxy. This even required us to disable parts of the site’s functionality. We ended up having a really complicated, expensive solution that in some cases just statically served some pages.

Obviously, this didn’t feel right, until we found out about Next.js. Next.js is a React-based web framework that allows you to statically generate pages while still supporting server-side rendering, making it ideal for e-commerce. It can be hosted on platforms like Vercel or Netlify, which serve the static output from a CDN, resulting in lower latency. Vercel and Netlify also use serverless functions for the server-side rendering, which is the most efficient way to scale out.

Challenges

Developing with Next.js is amazing, but there are definitely some challenges. The developer experience is something you just need to try: the code you write visualizes instantly in your browser, and productivity goes through the roof. This is also a risk, because you can easily get too focused on productivity and neglect the maintainability of your code. Over time, this and the untyped nature of JavaScript can lead to the degradation of your codebase. The number of bugs increases and productivity starts to go down.

It can also be challenging on the runtime side of things. The smallest changes in your code can lead to a drop in performance and regressions in other Core Web Vitals. Also, careless use of server-side rendering can lead to unexpected service costs.

Let’s have a closer look at our lessons learned in overcoming these challenges.

  1. Modularize Your Codebase
  2. Lint And Format Your Code
  3. Use TypeScript
  4. Plan For Performance And Measure Performance
  5. Add Performance Checks To Your Quality Gate
  6. Add Automated Tests
  7. Aggressively Manage Your Dependencies
  8. Use A Log Aggregation Service
  9. Next.js’s Rewrite Functionality Enables Incremental Adoption

Lesson Learned: Modularize Your Codebase

Front-end frameworks like Next.js make it so easy to get started these days. You just run npx create-next-app and you can start coding. But if you are not careful and start banging out code without thinking about design, you might end up with a big ball of mud.

When you run npx create-next-app, you will have a folder structure like the following (this is also how most examples are structured):

/public
  logo.gif
/src
  /lib
    /hooks
      useForm.js
  /api
     content.js
  /components
     Header.js
     Layout.js
  /pages
     index.js

We started out using the same structure. We had some subfolders in the components folder for bigger components, but most of the components were in the root components folder. There is nothing wrong with this approach and it’s fine for smaller projects. However, as our project grew it became harder to reason about components and where they are used. We even found components that were no longer used at all! It also promotes a big ball of mud, because there is no clear guidance on what code should be dependent on what other code.

To solve this, we decided to refactor the codebase and group the code by functional modules (kind of like NPM modules) instead of technical concepts:

/src
  /modules 
    /catalog
      /components
        productblock.js
    /checkout
      /api
        cartservice.js
      /components
        cart.js

In this small example, there is a checkout module and a catalog module. Grouping the code this way leads to better discoverability: by merely looking at the folder structure you know exactly what kind of functionality is in the codebase and where to find it. It also makes it a lot easier to reason about dependencies. In the previous situation, there were a lot of dependencies between the components. We had pull requests for changes in the checkout that also impacted catalog components. This increased the number of merge conflicts and made it harder to make changes.

The solution that worked best for us was to keep the dependencies between the modules to an absolute minimum (if you really need a dependency, make sure it’s unidirectional) and introduce a “project” level that ties everything together:

/src
  /modules
    /common
      /atoms
      /lib 
    /catalog
      /components
        productblock.js
    /checkout
      /api
        cartservice.js
      /components
        cart.js
    /search
  /project
    /layout
      /components
    /templates
      productdetail.js
      cart.js
  /pages
    cart.js

The project level contains the code for the layout of the e-commerce site and the page templates. In Next.js, a page component is a convention and results in a physical page. In our experience, these pages often need to reuse the same implementation, which is why we have introduced the concept of “page templates”. The page templates use the components from the different modules; for example, the product detail page template uses components from the catalog module to display product information, but also an add-to-cart component from the checkout module.
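
As a rough sketch of that pattern (file and component names below are illustrative, not taken from the actual codebase), a page stays as thin as possible and delegates to a template, which in turn composes components from the functional modules:

// src/pages/product.tsx: a thin Next.js page (illustrative)
export { default } from '../project/templates/productdetail';

// src/project/templates/productdetail.tsx: composes module components
import ProductBlock from '../../modules/catalog/components/productblock';
import Cart from '../../modules/checkout/components/cart';

export default function ProductDetailTemplate() {
  return (
    <>
      <ProductBlock />
      <Cart />
    </>
  );
}

Keeping the page files this thin means the route structure stays visible in /pages, while the actual implementation lives with the modules and templates it belongs to.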

We also have a common module, because there is still some code that needs to be reused by the functional modules. It contains simple atoms, React components used to provide a consistent look and feel, as well as infrastructure code such as generic React hooks and GraphQL client code.

Warning: Make sure the code in the common module is stable and always think twice before adding code here, in order to prevent tangled code.
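
The article doesn’t prescribe tooling for this, but if you want the boundary rules to be enforced rather than just agreed upon, eslint-plugin-import’s no-restricted-paths rule is one possible way to do it; the zones below are a sketch based on the folder structure above:

// .eslintrc.js (sketch): assumes eslint-plugin-import is installed
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': [
      'error',
      {
        zones: [
          // functional modules must not depend on the project level
          { target: './src/modules', from: './src/project' },
          // keep functional modules independent of each other
          { target: './src/modules/catalog', from: './src/modules/checkout' },
          { target: './src/modules/checkout', from: './src/modules/catalog' },
        ],
      },
    ],
  },
};

A violation then fails the lint step instead of surfacing later as a tangled dependency in a pull request.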

Micro Front-Ends

In even bigger solutions, or when working with different teams, it can make sense to split the application up further into so-called micro front-ends. In short, this means splitting the application into multiple physical applications that are hosted independently on different URLs, for example, checkout.mydomain.com and catalog.mydomain.com. These are then integrated by a different application that acts as a proxy.

Next.js’s rewrite functionality is great for this, and using it this way is supported through so-called Multi Zones.

The benefit of multi-zones is that every zone manages its own dependencies. It also makes it easier to evolve the codebase incrementally: if a new version of Next.js or React comes out, you can upgrade the zones one by one instead of having to upgrade the entire codebase at once. In a multi-team organization, this can greatly reduce dependencies between teams.

Further Reading

Lesson Learned: Lint And Format Your Code

This is something we learned in an earlier project: if you work in the same codebase with multiple people and don’t use a formatter, your code will soon become very inconsistent. Even if you are using coding conventions and are doing reviews, you will soon start to notice the different coding styles, giving a messy impression of the code.

A linter will check your code for potential issues, and a formatter will make sure the code is formatted in a consistent way. We use ESLint and Prettier and think they are awesome. You don’t have to think about coding style, which reduces the cognitive load during development.

Fortunately, Next.js 11 now supports ESLint out of the box (https://nextjs.org/blog/next-11), making it super easy to set up by running npx next lint. This saves you a lot of time because it comes with a default configuration for Next.js. For example, it is already configured with an ESLint extension for React. Even better, it comes with a new Next.js-specific extension that will even spot issues in your code that could potentially impact the Core Web Vitals of your application! In a later section, we will talk about quality gates that can help you prevent pushing code to production that accidentally hurts your Core Web Vitals. This extension gives you feedback a lot faster, making it a great addition.
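
For reference, a minimal configuration that combines the Next.js rules (including the Core Web Vitals checks) with Prettier could look like the sketch below; it assumes eslint-config-prettier is installed, and the file npx next lint generates for you may differ slightly per version:

// .eslintrc.js (sketch)
module.exports = {
  extends: [
    'next/core-web-vitals', // Next.js rules plus the stricter Core Web Vitals checks
    'prettier',             // disables ESLint rules that would conflict with Prettier
  ],
};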

Further Reading

Lesson Learned: Use TypeScript

As components got modified and refactored, we noticed that some of the component props were no longer used. Also, in some cases, we experienced bugs because of missing or incorrect types of props being passed into the components.

TypeScript is a superset of JavaScript and adds types, which allows a compiler to statically check your code, kind of like a linter on steroids.

At the start of the project, we did not really see the value of adding TypeScript. We felt it was just an unnecessary abstraction. However, one of our colleagues had good experiences with TypeScript and convinced us to give it a try. Fortunately, Next.js has great TypeScript support out of the box and TypeScript allows you to add it to your solution incrementally. This means you don’t have to rewrite or convert your entire codebase in one go, but you can start using it right away and slowly convert the rest of the codebase.

Once we started migrating components to TypeScript, we immediately found issues with wrong values being passed into components and functions. The developer feedback loop also got shorter: you get notified of issues before running the app in the browser. Another big benefit we found is that it makes it a lot easier to refactor code: it is easier to see where code is being used, and you immediately spot unused component props and code. In short, the benefits of TypeScript (a short example follows this list):

  1. Reduces the number of bugs
  2. Makes it easier to refactor your code
  3. Code gets easier to read
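
As a tiny illustration of the first point (the component and its props are made up for this example), typing the props turns a wrong or missing value into a compile-time error instead of a bug you discover in the browser:

// A made-up component with typed props
type ProductBlockProps = {
  title: string;
  price: number;
  onAddToCart: (quantity: number) => void;
};

export function ProductBlock({ title, price, onAddToCart }: ProductBlockProps) {
  return (
    <div>
      <h2>{title}</h2>
      <span>{price.toFixed(2)}</span>
      <button onClick={() => onAddToCart(1)}>Add to cart</button>
    </div>
  );
}

// <ProductBlock title="Chair" price="29.95" />
// This usage fails to compile: price must be a number and onAddToCart is missing,
// exactly the class of bug we kept running into with plain JavaScript.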

Further Reading

Lesson Learned: Plan For Performance And Measure Performance

Next.js supports different types of pre-rendering: static generation and server-side rendering. For the best performance, it is recommended to use static generation, which happens at build time, but this is not always possible. Think of product detail pages that contain stock information; this kind of information changes often, and running a build every time does not scale well. Fortunately, Next.js also supports a mode called Incremental Static Regeneration (ISR), which still statically generates the page but regenerates it in the background, at most once every x seconds, when new requests come in.

We have learned that this model works great for larger applications. Performance is still great, it requires less CPU time than server-side rendering, and it reduces build times: pages only get generated on the first request. For every page you add, you should think about the type of rendering needed. First, see if you can use static generation; if not, go for Incremental Static Regeneration, and if that too is not possible, you can still use server-side rendering.
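
A minimal sketch of what ISR looks like in code (the 60-second window and the getProduct call are assumptions for the example): returning revalidate from getStaticProps is what switches a page from plain static generation to Incremental Static Regeneration.

// pages/product/[slug].js (sketch): statically generated, regenerated in the background
export async function getStaticProps({ params }) {
  const product = await getProduct(params.slug); // hypothetical data-fetching call

  return {
    props: { product },
    revalidate: 60, // regenerate this page in the background at most once per minute
  };
}

export async function getStaticPaths() {
  // Generate pages on the first request instead of at build time
  return { paths: [], fallback: 'blocking' };
}

export default function ProductPage({ product }) {
  return <h1>{product.name}</h1>;
}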

Next.js automatically determines the type of rendering based on the absence of the getServerSideProps and getInitialProps methods on the page. It’s easy to make a mistake that causes the page to be rendered on the server instead of being statically generated. The output of a Next.js build shows exactly which page uses which type of rendering, so be sure to check this. It also helps to monitor production and track the performance of the pages and the CPU time involved. Most hosting providers charge you based on CPU time, and this helps to prevent unpleasant surprises. I will describe how we monitor this in the “Lesson Learned: Use A Log Aggregation Service” section.

Bundle Size

To get good performance, it is crucial to minimize the bundle size. Next.js has a lot of features out of the box that help, e.g. automatic code splitting, which makes sure that only the required JavaScript and CSS are loaded for every page. It also generates different bundles for the client and for the server. However, it is important to keep an eye on these. For example, if you import JavaScript modules the wrong way, server-only JavaScript can end up in the client bundle, greatly increasing the client bundle size and hurting performance. Adding NPM dependencies can also greatly impact the bundle size.

Fortunately, Next.js offers a bundle analyzer that gives you insight into which code takes up what part of each bundle.
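
If you use the official @next/bundle-analyzer package for this (treat this as a sketch of the typical setup rather than the project’s exact configuration), wiring it into next.config.js looks roughly like this:

// next.config.js: run the build with ANALYZE=true to open the bundle reports
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  reactStrictMode: true,
});

Running ANALYZE=true next build then produces interactive treemaps for the client and server bundles.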

Further Reading

Lesson Learned: Add Performance Checks To Your Quality Gate

One of the big benefits of using Next.js is the ability to statically generate pages and deploy the application to the edge (CDN), which should result in great performance and Web Vitals. We learned that, even with great technology like Next.js, getting and keeping a great Lighthouse score is really hard. It happened a number of times that, after we deployed some changes to production, the Lighthouse score dropped significantly. To take back control, we added automatic Lighthouse tests to our quality gate. With this GitHub Action you can automatically add Lighthouse tests to your pull requests. We are using Vercel, and every time a pull request is created, Vercel deploys it to a preview URL and we use the GitHub Action to run Lighthouse tests against this deployment.
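
If the action you choose is backed by Lighthouse CI, the thresholds typically live in a lighthouserc.js file. The numbers below are illustrative, not our actual budget, and the preview URL is a placeholder:

// lighthouserc.js (sketch): tune the thresholds to your own performance budget
module.exports = {
  ci: {
    collect: {
      url: ['https://my-preview-deployment.example.com/'], // e.g. the Vercel preview URL
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
      },
    },
  },
};

With assertions in place, the quality gate fails the pull request instead of letting a regression reach production.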

If you don’t want to set up the GitHub Action yourself, or if you want to take this even further, you could also consider a third-party performance monitoring service like DebugBear. Vercel also offers an Analytics feature, which measures the Core Web Vitals of your production deployment. Vercel Analytics actually collects the measurements from your visitors’ devices, so these scores reflect what your visitors are really experiencing. At the time of writing, Vercel Analytics only works on production deployments.

Lesson Learned: Add Automated Tests

When the codebase gets bigger, it gets harder to determine whether your code changes might have broken existing functionality. In our experience, it is vital to have a good set of end-to-end tests as a safety net. Even if you have a small project, having at least some basic smoke tests can make your life so much easier. We have been using Cypress for this and absolutely love it. The combination of using Netlify or Vercel to automatically deploy your pull request to a temporary environment and running your E2E tests against it is priceless.

We use cypress-io/github-action to automatically run the Cypress tests against our pull requests. Depending on the type of software you’re building, it can be valuable to also have more granular tests using Enzyme or Jest. The tradeoff is that these are more tightly coupled to your code and require more maintenance.
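
Even a couple of smoke tests add a lot of confidence. A minimal Cypress spec along these lines (routes and selectors are made up for the example) is enough to catch a completely broken page right after a deployment:

// A minimal Cypress smoke test; routes and selectors are illustrative
describe('smoke tests', () => {
  it('renders the home page', () => {
    cy.visit('/');
    cy.get('h1').should('be.visible');
  });

  it('shows a product detail page with an add-to-cart button', () => {
    cy.visit('/products/example-product'); // hypothetical route
    cy.contains('button', 'Add to cart').should('be.visible');
  });
});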

Lesson Learned: Aggressively Manage Your Dependencies

Managing dependencies becomes a time-consuming but oh-so-important activity when maintaining a large Next.js codebase. NPM made adding packages so easy, and there seems to be a package for everything these days. Looking back, a lot of the times when we introduced a new bug or had a drop in performance, it had something to do with a new or updated NPM package.

So before installing a package you should always ask yourself the following:

  • What is the quality of the package?
  • What will adding this package mean for my bundle size?
  • Is this package really necessary or are there alternatives?
  • Is the package still actively maintained?

To keep the bundle size small and to minimize the effort needed to maintain these dependencies it is important to keep the number of dependencies as small as possible. Your future self will thank you for it when you are maintaining the software.

Tip: The Import Cost VSCode extension automatically shows the size of imported packages.

Keep Up With Next.js Versions

Keeping up with Next.js and React is important. Not only will it give you access to new features, but new versions also include bug fixes and fixes for potential security issues. Fortunately, Next.js makes upgrading incredibly easy by providing codemods (https://nextjs.org/docs/advanced-features/codemods): automatic code transformations that update your code for you.

Update Dependencies

For the same reason that it is important to keep the Next.js and React versions up to date, it is also important to update your other dependencies. GitHub’s Dependabot (https://github.com/dependabot) can really help here. It will automatically create pull requests with updated dependencies. However, updating dependencies can potentially break things, so having automated end-to-end tests here can really be a lifesaver.

Lesson Learned: Use A Log Aggregation Service

To make sure the app is behaving properly and to preemptively find issues, we have found it is absolutely necessary to configure a log aggregation service. Vercel allows you to log in and view the logs, but these are streamed in real-time and are not persisted. It also does not support configuring alerts and notifications.

Some exceptions can take a long time to surface. For example, we had configured Stale-While-Revalidate for a particular page. At some point, we noticed that the pages were not being refreshed and that old data was being served. After checking the Vercel logging, we found that an exception was happening during the background rendering of the page. By using a log aggregation service and configuring an alert for exceptions, we would have been able to spot this a lot sooner.

Log aggregation services can also be useful for monitoring the limits of Vercel’s pricing plans. Vercel’s usage page gives you insights into this as well, but a log aggregation service allows you to add notifications when you reach a certain threshold. Prevention is better than cure, especially when it comes to billing.

Vercel offers a number of out-of-the-box integrations with log aggregation services, including Datadog, Logtail, Logalert, Sentry, and more.

Further Reading

Lesson Learned: Next.js’s Rewrite Functionality Enables Incremental Adoption

Unless there are serious issues with the current website, not a lot of customers are going to be excited about rewriting the entire website. But what if you could start by rebuilding only the pages that matter most in terms of Web Vitals? That is exactly what we did for another customer. Instead of rebuilding the entire site, we only rebuilt the pages that matter most for SEO and conversion; in this case, the product detail and category pages. By rebuilding those with Next.js, performance increased greatly.

Next.js’s rewrite functionality is great for this. We built a new Next.js front end that contains the catalog pages and deployed it to the CDN. Requests for all other existing pages are rewritten by Next.js to the existing website. This way, you can start getting the benefits of a Next.js site in a low-effort, low-risk manner.
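
A sketch of what that configuration can look like (the legacy hostname is a placeholder): with fallback rewrites, Next.js only proxies the requests it cannot serve itself.

// next.config.js (sketch): anything the Next.js app doesn't handle itself
// falls through to the existing website
module.exports = {
  async rewrites() {
    return {
      fallback: [
        {
          source: '/:path*',
          destination: 'https://legacy.example-shop.com/:path*', // placeholder for the old site
        },
      ],
    };
  },
};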

Further Reading

What’s Next?

When we released the first version of the project and started doing serious performance testing, we were thrilled by the results. Not only were the page response times and Web Vitals so much better than before, but the operational costs were also a fraction of what they were before. Next.js and the Jamstack generally allow you to scale out in the most cost-efficient way.

Switching over from a more back-end-oriented architecture to something like Next.js is a big step. The learning curve can be quite steep, and initially some team members really felt outside of their comfort zone. The small adjustments we made, the lessons shared in this article, really helped with this. The development experience with Next.js also gives an amazing productivity boost, and the developer feedback cycle is incredibly short!

Further Reading

Collective #675

Breaking the web forward

A sobering article by Peter-Paul Koch on the current lamentable state of browsers and the web where “[c]omplex systems and arrogant priests rule”.

Read it

CSS for Web Vitals

The marketing for Core Web Vitals (CWV) has been a massive success. I guess that’s what happens when the world’s dominant search engine tells people that something’s going to be an SEO factor. Ya know what language can play a huge role in those CWV scores? I’ll wait five minutes for you to think of it. Just kidding, it’s CSS.

Katie Hempenius and Una Kravets:

The way you write your styles and build layouts can have a major impact on Core Web Vitals. This is particularly true for Cumulative Layout Shift (CLS) and Largest Contentful Paint (LCP).

For example…

  • Absolutely positioning things takes them out of flow and prevents layout shifting (but please don’t read that as a recommendation to absolutely position everything).
  • Don’t load images you don’t have to (e.g. use a CSS gradient instead), which lends a bit of credibility to this.
  • Perfect font fallbacks definitely help reduce layout shifting.

There are a bunch more practical ideas in the article and they’re all good ideas (because good performance is good for lots of reasons), even if the SEO angle isn’t compelling to you.


Monitoring Lighthouse Scores and Core Web Vitals with DebugBear

DebugBear takes just a few seconds to start using. You literally point it at a URL you want to watch, and it’ll start watching it. You install nothing.

It’ll start running tests, and you’ve immediately got performance charts you can start looking at that are tracked over time, not just one-offs.

Five minutes after signing up, I’ve got great data to look at, including Core Web Vitals.

I’ve got full Lighthouse reports right in there, showing me stuff I really need to work on.

Because all these changes are tracked over time, you can see exactly what changed and when. That’s pretty big. Did your JavaScript bundle jump up in size? When? Why? Did your SEO score take a dip? When? Why?

Now I have an exact idea of what’s causing an issue and how it’s affected performance over time.
The best part is being able to see how the site’s Core Web Vitals have performed over time.

Another great thing: DebugBear will email you (or send a Slack message) when you have regressions. So, even though the charts are amazing, it’s not like you have to log in every time to see what’s up. You can also set very clear performance budgets to test against:

Break a performance budget? You’ll be notified:

An email alert of an exceeded performance budget.
A Slack alert warning of HTML validation errors.

Want to compare across different areas/pages of your site? (You can log in to them before you test them, by the way.) Or compare yourself to competitors to make sure you aren’t falling behind? No problem, monitor those too:

Testing production is a very good thing. It’s measuring reality, and you can get started quickly. But it can also be a very good thing to measure things before they become a problem. You know how you get deploy previews on services like Netlify and Vercel? Well, DebugBear has integrations built just for services like Netlify and Vercel.

Now, when you’ve got a pull request with a deploy preview, you can see right on GitHub if the metrics are in line.

That’s an awful lot of value for very little work. But don’t be fooled by the simplicity: there are all kinds of advanced things you can do. You can warm the cache. You can test from different geolocations. You can write a script for a login that takes the CSS selectors for inputs and the values to put in them. You can even have it execute your own JavaScript to do special things to get a page ready for testing, like opening modals, injecting performance.mark metrics, or doing other navigation. 🎉



A New Way To Reduce Font Loading Impact: CSS Font Descriptors

Font loading has long been a bugbear of web performance, and there really are no good choices here. If you want to use web fonts your choices are basically Flash of Invisible Text (aka FOIT) where the text is hidden until the font downloads or Flash of Unstyled Text (FOUT) where you use the fallback system font initially and then upgrade it to the web font when it downloads. Neither option has really “won out” because neither is really satisfactory, to be honest.

Wasn’t font-display Supposed To Solve This?

The font-display property for @font-face gave that choice to the web developer, whereas previously the browser decided (IE and Edge favored FOUT in the past, whereas the other browsers favored FOIT). However, beyond that, it didn’t really solve the problem.

A number of sites moved to font-display: swap when this first came out, and Google Fonts even made it the default in 2019. The thinking here was that it was better for performance to display the text as quickly as you can, even if it’s in the fallback font, and then to swap the font in when it finally downloads.

I was supportive of this back then too, but am increasingly finding myself frustrated by the “hydration effect” when the web font downloads and characters expand (or contract) due to differences between the fonts. Smashing Magazine, like most publishers, makes use of web fonts and the below screenshot shows the difference between the initial render (with the fallback fonts), and the final render (with the web fonts):

Now, when put side by side, the web fonts are considerably nicer and do fit with the Smashing Magazine brand. But we also see there is quite some difference in the text layout with the two fonts. The fonts are very different sizes and, because of this, the screen content shifts around. In this age of Core Web Vitals and Cumulative Layout Shifts being (quite rightly!) recognized as detrimental to users, font-display: swap is a poor choice because of that.

I’ve reverted back to font-display: block on sites I look after, as I really do find the text shift quite jarring and annoying. While it’s true that block won’t stop shifts (the font is still rendered as invisible text), it at least makes them less noticeable to the user. I’ve also optimized my font loading by preloading fonts that I’ve made as small as possible by self-hosting subsetted fonts, so visitors often saw the fallback fonts for only a small period of time. To me, the “block period” of swap was too short, and I’d honestly prefer to wait a tiny bit longer to get the initial render correct.

Using font-display: optional Can Solve FOIT And FOUT — At A Cost

The other option is to use font-display: optional. This option basically makes web fonts optional or, to put it differently, if the font isn’t there by the time the page needs it, the browser never swaps it in. With this option, we avoid both FOIT and FOUT by only using web fonts that have already been downloaded.

If the web font isn’t available in time, we fall back to the fallback font, and the next page navigation (or a reload of this page) will use the web font, as it should have finished downloading by then. However, if the web font is that unimportant to the site, then it might be a good idea to just remove it completely, which is even better for web performance!

First impressions count, and having that initial load without web fonts altogether seems a little bit too much. I also think (with absolutely no evidence to back this up, by the way!) that it will give people the impression, perhaps subconsciously, that something is “off” about the website and may impact how people use it.

So, all font options have their drawbacks, including the option to not use web fonts at all, or using system fonts (which is limiting — but perhaps not as limiting as many think!).

Making Your Fallback Font More Closely Match Your Font

The holy grail of web font loading has been to make the fallback font closer to the actual web font to reduce the noticeable shift as much as possible, so that using swap is less impactful. While we never will be able to avoid the shifts altogether, we can do better than in the screenshot above. The Font Style Matcher app by Monica Dinculescu is often cited in font loading articles and gives a fantastic glimpse of what should be possible here. It helpfully allows you to overlay the same text in two different fonts to see how different they are, and adjust the font settings to get them more closely aligned:

Unfortunately, the issue with the font style matching is that we can’t have these CSS styles apply only to the fallback fonts, so we need to use JavaScript and the FontFace.load API to apply (or revert) these style differences when the web font loads.

The amount of code isn’t huge, but it still feels like a little bit more effort than it should be. There are other advantages and possibilities to using the JavaScript API for this, as explained by Zach Leatherman in his fantastic talk from way back in 2019: you can reduce reflows and handle data-saver mode and prefers-reduced-motion through it (note, however, that both have since been exposed to CSS since that talk).
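
To make that a bit more concrete, here is roughly what the JavaScript side can look like (the class names and the Mija font are just illustrative): the fallback adjustments are keyed off a class on the html element and removed once the web font is ready.

// Apply fallback-only font adjustments until the web font has loaded.
// Class names and the font family are illustrative.
if ('fonts' in document) {
  document.documentElement.classList.add('fallback-font');

  document.fonts.load('1em Mija').then((loadedFonts) => {
    if (loadedFonts.length > 0) {
      // CSS keys off these classes to apply (or revert) the letter-spacing,
      // word-spacing, and size tweaks that match the fallback to the web font.
      document.documentElement.classList.remove('fallback-font');
      document.documentElement.classList.add('fonts-loaded');
    }
  });
}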

It’s also trickier to handle cached fonts we already have, not to mention differences in various fallback styles. Here on Smashing Magazine, we try a number of fallbacks to make the best use of the system fonts different users and operating systems have installed:

font-family: Mija,-apple-system,Arial,BlinkMacSystemFont,roboto slab,droid serif,segoe ui,Ubuntu,Cantarell,Georgia,serif;

Knowing which font is used, or having separate adjustments for each and ensuring they are applied correctly can quickly become quite complex.

A Better Solution Is Coming

So, that’s a brief catch-up on where things stand as of today. However, there is some smoke starting to appear on the horizon.

Excited for the CSS "size-adjust" descriptor for fonts: reduce layout shifts by matching up a fallback font and primary web font through a scale factor for glyphs (percentage).

See https://t.co/mdRW2BMg6A by @cramforce for a demo (Chrome Canary/FF Nightly with flags) pic.twitter.com/hEg1HfUJlT

— Addy Osmani (@addyosmani) May 22, 2021

As I mentioned earlier, the main issue with applying the fallback styling differences was in adding, and then removing them. What if we could tell the browser that these differences are only for the fallback fonts?

That’s exactly what a new set of font descriptors being proposed as part of the CSS Fonts Module Level 5 do. These are applied to the @font-face declarations where the individual font is defined.

Simon Hearne has written about this proposed update to the font-face descriptors specification which includes four new descriptors: ascent-override, descent-override, line-gap-override and advance-override (since dropped). You can play with the F-mods playground that Simon has created to load your custom and fallback fonts, then play with the overrides to get a perfect match.

As Simon writes, the combination of these four descriptors allowed us to override the layout of the fallback font to match the web font, but they only really modify vertical spacing and positioning. So for character and letter-spacing, we’ll need to provide additional CSS.

But things seem to be changing yet again. Most recently, advance-override was dropped in favor of the upcoming size-adjust descriptor which allows us to reduce layout shifts by matching up a fallback font and primary web font through a scale factor for glyphs (percentage).

How does it work? Let’s say you have the following CSS:

@font-face {
  font-family: 'Lato';
  src: url('/static/fonts/Lato.woff2') format('woff2');
  font-weight: 400;
}

h1 {
    font-family: Lato, Lato-fallback, Arial;
}

Then what you would do is create a @font-face for the Arial fallback font and apply the adjustment descriptors to it. You’d end up with the following CSS snippet:

@font-face {
  font-family: 'Lato';
  src: url('/static/fonts/Lato.woff2') format('woff2');
  font-weight: 400;
}

@font-face {
    font-family: "Lato-fallback";
    size-adjust: 97.38%;
    ascent-override: 99%;
    src: local("Arial");
}

h1 {
    font-family: Lato, Lato-fallback, sans-serif;
}

This means that when the Lato-fallback is used initially (as Arial is a local font and can be used straight away without any additional download) then the size-adjust and ascent-override settings allow you to get this closer to the Lato font. It is an extra @font-face declaration to write, but certainly a lot easier than the hoops we had to jump through before!

Overall, there are four main @font-face descriptors included in this spec: size-adjust, ascent-override, descent-override, and line-gap-override with a few others still being considered for subscript, superscript, and other use cases.

Malte Ubl created a very useful tool to automatically calculate these settings given two fonts and a browser that supports these new settings (more on this in a moment!). As Malte points out, computers are good at that sort of thing! Ideally, we could also expose these settings for common fonts to web developers, e.g. maybe give these hints in font collections like Google Fonts? That would certainly help increase adoption.

Now different operating systems may have slightly different font settings and getting these exactly right is basically an impossible task, but that’s not the aim. The aim is to close the gap so using font-display: swap is no longer such a jarring experience, but we don’t need to go to the extremes of optional or no web fonts.

When Can We Start Using This?

Three of these settings have already been shipped in Chrome since version 87, though the key size-adjust descriptor is not yet available in any stable browser. However, Chrome Canary has it, as does Firefox behind a flag so this is not some abstract, far away concept, but something that could land very soon.

At the moment, the spec has all sorts of disclaimers and warnings that it’s not ready for prime time yet, but it certainly feels like it’s getting there. As always, there is a balance between us designers and developers testing it and giving feedback, and not encouraging its use so widely that the implementation gets stuck because too many people depend on an earlier draft.

Chrome has stated its intent to make size-adjust available in Chrome 92, due for release on July 20th, presumably indicating it’s almost there.

So, it’s not quite ready yet, but it looks like it’s coming in the very near future. In the meantime, have a play with the demo in Chrome Canary and see if it can go some way toward addressing your font-loading woes and the CLS impact they cause.

Links on Performance

  • Making GitHub’s new homepage fast and performant — Tobias Ahlin describes how the scrolling effects are done more performantly thanks to IntersectionObserver and the fact that it avoids the use of methods that trigger reflows, like getBoundingClientRect. Also, WebP + SVG masks!
  • Everything we know about Core Web Vitals and SEO — Simon Hearne covers why everyone is so obsessed with CWV right now: SEO. Simon says something I’ve heard a couple of times: The Page Experience Update is more of a carrot approach than stick — there is no direct penalty for failing to meet Google’s goals. That is, you aren’t penalized for poor CWV, but are given a bonus for good numbers. But if everyone around you is getting that bonus except you, isn’t that the same as a penalty?
  • Setting up Cloudflare Workers for web performance optimisation and testing — Matt Hobbs starts with a 101 intro on setting up a Cloudflare Worker, using it to intercept a CSS file and replace all the font-family declarations with Comic Sans. Maybe that will open your eyes to the possibilities: if you can manipulate all assets like HTML, CSS, and JavaScript, you can force those things into doing more performant things.
  • Now THAT’S What I Call Service Worker! — Jeremy Wagner sets up a “Streaming” Service Worker that caches common partials on a website (e.g. the header and footer) such that the people of Waushara County, Wisconsin, who have slow internet can load the site somewhere in the vicinity of twice as fast. This is building upon Philip Walton’s “Smaller HTML Payloads with Service Workers” article.
  • Who has the fastest F1 website in 2021? — Jake Archibald’s epic going-on-10-part series analyzing the performance of F1 racing websites (oh, the irony). Looks like Red Bull is in the lead so far with Ferrari trailing. There is a lot to learn in all these, and it’s somewhat cathartic seeing funny bits like, Their site was slow because of a 1.8MB blocking script, but 1.7MB of that was an inlined 2300×2300 PNG of a horse that was only ever displayed at 20×20. Also, I don’t think I knew that Jake was the original builder of Sprite Cow! (Don’t use that because it turns out that sprites are bad.)
  • Real-world CSS vs. CSS-in-JS performance comparison — Tomas Pustelnik looks at the performance implications of CSS-in-JS. Or, as I like to point out: CSS-in-React, as that’s always what it is since all the other big JavaScript frameworks have their own blessed styling solutions. Tomas didn’t compare styled-components to hand-written vanilla CSS, but to Linaria, which I would think most people still think of as CSS-in-JS — except that instead of bundling the styles in JavaScript, it outputs CSS. I agree that, whatever a styling library does for DX, producing CSS seems like the way to go for production. Yet another reason I like css-modules. Newer-fancier libs are doing it too.
  • The Case of the 50ms request — Julia Evans put together this interactive puzzle for trying to figure out why a server request is taking longer than it should. More of a back-end thing than front-end, but the troubleshooting steps feel familiar. Try it on your machine, try it on my machine, see what the server is doing, etc.


What Google’s New Page Experience Update Means for Images on Your Website

It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. Factoring these into its search engine rankings, it’s hoped to provide users not only with great, relevant content but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for the page experience update in May 2020 and announced in November 2020 that it would begin rolling out in May 2021. However, Google has since pushed this back to a gradual rollout starting in mid-June 2021. This was done in order to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: Core Web Vitals.

Core Web Vitals (CWV) were introduced to help web developers deal with the challenge of trying to optimize for search engine rankings using testable metrics, something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): The largest element above the fold when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time it takes between when a user first interacts with the page and when the main thread is free for the browser to process the event. This can be clicking/tapping a button, a link, or interacting with any other dynamic element. Delays when interacting with a page can obviously be frustrating to users, which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): This calculates the visual stability of a page when it first loads. The algorithm takes the size of the elements and the distance they move relative to the viewport into account. Pages that load with high instability can cause miscues by users, also leading to frustrating situations.

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.

The reason this is important is that images can play a significant role in how your website scores on CWV. For example, the LCP element is more often than not an above-the-fold image or, at the very least, has to compete with an image to be loaded first. Images that aren’t used correctly can also negatively impact CLS. Slow-loading images can also impact FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings. Which brings us to the latest update – Google now plans to incorporate page experience as a ranking factor, starting with an early rollout in mid-June 2021.


So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged, core web vitals. Google has established guidelines and recommended rankings that you can find here. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role for eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in consumer usage of its Lighthouse and PageSpeed Insights tools, showing that website owners are catching up on the importance of optimizing their pages. This also means that standards will only get higher and higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (i.e. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) support different formats for optimally rendering images. However, most images are still served in JPG/PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images and hard-coding responsive syntax, have inherent issues with scalability and with easily adjusting to changes, and they bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. Particularly, I want to focus on ImageEngine which has consistently outperformed other CDNs on page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). It uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and, based on these criteria, optimizes your images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as to reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page, but consists of a large number of high-quality images with some interactive elements, such as buttons and links as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If you have a website with enough traffic, you can get FID data by generating a Page Experience report in Google Search Console.

However, for lab results, from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI and TBT.

Oh, yes, and I’ll also be focusing on the results of a mobile audit for a number of reasons:

  1. Google itself is prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and image assets is often most challenging for mobile devices/general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. As you can see, because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It takes at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial, then follow the simple three-step signup process, where you will need to:

  1. provide the website you want to optimize, 
  2. the web location where your images are stored, and then 
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:

From:

<img src="mywebsite.com/images/header-image.jpg"/>

To:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg"/>


That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.

Here’s the results for my ImageEngine test page once it’s set up:

As you can see, the results are nearly flawless. All metrics improved, scoring in the green, even Speed Index and LCP, which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:

Conclusion

To some extent, it’s more of the same from Google, which has consistently been moving in a mobile direction and employing a growing list of metrics to home in on the best possible “page experience” for its search engine users. Along with reaffirming its move in this direction, Google has stated that it will continue to test and revise its list of signals.

Other metrics that can be surfaced in their tools, such as TTFB, TTI, FCP, and TBT, or possibly entirely new metrics, may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, as well as future-proof your website against coming updates.

