HTML vs JavaScript: What’s the Difference? Beginner’s Guide

There is no shortage of languages for developing software and websites. HTML vs JavaScript is a common comparison because both offer easy-to-understand syntax and are accessible to beginner coders. In this post, we'll look at HTML vs JavaScript in relation to their pros and cons, where and how you'll use each language during development, and much more.

The Potentially Dangerous Non-Accessibility Of Cookie Notices

No matter what your stance is on them, no matter what your perspective is on data protection, web advertisement, setting cookies, EU’s General Data Protection Regulation (GDPR), and privacy preferences, cookie consent widgets (or “cookie banners”) are a reality on the web today.

For this reason, it is worth looking into how accessible and usable these banners are or can be. They have become, for better or worse, a component of the majority of today’s websites. Even more, cookie banners are often the first thing a user encounters. And, of course, once they are implemented, they are part of every page of a website.

Sometimes, cookie banners are a technical necessity because of the page’s feature set or because of advertisements on the page. Even more often, cookie banners are not built by the front-end team but are a ready-made solution, like UserCentrics or others.

Before I explain why the cookie banner deserves special attention regarding its accessibility, let’s quickly explain how the current gold standard of web accessibility, Web Content Accessibility Guidelines (WCAG) Version 2.1, works.

WCAG consists of principles, guidelines, and success criteria. The latter are testable steps to check against a webpage. For example:

  • “Is the main language of the document set?”
  • “Does this non-text content have a suitable text alternative?”
  • “Is it perceivable where my focus is when I’m using the web presence with the keyboard (or another tech that emulates keyboard presses)?”

You may have noticed that these are “yes or no” questions. Accordingly, this means that the final verdict of any given success criterion is either “pass” or “fail.”

Additionally, conformance to WCAG, as defined by the W3C (the governing body of the Web), means that none of its success criteria is allowed to “fail” when the whole document needs to be conformant:

“Conformance to a standard means that you meet or satisfy the ‘requirements’ of the standard. In WCAG 2.0, the ‘requirements’ are the Success Criteria. To conform to WCAG 2.0, you need to satisfy the Success Criteria, that is, there is no content which violates the Success Criteria.”

W3C Working Group Note

No nuance here. Going back to our cookie consent interface, this means that the banner (or any other component) alone has the potential to negatively affect the WCAG conformance of an entire web project.

WCAG conformance can be a big legal deal for many websites, whether they are part of the public sector in the European Union or the United States, as it is considered to fall under non-discrimination laws, market access laws, or the general human right of access to information. Webpages frequently must adhere to directives and regulations that directly or indirectly refer to WCAG, often its newest version, and require conformance at level AA. Therefore, all the following WCAG criteria are viewed through this lens, being fully aware that they are only a starting point when it comes to true web accessibility. On top of that, cookie consent interfaces are implemented on every subpage of a website, so a faulty one harms accessibility and conformance throughout the entire site.

So, in order to not let a faulty cookie banner interface drag down your page’s conformance with accessibility laws and, more importantly, not exclude users from accessing and exercising their rights, let’s list what to look for, what to configure, and what to build properly in the first place.

Contrast Errors

This is especially relevant when it comes to important controls such as the setting of cookies or the overall acceptance of the recommended cookie set. It is crucial that form controls and text can be sufficiently perceived. Unsurprisingly, solid contrast also matters for WCAG in general: success criteria 1.4.3 and 1.4.11 both define contrast boundaries.

What To Do

When you are using a ready-made cookie management solution, try to influence the colors (if possible, potentially in your cookie vendor’s settings) and make sure interactive controls have sufficient color contrast.

Additionally, if your website relies on a dedicated contrast mode for WCAG conformance, check whether it extends to (or influences) the cookie management interface. I have seen cases in my accessibility auditor practice where this was not considered, and an inaccessible (often branded) color combination was used in the cookie interface, thinking the contrast mode takes care of every color-related violation. But the contrast setting of the website did not affect the third-party cookie banner due to it being, well, third-party and loaded from external sources or after the contrast mode had done its work, resulting in a “Fail” on WCAG’s contrast-related success criteria.

Pseudo Buttons

Another cookie banner issue is, unfortunately, an error pattern that you can also find outside of cookie management: divs or spans with click events posing as links or buttons. These controls may be styled like buttons but lack the semantic information of a button.

On top of that, these controls usually aren’t keyboard focusable. Hence, many serious barriers and WCAG violations occur all at once. If we imagine the most “pseudo” of pseudo buttons, e.g., a div with a click handler, it would at least violate success criteria 2.1.1 (Keyboard), because it is neither reachable nor “activatable” by keyboard, and 4.1.2 (Name, Role, Value), because it doesn’t “introduce” itself as a button and lacks a programmatic label.

What To Do

The easiest thing to do, assuming you have built the cookie management interface yourself, is to replace those pseudo buttons with real <button> elements, because they provide semantics, focusability, and keyboard event handling for free. But even if we don’t talk literally about buttons, the pattern is the same: check your cookie prompt for interactive controls that are only styled to look like “the real thing” but consist of non-semantic divs and spans. This is a red flag to implement native interactive elements, like a, button, or input, instead.
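As a minimal illustration (the acceptAllCookies handler is a placeholder, not taken from any particular banner), compare a non-semantic control with its native counterpart:

<!-- Pseudo button: no role, no accessible name, not keyboard-focusable -->
<div class="accept-btn" onclick="acceptAllCookies()">Accept all cookies</div>

<!-- Native button: role, name, focusability, and keyboard activation for free -->
<button type="button" onclick="acceptAllCookies()">Accept all cookies</button>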

The situation gets a lot tougher, of course, when these semantic errors are in a third-party script and are, therefore, beyond your direct influence and control. Understandably, we have to leave the engineering side of things and dive into politics of some sort. If you work within an organization where decisions about cookie management infrastructure are outside your control, you have to escalate matters to your supervisors and managers (especially, but not only, when your web projects have to adhere to accessibility laws).

Three abstract steps have to happen:

  1. Your organization has to become aware of the barrier and potential legal risk — “up” to the powers that have the influence to change technical decisions like these.
  2. As a consequence, the vendor that provided the faulty cookie banner has to be contacted about the issue.
  3. A form of pressure should be applied by your organization — not just for your own sake but also regarding all the other web pages where the faulty cookie banner negatively influences accessibility and conformance.

In a possible fourth step, your company or agency should reflect on its vendor selection process for third-party services and the HTML (and possible barriers) that comes with them.

Unlabeled Form Fields

When you think about it, the main user control that one could imagine for cookie management widgets is a form control: You can select which set of cookies you are willing to accept by interacting with checkboxes in a form element. And, of course, it is important that checkbox inputs are built in the correct way.

Alas, that is not always the case. While a checkbox and its label may visually appear adjacent, the checkbox can lack a programmatic label. This adds unnecessary confusion and barriers to the interface, and it also constitutes a failure of success criterion 1.3.1 when you look into the web accessibility standard.

What To Do

The most solid strategy to connect form inputs with their corresponding labels is to:

  1. Use a label element for the label (obviously).
  2. Establish an id on the respective input you want to label.
  3. Add a for attribute to the label, filling it with the input’s id you created in the last step.

This also works for other form controls, like textareas and selects. Here’s an example of how a properly labeled checkbox could look:

<input type="checkbox" id="marketing-cookies" />
<label for="marketing-cookies">Accept marketing cookies</label>

If you can’t directly influence the HTML of the cookie banner’s code, the situation is comparable to the situation around pseudo buttons. Make sure that necessary pressure is applied to your cookie service provider to fix the problem. All of their customers will thank you for it, and even more so the people who visit their sites.

Broken Dialog Semantics (Or None At All)

Quite a few cookie banners are actually cookie dialogs, and of the modal kind. Modal, in the context of a dialog, means that such a window blocks everything but itself, leaving only itself accessible. That is, at least, the theory. But quite a few cookie management dialogs do “want to be as aggressive,” presenting as a modal part of the interface while having no corresponding semantics and behavior, which violates WCAG success criterion 4.1.2.

What To Do

Up until recently, the recommendation was to build a dialog with WAI-ARIA roles and states and implement focus management yourself (or use Kitty Giraudel’s great a11y-dialog component).

But the situation has (mostly) changed for the better. Lately, the native <dialog> element has matured to the point where it’s being recommended in most contexts as long as it is used reasonably. A great win for accessibility, in my opinion. The past way of building (modal) dialogs had so many moving parts and factors (roles, states, focus behaviors) to think about and implement manually that it was quite difficult to get it right. Now creating a dialog means using an aptly-named HTML element (and initializing it with .showModal() if you think the cookie dialog needs to be interface-blocking).
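A minimal sketch of that approach, with placeholder IDs and labels rather than any particular vendor’s markup:

<dialog id="cookie-consent" aria-labelledby="cookie-consent-heading">
  <h2 id="cookie-consent-heading">Cookie settings</h2>
  <form method="dialog">
    <button value="accept">Accept all</button>
    <button value="reject">Reject non-essential</button>
  </form>
</dialog>

<script>
  // showModal() opens the dialog as a modal: focus is kept inside, the rest of the
  // page becomes inert, and Esc closes it; a form with method="dialog" closes it on submit.
  document.getElementById('cookie-consent').showModal();
</script>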

What I’ve written so far is, of course, also valid when you cannot influence a third party’s code, and what I wrote earlier about comparable situations and potential cookie consent barriers is valid as well. If you detect errors in the third-party script you are implementing (such as no focus trapping, no dialog role, no aria-modal="true" — and if everything else points towards “modalness”), escalate things internally and educate the decision-making powers about web accessibility consequences. Maybe even educate the third-party developers that things concerning modals have gotten a lot better recently.

Cookie Banners Are Hard To Find In The First Place

There are three typical places where you can usually find cookie consent interfaces, at least visually:

  1. As a modal dialog, i.e., in the middle or — more rarely — corners of the viewport;
  2. On top, sometimes in a fixed manner;
  3. At the bottom of the viewport, sometimes also somewhat positioned in a fixed way.

But what matters far more for some people is how easy it is to find, should they go on a hunt for it. This very problem is well illustrated in a presentation that accessibility specialist Léonie Watson gave some time ago. Léonie is a seasoned screen reader user, and her presentation showcases a bunch of webpages and how the placement and “findability” of cookie banners influence the screen reader experience, particularly as it relates to privacy. Hampering the ability to find important content in a document can, for example, negatively affect WCAG 1.3.2 (Meaningful Sequence).

What To Do

In Léonie’s presentation, the best practices for cookie notice findability become very clear, especially in the last example:

  • Place the banner preferably at the top of the document.
  • Use a headline in the cookie banner and make it either visible or visually hidden to help screen reader users “get a grasp about the webpage” and allow them to navigate by headings (see the sketch after this list).
  • Build a bridge back to proper dialog semantics by making sure that if a dialog is meant to be the “exclusive” part of the interface, it uses appropriate semantic and state descriptions (see above for details).
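A common way to provide such a headline without changing the visual design is a visually hidden heading, for example (class name and markup are illustrative):

<section aria-labelledby="cookie-banner-heading">
  <h2 id="cookie-banner-heading" class="visually-hidden">Cookie consent</h2>
  <!-- cookie text and controls go here -->
</section>

<style>
  /* Keeps the heading available to screen readers while removing it visually. */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    white-space: nowrap;
    border: 0;
  }
</style>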

When we’re talking about changing third-party code, I reckon you know the drill by now. Try to influence this code indirectly on the “political level” because direct control is not possible.

Conclusion

Hopefully, two things emerged while reading this article:

  1. Awareness of the issue, namely, that an often unloved stepchild interface element has the potential to make it harder for some people to manage their privacy settings and, on top of that, to even pose a legal risk.
  2. A sense of how you can possibly remediate barriers you encounter when working with a cookie management banner. The direct way, described in some detail above, often has to do with code, styling, or overall education on how to prevent this in the future. The indirect way leads to a path of either setting the consent interface up properly or influencing the inner and outer politics of your vendor scripts. And again, there is the aspect of educating everyone involved. This time, structured information may be aimed at the powers that be in your organization, showing them that their choice of service providers may have unintended consequences.

But regardless of whether you and your team manage to fix accessibility bugs directly or indirectly in your cookie consent interfaces, you can see their ubiquity and component architecture as an advantage. By getting the accessibility right in one place, you influence many other pages (or even foreign websites) for the better.

If you want to extend your horizon regarding the user experience side of cookie banners and learn about how you can actually turn privacy settings into a pleasant and respectful involvement with at least EU laws, please proceed to Vitaly’s smashing read, “Privacy UX: Better Cookie Consent Experiences”.


Improve Site Navigation and WordPress SEO with New SmartCrawl Breadcrumbs

Breadcrumbs are now baked into SmartCrawl, along with another hotly requested feature… setting primary categories for posts and products!

SmartCrawl 3.5 gives you the ability to improve your WordPress site navigation for users and search engines with two new powerful features: breadcrumbs, and the ability to specify primary categories when assigning multiple categories to your posts or product pages.

In this article, we explain the benefits of using SmartCrawl’s latest features and how to get the most out of them.

Let’s get cooking…

What’s a Breadcrumb?

In the classic fairytale, siblings Hansel and Gretel left a trail of breadcrumbs when they went deep into the forest so they would not get lost and have a path to navigate on their return.

Aptly named, breadcrumbs are an essential navigation aid that can help visitors and search engines better understand your website’s structure.

Why Use Breadcrumbs?

According to research, 38% of first-time website visitors look at navigational links on a page. So, the easier you make it for users to navigate your site, the better their experience. Especially if your website has a hierarchical structure with lots of nested pages.

And it goes without saying that improving your site’s navigation is also good for SEO, as it helps search engine bots crawl your pages and index your content more efficiently.

Here are some other reasons why you should use breadcrumbs on your WordPress site:

  • Breadcrumbs help users figure out where they are on your site. Visitors usually land on your site through an article link or search result and need a way to orient themselves quickly. A breadcrumb path can provide this orientation, making it easier for visitors to find what they’re looking for. It can also help to reduce bounce rates (i.e. the percentage of visitors who navigate away from your site after viewing only one page).
  • Breadcrumbs improve user experience. By providing a clear and concise path, users can understand not only where they are on your website, but also how to get back to previous pages or go up a level or two in your site’s hierarchical structure.
  • Breadcrumbs can improve your search engine visibility and potentially increase traffic to your site. Google uses breadcrumbs to categorize information on your site, helping it to index and organize your content and present it correctly to users. In fact, search engines like Google display breadcrumbs in search results pages, making them a valuable tool for improving your click-through rates.
Google search results example
Google displays breadcrumbs in search results.

Where Can You Use Breadcrumbs?

Breadcrumbs are a type of secondary navigation scheme that show users the path they have taken to reach a particular page on a website.

They don’t replace your site navigation menu, they support and complement it. So, a good place to put them is at the top of a page, just below your site’s primary navigation menu or the main header section.

Breadcrumbs displaying at the top of the page.
You can display a breadcrumb trail at the top of your content.

However, you can also use them at the bottom of your page or even on your sidebar.

As you will see later in this post, you can pretty much add a breadcrumb anywhere on your site using a shortcode.

Example of adding a breadcrumb into content using a shortcode.
Don’t know why you’d want to do this, but you can.

The best way to find what works best for your site is to test different locations and use tools like heatmaps or analytics to measure your results.

A website page with breadcrumb navigation.
Test different locations to sprinkle breadcrumbs on your site.

What happens if you assign multiple categories to a post? How do breadcrumbs choose which path to display?

Example of a WordPress post assigned to multiple categories.
Which of these categories gets the breadcrumb?

The answer is… breadcrumbs will choose whichever category you have specified as the primary category for the post.

Example of WordPress post assigned to multiple categories with the option to assign a primary category.
Make sure you can assign a primary category to posts with multiple categories.

Primary categories are the main classification of your business, product, or service and can help search engines understand the primary focus of your website, so the ability to select primary categories in SEO is important.

Example of post with a primary category assigned.
This is the category your breadcrumb path will display.

If you assign a primary category to your posts, breadcrumbs don’t have to guess. It’s as simple as that!

An example of a web page with breadcrumbs.
A breadcrumb makes your site less humdrum.

How to Add Breadcrumbs to WordPress with SmartCrawl SEO Plugin

SmartCrawl not only makes it easy to add breadcrumb navigation to your website and assign primary categories to your posts and product pages, but it automatically adds structured data to your breadcrumbs. This helps search engines to understand and categorize your content and present it correctly to users.
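For context, breadcrumb structured data on a page generally takes the form of schema.org BreadcrumbList JSON-LD along these lines (a generic illustration with placeholder names and URLs, not SmartCrawl’s exact output):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog/" },
    { "@type": "ListItem", "position": 3, "name": "My Post" }
  ]
}
</script>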

Plus, SmartCrawl gives you complete control over how your breadcrumbs appear, making it easy to provide visitors with the information they need to navigate your site.

Example of web page with SmartCrawl breadcrumbs feature activated.
SmartCrawl’s breadcrumbs give you more SEO for your dough.

To activate SmartCrawl’s breadcrumb feature, go to SmartCrawl > Advanced Tools > Breadcrumb and click on activate.

SmartCrawl - Activate Breadcrumb screen
Activate SmartCrawl’s breadcrumb feature to configure and use it on your site.

Activating the feature gives you access to a range of settings and options for configuring your breadcrumbs.

SmartCrawl breadcrumbs settings screen.
SmartCrawl gives you loads of breadcrumb customization options.

Let’s go briefly over SmartCrawl’s breadcrumb settings:

  • Add Breadcrumbs to your Webpage – Add breadcrumbs to any page and anywhere on your website using a shortcode or by adding PHP code to your template files.
  • Preview – This section lets you preview how breadcrumbs will display on your pages.
  • Breadcrumb Separator – Choose a breadcrumb separator from the list of presets or add your own custom separator using HTML characters.
  • Configurations – This section lets you enable additional breadcrumbs settings for your site, such as adding a prefix at the beginning of the breadcrumbs, adding home breadcrumbs to the trail, hiding the post title from the breadcrumb trail, or hiding the default WooCommerce product breadcrumb from your site if you use WooCommerce.
  • Breadcrumb Label Format – Here you can customize various breadcrumb label formats across your site, such as Post, Page, Archive, Search Results, and 404 Error Page label formats.
  • Deactivate – Deactivate the feature if you no longer want to display breadcrumbs on your site.

Let’s look at a few ways to customize breadcrumbs by tweaking SmartCrawl’s settings.

Choose a Breadcrumb Separator

The Breadcrumb Separator section lets you specify a separator symbol from a list of presets, but you can also add your own by entering HTML characters.

So, for this example, let’s add an emoji into the custom separator field…

Custom Breadcrumb separator
Add your custom separator HTML.

Here’s the result…

Webpage with custom breadcrumbs.
Create fun trails for users with custom breadcrumbs.

Add a Prefix

You can also add a prefix to your breadcrumbs in the Configurations section…

SmartCrawl - Breadcrumb settings: Add Prefix to Breadcrumb.
Add a prefix to your breadcrumbs.

And here’s the result…

Breadcrumb trail with prefix added.
Happy trails…

Hide Title in Breadcrumb

Let’s do one more tweak and hide the post title from our breadcrumb trails…

SmartCrawl breadcrumb configuration settings - Hide Post Title option.
You can hide the post title from displaying in your breadcrumbs.

And here’s our customized breadcrumb sans title…

Breadcrumb trail with prefix and hidden title.
This humble breadcrumb is neither titled nor entitled.

Breadcrumb Label Formats

SmartCrawl gives you additional options to customize breadcrumb label formats across your site.

Customize breadcrumb label formats with a wide range of options.

This allows you to add additional information to your breadcrumbs such as post authors, dates and time, your site title, etc.

Example of customizing breadcrumb label formats.
Customized breadcrumb label formats? Is there anything SmartCrawl won’t do?

SmartCrawl… the Crumb de la Crumb of Breadcrumbs

Breadcrumbs improve your website’s SEO and search engine visibility, provide visitors with an easy way to navigate your site, reduce bounce rates, and increase click-through rates.

SmartCrawl’s breadcrumb feature is customizable, flexible, user-friendly, SEO-friendly, and compatible with all WordPress themes and plugins.

Additionally, SmartCrawl automatically adds breadcrumb schema markup and gives you the ability to specify a primary category for posts and product pages with multiple categories assigned.

SmartCrawl is the free SEO plugin that lets you have your cake and eat it too… right down to the tastiest breadcrumbs!

See our documentation section for more information on using this feature and, if you have any questions, ask our 24/7 support team or check out our new AI Assistant by clicking the Support tab inside The Hub.

How To Get The X and Y Position of HTML Elements in JavaScript and jQuery

When developing web applications, it may be necessary to get the X and Y position of HTML elements on the page for a variety of purposes, such as positioning other elements relative to the target element or triggering events based on the element’s location. In this article, we will explore how to get the X and Y position of HTML elements in JavaScript and jQuery.


Getting the X and Y Position in JavaScript

To get the X and Y position of an HTML element in JavaScript, we can use the getBoundingClientRect() method. This method returns an object with properties that describe the position of the element relative to the viewport.

Here’s an example of how to get the X and Y position of an element with the ID “myElement” using JavaScript:

const element = document.getElementById('myElement');
const rect = element.getBoundingClientRect();
const x = rect.left + window.scrollX;
const y = rect.top + window.scrollY;

In this example, we first get a reference to the element using getElementById(). We then call the getBoundingClientRect() method on the element, which returns an object with properties such as left, top, right, and bottom. The left and top properties describe the X and Y position of the element relative to the viewport.

Note that the left and top properties returned by getBoundingClientRect() are relative to the top-left corner of the viewport, not the top-left corner of the document. To get the absolute position of the element, we need to add the current scroll position of the window to the left and top values using window.scrollX and window.scrollY, respectively.
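For instance, those absolute coordinates can then be used to position another element at the same spot in the document. The #myTooltip element below is purely illustrative, and the snippet assumes the tooltip has no positioned ancestor, so its absolute coordinates resolve against the document:

// Place a (hypothetical) tooltip just below the target element.
const tooltip = document.getElementById('myTooltip');
tooltip.style.position = 'absolute';
tooltip.style.left = `${x}px`;
tooltip.style.top = `${y + rect.height}px`;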

Getting the X and Y Position in jQuery

In jQuery, we can use the offset() method to get the X and Y position of an HTML element. This method returns an object with properties that describe the position of the element relative to the document.

Here’s an example of how to get the X and Y position of an element with the ID “myElement” using jQuery:

const element = $('#myElement');
const x = element.offset().left;
const y = element.offset().top;

In this example, we first get a reference to the element using the jQuery selector $('#myElement'). We then call the offset() method on the element, which returns an object with properties such as left and top. The left and top properties describe the X and Y position of the element relative to the document.

Note that the offset() method returns the position of the element relative to the document, not the viewport. If you want to get the position of the element relative to the viewport, you can subtract the current scroll position of the window using $(window).scrollLeft() and $(window).scrollTop(), respectively. Here’s an example:

const element = $('#myElement');
const offset = element.offset();
const x = offset.left - $(window).scrollLeft();
const y = offset.top - $(window).scrollTop();

As in the previous example, we first get a reference to the element using the jQuery selector $('#myElement') and call the offset() method on it, which returns the element’s left and top position relative to the document.

Then, to get the position of the element relative to the viewport, we subtract the current scroll position of the window using $(window).scrollLeft() and $(window).scrollTop(), respectively. This gives us the X and Y position of the element relative to the viewport.

Note that the scrollLeft() and scrollTop() methods return the number of pixels that the document is currently scrolled from the left and top edges, respectively. Subtracting these values from the offset of the element gives us its position relative to the viewport.

Quick Tip: How To Disable Autocomplete on Form Inputs

Autocomplete is a feature that can save users’ time by suggesting previously entered information when filling out forms. Although it can be a helpful feature, sometimes it can be a privacy concern, especially when users share devices or work on public computers. In this case, users may want to disable the autocomplete feature for specific input fields or forms. In this article, we will discuss how to disable autocomplete for input elements.


The autocomplete attribute

The autocomplete attribute is used to control this feature. This attribute can be applied to different form elements, including input tags. Its two basic values are “on” and “off.” By default, browsers treat input fields as “on.” When set to “off,” the browser will not suggest previously entered values for that particular input field.

Disabling autocomplete for a specific input field

To disable autocomplete for a specific input field, you can use the following code:

<input type="text" name="username" autocomplete="off">

In the above code, we have added the autocomplete attribute to an input tag and set its value to “off.” This will disable it for that particular input field. You can apply this attribute to other input tags as well, including password fields, email fields, and search fields.
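The attribute works the same way on other field types; the name values below are just examples:

<input type="email" name="user-email" autocomplete="off">
<input type="search" name="site-search" autocomplete="off">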

Disabling autocomplete for a whole form

If you want to disable it for a whole form, you can add the attribute to the form element and set its value to “off.” The following code demonstrates this:

<form method="post" action="/submit-form" autocomplete="off">
  <input type="text" name="username">
  <input type="password" name="password">
  <input type="email" name="email">
  <button type="submit">Submit</button>
</form>

In the above code, we have added the autocomplete attribute to the form element and set its value to “off.” This will disable it for all the input fields within the form.

Best practices

When disabling autocomplete, it is essential to keep in mind that it can impact the user experience. Some users may appreciate the feature, as it can save time and make filling out forms easier. Therefore, it is recommended to disable it only when necessary, such as when the user is working on a public computer.

Another best practice is to use the attribute only for specific input fields, such as search fields or email fields, to provide a better user experience.

Building Complex Forms In Vue

More often than not, web engineers have cause to build out forms, from simple to complex. It is also a familiar pain point for engineers how quickly codebases get incredibly messy and incongruously lengthy when building large and complex forms. This begs the question, “How can this be optimized?”

Consider a business scenario where we need to build a waitlist that captures the name and email. This scenario only requires two or three input fields, as the case may be, and could be added swiftly with little to no hassle. Now, let us consider a different business scenario where users need to fill out a form with ten input fields in each of five sections. Writing 50 input fields isn’t just a tiring job for the engineer but also a waste of valuable technical time. More so, it goes against the infamous “Don’t Repeat Yourself” (DRY) principle.

In this article, we will focus on learning to use the Vue components, the v-model directive, and the Vue props to build complex forms in Vue.

The v-model Directive In Vue

Vue has several unique HTML attributes called directives, which are prefixed with v-. These directives perform different functions, from rendering data in the DOM to manipulating data.

The v-model is one such directive, and it is responsible for two-way data binding between the form input value and the value stored in the data property. The v-model works with any form input element, such as the input or the select elements. Under the hood, it combines a binding of the input’s value with the corresponding event listener, like the following:

<!-- Input element -->
<input v-model="inputValue" type="text">

<!-- Select element -->
<select v-model="selectedValue">
  <option value="">Please select the right option</option>
  <option>A</option>
  <option>B</option>
  <option>C</option>
</select>

The input event is used for the <input type="text"> element. Likewise, for the <select> … </select>, <input type="checkbox">, and <input type="radio"> elements, the v-model will, in turn, match the values to a change event.
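To illustrate what happens under the hood, a v-model binding on a text input is roughly equivalent to writing the value binding and the input listener by hand. This is a simplified sketch, not the exact compiled output:

<!-- These two inputs behave the same way for plain text input -->
<input v-model="inputValue" type="text">
<input :value="inputValue" @input="inputValue = $event.target.value" type="text">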

Components In Vue

Reusability is one of the core principles of software engineering, emphasizing the use of existing software features or assets in a software project for reasons ranging from minimizing development time to saving cost.

One of the ways we observe reusability in Vue is through the use of components. Vue components are reusable and modular interfaces with their own logic and custom content. Even though they can be nested within each other just as a regular HTML element, they can also work in isolation.

Vue components can be built in two ways as follows:

  • Without the build step,
  • With the build step.

Without The Build Step

Vue components can be created without using the Vue Command Line Interface (CLI). This component creation method defines a JavaScript object in a Vue instance options property. In the code block below, we inlined a JavaScript string that Vue parses on the fly.

// A component options object (the name is illustrative) with its template inlined as a string:
const MyComponent = {
  template: `
    <p> Vue component without the build step </p>
  `
}

With The Build Step

Creating components using the build step involves using Vite — a blazingly fast, lightweight build tool. Using the build step to create a Vue component makes a Single File Component (SFC), as it can cater to the file’s logic, content, and styling.

<template>
  <p> Vue component with the build step </p>
</template>

In the above code, we have the <p> tag within the HTML <template> tag, which gets rendered when we use a build step for the application.

Registering Vue Components

Creating a Vue component is the first step of reusability and modularity in Vue. Next is the registration and actual usage of the created Vue component.

Vue components allow the nesting of components within components and, even more, the nesting of components within a global or parent component.

Let’s consider that we stored the component we created using the build step in a BuildStep.vue file. To make this component available for usage, we will import it into another Vue component or .vue file, such as the root entry file. After importing this component, we can then register the component name in the components option property, thus making the component available as an HTML tag. While this HTML tag will have a custom name, the Vue engine will parse it as valid HTML and render it successfully in the browser.

<!-- App.vue -->
<template>
  <div>
    <BuildStep />
  </div>
</template>

<script>
import BuildStep from './BuildStep.vue'

export default {
  components: {
    BuildStep
  }
}
</script>

From the above, we imported the BuildStep.vue component into the App.vue file, registered it in the components option property, and then declared it within our HTML template as <BuildStep />.

Vue Props

Vue props, otherwise known as properties, are custom-made attributes used on a component for passing data from the parent component to the child component(s). A case where props can come in handy is when we need a component with different content but a constant visual layout, considering a component can have as many props as needed.

The Vue prop has a one-way data flow, i.e., from the parent to the child component. Thus, the parent component owns the data, and the child component cannot modify the data. Instead, the child component can emit events that the parent component can record.

Props Declaration In Vue

Let us consider the code block below:

<template>
  <p> Vue component {{ buildType }} the build step</p>
</template>

<script>
export default {
  props: {
    buildType: {
      type: String
    }
  }
}
</script>

We updated the HTML template with the interpolated buildType, which will get evaluated and replaced with the value of the prop passed down from the parent component.

We also declared the prop in the props option property so the component can receive it and update the template accordingly. Within this props option property, we declared the name of the prop, which matches what we have in the <template> tag, and also added the prop’s type.

The prop type, which can be String, Number, Array, Boolean, or Object, acts as a rule or check to determine what our component will receive.

In the example above, we added a type of String; we will get a warning in the browser console if we try to pass in any other kind of value, like a Boolean or Object.

Passing Props In Vue

To wrap this up, we will update the parent file, i.e., the App.vue, and pass the props accordingly.

<!-- App.vue -->
<template>
  <div>
    <BuildStep buildType="with"/>
  </div>
</template>

<script>
import BuildStep from './BuildStep.vue'

export default {
  components: {
    BuildStep
  }
}
</script>

Now, when the build step component gets rendered, we will see something like the following:

Vue component with the build step

With props, we needn’t create a new component from scratch to display whether a component has a build step or not. We can again declare the <BuildStep /> component and add the relevant build type.

<!-- App.vue -->
<template>
  <div>
    <BuildStep buildType="without"/>
  </div>
</template>

Likewise, just as for the build step, when the component gets rendered, we will have the following view:

Vue component without the build step

Event Handling In Vue

Vue has many directives, including v-on. The v-on directive listens for DOM events and handles them when they are triggered. It can also be written as the @ symbol to reduce verbosity.

<button @click="checkBuildType"> Check build type </button>

The button tag in the above code block has a click event attached to a checkBuildType method. When this button gets clicked, it executes a method that checks for the build type of the component.

Event Modifiers

The v-on directive has several event modifiers that add unique behavior to the v-on event handler. An event modifier starts with a dot and comes right after the event name.

<form @submit.prevent="submitData">
 ...
<!-- This enables a form to be submitted while preventing the page from being reloaded. -->
</form>

Key Modifiers

Key modifiers help us listen to keyboard events, such as enter and page-up, on the fly. Key modifiers are bound to the v-on directive like v-on:eventname.keymodifiername, where the eventname could be keyup and the keymodifiername could be enter.

<input @keyup.enter="checkInput">

Key modifiers also offer flexibility by allowing multiple key names to be chained.

<input @keyup.ctrl.enter="checkInput">

Here the key names will listen for both the ctrl and the enter keyboard events before the checkInput method gets called.

The v-for Directive

Just as JavaScript provides for iterating through arrays using loops like the for loop, Vue also provides a built-in directive known as v-for that performs the same function.

We can write the v-for syntax as item in items, where items is the array we are iterating over, or as item of items to express the similarity with the JavaScript iterator syntax.

List Rendering

Let us consider rendering the types of component build steps on a page.

<template>
  <div>
    <ul>
        <li v-for="steps in buildSteps" :key="steps.id"> {{ steps.step }}</li>
      </ul>
  </div>
</template>

<script>
export default {
  data() {
    return {
      buildSteps: [
        {
          id: "step 1",
          step: 'With the build step',
        },
        {
          id: "step 2",
          step: 'Without the build step'
        }
      ]
    }
  }
}
</script>

In the code block above, the buildSteps array within the data property lists the two types of build steps we have for a component. Within our template, we used the v-for directive to loop through the buildSteps array, the result of which we render in an unordered list.

We also added the optional key attribute, bound to each item’s id. The key accepts a unique identifier that enables Vue to track each item’s node for proper state management.

Using v-for With A Component

Just like using the v-for to render lists, we can also use it to generate components. We can add the v-for directive to the component like the following:

<BuildStep v-for="steps in buildSteps" :key="steps.id"/>

The above code block will not do much for rendering or passing the step to the component. Instead, we will need to pass the value of the step as props to the component.

<BuildStep v-for="steps in buildSteps" :key="steps.id" :buildType="steps.step" />

We do the above to avoid tightly coupling the v-for to the component.

The most important thing to note in the different usages of the v-for is the automation of a long process. We can move from manually listing out 100 items or components to using the v-for directive and have everything rendered out in a split second.

Building A Complex Registration Form In Vue

We will combine everything we have learned about the v-model, Vue components, the Vue props, the v-for directive, and event handling to build a complex form that would help us achieve efficiency, scalability, and time management.

This form will cater to capturing students’ bio-data, which we will develop to facilitate progressive enhancement as business demands increase.

Setting Up The Vue App

We will be scaffolding our Vue application using the build step. To do this, we will need to ensure we have Node.js and its package manager, npm, installed.

Now we will proceed to create our Vue application by running the command below:

# npm
npm init vue@latest vue-complex-form

where vue-complex-form is the name of the Vue application.

After that, we will run the command below at the root of our Vue project:

npm install

Creating The JSON File To Host The Form Data

We aim to create a form where users can fill in their details. While we can manually add all the input fields, we will use a different approach to simplify our codebase. We will achieve this by creating a JSON file called util/bio-data.json. Within each of the JSON objects, we will have the basic info we want each input field to have.

[
  {
    "id": 1,
    "inputvalue":"  ",
    "formdata": "First Name",
    "type": "text",
    "inputdata": "firstname"
  },
  {
    "id": 2,
    "inputvalue":"  ",
    "formdata": "Last Name",
    "type": "text",
    "inputdata": "lastname"
  }
]

As seen in the code block above, we created an object with some keys already carrying values:

  • id acts as the primary identifier of the individual object;
  • inputvalue will cater to the value passed into the v-model;
  • formdata will handle the input placeholder and the labels name;
  • type denotes the input type, such as email, number, or text;
  • inputdata represents the input id and name.

These keys’ values will be passed in later to our component as props. We can access the complete JSON data here.

Creating The Reusable Component

We will create an input component that will get passed the props from the JSON file we created. This input component will get iterated on using a v-for directive to create numerous instances of the input field at a stretch without having to write it all out manually. To do this, we will create a components/TheInputTemplate.vue file and add the code below:

<template>
  <div>
    <label :for="inputData">{{ formData }}</label>
    <input
      :value= "modelValue"
      :type= "type"
      :id= "inputData"
      :name= "inputData"
      :placeholder= "formData"
      @input="$emit('update:modelValue', $event.target.value)"
    >
  </div>
 </template>

<script>
export default {
  name: 'TheInputTemplate',
  props: {
    modelValue: {
      type: String
    },
    formData: {
      type: String
    },
    type: {
      type: String
    },
    inputData: {
      type: String
    }
  },
  emits: ['update:modelValue']
}
</script>
<style>
label {
  display: inline-block;
  margin-bottom: 0.5rem;
  text-transform: uppercase;
  color: rgb(61, 59, 59);
  font-weight: 700;
  font-size: 0.8rem;
}
input {
  display: block;
  width: 90%;
  padding: 0.5rem;
  margin: 0 auto 1.5rem auto;
}
</style>

In the above code block, we achieved the following:

  • We created a component with an input field.
  • Within the input field, we matched the values that we will pass in from the JSON file to the respective places of interest in the element.
  • We also created props of modelValue, formData, type, and inputData that will be registered on the component when exported. These props will be responsible for taking in data from the parent file and passing it down to the TheInputTemplate.vue component.
  • Bound the modelValue prop to the input’s value.
  • Added the update:modelValue emit, which fires when the input event is triggered.

Registering The Input Component

We will navigate to our App.vue file and import the TheInputTemplate.vue component from where we can proceed to use it.

<template>
  <form class="wrapper">
    <TheInputTemplate/>
  </form>
</template>
<script>
import TheInputTemplate from './components/TheInputTemplate.vue'
export default {
  name: 'App',
  components: {
    TheInputTemplate
  }
}
</script>
<style>
html, body{
  background-color: grey;
  height: 100%;
  min-height: 100vh;
}
.wrapper {
  background-color: white;
  width: 50%;
  border-radius: 3px;
  padding: 2rem  1.5rem;
  margin: 2rem auto;
}
</style>

Here we imported the TheInputTemplate.vue component into the App.vue file, registered it in the components option property, and then declared it within our HTML template.

If we run npm run dev, we should have the following view:

At this point, there is not much to see because we are yet to register the props on the component.

Passing Input Data

To get the result we are after, we will need to pass the input data and add the props to the component. To do this, we will update our App.vue file:

<template>
  <div class="wrapper">
    <div v-for="bioinfo in biodata" :key="bioinfo.id">
      <TheInputTemplate
        v-model="bioinfo.inputvalue"
        :formData="bioinfo.formdata"
        :type="bioinfo.type"
        :inputData="bioinfo.inputdata"
      />
    </div>
  </div>
</template>
<script>
//add imports here
import biodata from "../util/bio-data.json";
export default {
  name: 'App',
  components: {
    TheInputTemplate
  },
  data: () => ({
    biodata
  })
}
</script>

From the code block above, we achieved several things:

  • We imported the bio-data JSON file we created into the App.vue file. Then we added the imported variable to the data options of the Vue script.
  • Looped through the JSON data, which we instantiated in the data options using the Vue v-for directive.
  • Within the TheInputTemplate.vue component we created, we passed in the suitable data to fill the props option.

At this point, our interface should look like the following:

To confirm if our application is working as it should, we will open up our Vue DevTools, or install one from https://devtools.vuejs.org if we do not have it in our browser yet.

When we type in a value in any of the input fields, we can see the value show up in the modelValue within the Vue Devtools dashboard.

Conclusion

In this article, we explored some core Vue fundamentals like the v-for, v-model, and so on, which we later sewed together to build a complex form. The main goal of this article is to simplify the process of building complex forms while maintaining readability and reusability and reducing development time.

If, in any case, there will be a need to extend the form, all the developer would have to do is populate the JSON files with the needed information, and voila, the form is ready. Also, new Engineers can avoid swimming in lengthy lines of code to get an idea of what is going on in the codebase.

Note: To explore more about handling events within components to deal with as much complexity as possible, you can check out this article on using components with v-model.


Compare The Best Landing Page Creation Tools

So much goes into an effective landing page. It takes practice, testing, analytics, design skills, keyword research, and so much more. 

Fortunately, there are plenty of landing page creation tools that take the guesswork out of building and optimizing your landing pages. This guide covers the best ones.

Landing Page Builders

These are typically websites or web-based services that let you build a landing page by using an HTML editor or drag-and-drop functionality. Some will give you a basic editor with different landing page templates to choose from.

Unbounce

Unbounce landing page builder splash page

Unbounce is one of the most well-known landing page builders simply because it was one of the first web-based services that allowed people to build and test landing pages without relying on the IT department.

Here’s the pricing breakdown:

  • Launch—$74/month billed annually or $99 billed monthly for sites getting up to 20,000 unique monthly visitors
  • Optimize—$109/month billed annually or $145 billed monthly for sites getting up to 30,000 unique monthly visitors
  • Accelerate—$180/month billed annually or $240 billed monthly for sites getting up to 50,000 unique monthly visitors
  • Concierge—$469/month billed annually or $625 billed monthly for sites getting more than 100,000 monthly visitors

Additionally, you can test as many landing pages as you want, and Unbounce offers a variety of templates for web-based, email, and social media landing pages.

Instapage

Instapage landing page creation tool  homepage

Instapage is a bit different than your typical landing page builder in that it does come with a variety of templates for different uses (lead generation, click-through and “coming soon” pages), but what sets it apart is that it learns based on the visitors that come to your landing pages.

You can view real-time analytical data and easily determine the winners of your split tests, while tracking a variety of conversion types from button and link clicks, to thank you pages and shopping cart checkouts.

Instapage also integrates with a variety of marketing tools and platforms, including:

  • Google Analytics
  • Mouseflow
  • CrazyEgg
  • Mailchimp
  • Aweber
  • Constant Contact
  • Facebook
  • Google+
  • Twitter
  • Zoho
  • And more

A free option is available if you’d like to try it out, and a Starter package makes landing page creation and testing a bit easier on the wallet of startups and new entrepreneurs.

Real features like the aforementioned integrations start kicking in with the Professional package at $79/month, but if you’d like to get landing pages up and running quickly, it’s hard to beat the stylish templates that Instapage provides.

Launchrock

Launchrock landing page creation tool homepage

Launchrock is not so much a landing page builder as it is a social and list-building placeholder. Combining “coming soon” pages with list building capabilities, Launchrock also includes some interesting social features that encourage users to share the page with others.

For example, if you get X people to sign up, you’ll get Y. It also includes basic analytics and the ability to use your own domain name or a Launchrock branded subdomain (yoursite.launchrock.com). You can customize the page via the built-in HTML/CSS editor if you know how to code.

Launchrock is free and requires only an email address to get started.

Landing Page Testers/Trackers

While many landing page builders also include testing and tracking, they usually do one or the other well, but not both.

Of course, when you’re just starting out, it’s a good idea to take advantage of free trials and see which service works best for your needs.

Here are a few of the most popular ones available for testing and tracking your campaigns:

Optimizely

Optimizely landing page creation tool homepage

Optimizely is often touted as a good entry-level product for when you’re just starting out and working toward upgrading to something bigger and better as your business grows.

But with prices starting at $17/month and a free 30 day trial period, it’s a powerful product in its own right.

There are some limitations with the lower level packages. For example, multivariate testing is not available at the Bronze or Silver levels. It only becomes a feature at the Gold level, which will set you back $359/month.

On the upside, Optimizely lets you conduct an unlimited number of tests and also allows for mobile testing and personalization.

Although you get an unlimited number of experiments, editing them on the fly can make it hard to keep track of which version of which page you were working on.

It can also leave some things to be desired when it comes to integration with Google Analytics; for example, it’s not able to segment custom data (like PPC traffic) or use advanced analytics segments.

You can also tell Optimizely what you consider as “goal” points on your website — ranging from email subscription to buying and checkout, and it will track those items independently.

Overall, it does a great job with a simple and intuitive user interface and is ideal for those just starting to optimize their landing pages.

CrazyEgg

CrazyEgg landing page creation tool homepage.

CrazyEgg is the definitive heat map and visualization service to help you better understand how your website visitors are interacting with your landing pages.

Reports are available as “confetti” style, mouse clicks/movement tracking and scrolling heat maps.

This gives you an all in one picture to see where your visitors are engaging with your pages (and where you could improve that engagement).

CrazyEgg landing page creation tool confetti style report example.

An example of a CrazyEgg click heatmap. Warmer colors indicate more activity

Although CrazyEgg doesn’t consider itself a landing page testing and tracking solution, it does take you beyond the core information that Google Analytics gives you to show you actual user behavior on your landing pages.

Pricing starts at $9/month for up to 10,000 visitors with 10 active pages and daily reports available. A 30 day free trial is also available.

Hubspot

Hubspot landing page creation tool example

More than a tracking/testing service, Hubspot’s landing pages offer extremely customizable elements that let you tailor each page to precisely match your customers’ needs.

This lets you devise alternative segments for each “persona” you’ve created — driving engagement and conversion rates even higher.

The packages are pricey ($200/month starting out) for first-time landing page optimizers, but larger companies and organizations will see the value built into the platform.

Beyond its smart segmenting, Hubspot also offers a drag and drop landing page builder and form builder. This is all in addition to its existing analytics, email marketing, SEO and other platforms.

Visual Website Optimizer

Visual Website Optimizer landing page creation tool example

If you’d like a more creative, hands-on approach to your landing pages, along with fill in the blanks simplicity, Visual Website Optimizer is as good as it gets.

Where this package really shines, however, is through its multivariate testing. It also offers behavioral targeting and usability testing along with heat maps, so you can see precisely how your visitors are interacting with your landing pages, and make changes accordingly.

You can also use the built-in WYSIWYG (what you see is what you get) editor to make changes to your landing pages without any prior knowledge of HTML, CSS or other types of coding.

Results are reported in real-time and as with Hubspot, you can create landing pages for specific segments of customers.

Pricing for all of these features is in the middle of all of the contenders, with the lowest available package starting at $50/month. Still, it’s a good investment for an “all in one” service where you don’t need the advanced features or tracking that other products provide.

Ion Interactive

Ion Interactive landing page creation tool example.

Ion Interactive’s landing page testing solution could set you back several thousand dollars per month, but it’s one of the most feature-packed options available, letting you create multi-page microsites, different touch-points of engagement, and completely scalable setups with a variety of dynamic customization options.

If you’d like to take the service for a test drive, you can have it “score” your page based on an in-house 13-point checklist. A free trial is also available, as is the opportunity to schedule a demo.

Of course, once you’ve decided on the best building, testing and tracking solution, there’s still work to be done.

Before you formally launch your new landing pages, it’s a good idea to get feedback and first impressions — not just from your marketing or design team, but from real, actual people who will be using your site for the first time.

Here are a few tools that can help you do just that.

Optimal Workshop


Optimal Workshop actually consists of three different tools. OptimalSort lets you see how users would sort your navigation and content, while Treejack lets you find areas that could lead to page abandonment when visitors can’t find what they’re looking for.

Chalkmark lets you get first impressions from users when uploading wireframes, screenshots or other “under construction” images.

Through these services, you can assign tasks to users to determine where they would go in order to complete them. You can also get basic heat maps to see how many users followed a certain route to complete the task.

You can buy any of the three services individually, or purchase the whole suite for $1,990/year. A free plan with limited functionality and number of participants is also available if you’d like to try before you buy.

Usabilla

Usabilla landing page tool homepage

Usabilla allows you to immediately capture user feedback on any device, including smartphones and tablets – a feature that sets it apart from most testing services.

Feedback is collected via a simple, fully customizable feedback button that encourages customers to help you improve your site by reporting bugs, asking about features, or just letting you know about the great shopping experience they had.

Usabilla also lets you conduct targeted surveys and exit surveys to determine why a customer may be leaving a page.

They also offer a service called Usabilla Survey, which is similar to other “first impression” design testing services and lets visitors give you feedback on everything from company names to wireframes and screenshots.

Pricing starts at $49/month and a free trial is available.

5 Second Test


Imagine you want visitors to determine the point of a certain page. What if they could only look at it for five seconds and then give you their opinion? Five Second Test makes this possible, and it’s incredibly quick and easy to set up.

Case in point — you can try a sample test to see what a typical user would see. In my case, I was asked my first impressions of an app named “WedSpot” and what I’d expect to find by using such an app.

It’s simple questions like these that can give you some invaluable insights, and all for just five seconds of your users’ time.

It’s free to conduct and participate in user tests through Five Second Test.

Other Helpful Tools

Beyond usability testing and user experience videos, there are a few other tools that your landing pages can benefit from:

Site Readability Test


Juicy Studio has released a readability test that uses three of the most common reading level algorithms to determine how easy or difficult it is to read the content on your site.

You’ll need to match the reading level to your intended audience, but these tests will give you some insight into simplifying your language and making your pages more readable for everyone.

You simply type in your URL and get your results in seconds. You can also compare your results to other typical readings including Mark Twain, TV Guide, the Bible and more.

Pingdom Website Speed Test


Page loading time is a huge factor in your website’s bounce rate and lack of conversions. Simply put, if your page loads too slowly, visitors won’t wait around for it to finish.

They’ll simply leave and potentially go to your competition. Using Pingdom’s website speed test, you can see how fast (or slow) your website is loading.

Beyond the speed of your website itself, the service will also identify your heaviest scripts, stylesheets, images, and other files that could be slowing down your pages.

Note that testing is conducted from Amsterdam, the Netherlands, so how close or far your server is from that location will also factor into the results.

It’s free to test your site on Pingdom.

Browser Shots


Although this is the last entry in our series of helpful tools, it is by no means any less important. Testing your landing pages in a multitude of browsers on a variety of operating systems is crucial to your pages’ overall success.

Fortunately, BrowserShots.org makes this process incredibly easy. You can test your pages on all current versions of the web’s most popular browsers, as well as older versions of those browsers.

It does take time for the browser screenshots to be taken and uploaded. You can sign up for a paid account to see them faster, but for a free tool, it’s no problem to wait a little while to see exactly how your page appears to visitors on a variety of operating systems, browsers, and browser versions.

The Top Landing Page Creation Tools in Summary

The best landing page creation tools help you with keyword research, split testing, content creation, and everything else you need to drive conversions.

Remember, landing page creation is not a one-and-done process, so make sure you assess tools that will help you optimize your landing pages after you’ve created them.

A Guide To Accessible Form Validation

When it comes to form validation, we can explore it from two perspectives: usability and accessibility. “What’s the difference between usability and accessibility?” you may ask. Let’s start from there.

Usability

Usability is about improving a given action until it’s as easy and delightful as possible. For example, making the process of fixing an invalid field easier or writing better descriptions so the user can fill the field without facing an error message.

To get a really good grasp of the challenges in this process, I highly recommend reading Vitaly’s deep dive “Designing better inline validations UX”. There, you’ll learn about the different approaches to validating a field and the caveats and trade-offs of each one.

Accessibility

Choosing the best UX approach is just half of the challenge. The other half is ensuring that any person knows the field is invalid and easily understands how to fix it. That’s what I’ll explore through this guide.

You can look at ‘Accessibility’ and ‘Usability’ as two equally important universes with their own responsibilities. Accessibility is about ensuring anyone can access the content. Usability is about how easy the website is to use. When the two overlap, they take ‘User Experience’ to its best.

With these two concepts clarified, we are now ready to dive into accessible validations.

Accessibility In Forms

Before we get into validation, let me recap the accessibility fundamentals in forms:

  • Navigation
    The form can be navigated using only the keyboard, so people who don’t use a mouse can still fill and submit the form. This is mostly about setting a compliant focus indicator to each form control.
  • Context
    Each form field must have an accessible name (label), so people who use assistive technologies can identify each field. For example, screen readers would read a field name to its user.

Screen Readers In Forms

Similar to browsers, screen readers (SR) behave slightly differently from each other: different shortcuts, different semantic announcements, and different levels of feature support. For example, NVDA works better with Firefox, while VoiceOver works best with Safari, and both have slightly different behaviors. However, this shouldn’t stop us from building the common solid foundations that are strongly supported by all of them.

A while ago, I asked on Twitter how screen reader users navigate forms. Most prefer to Tab or use special shortcuts to quickly jump through the fields, but oftentimes they can’t, because we, the developers, forget to implement those fields with screen readers in mind.

Currently, many of the field validations can’t be solved with native HTML elements alone, so we are left with the last resort: ARIA attributes. With them, assistive technologies like screen readers can better describe a given element to the user.

Throughout the article, I’m using VoiceOver on macOS Catalina for all the scenarios. Each one includes a CodePen demo and a video recording, which hopefully will give you a better idea of how screen readers behave in forms, field descriptions, and errors.

The Field Instructions

Field Description

The field label is the first visual instruction for knowing what to fill in, followed by a description when needed. Just as sighted users can see the description (assuming a color contrast that meets WCAG 1.4.3 Contrast (Minimum)), SR users also need to be aware of it.

To do so, we can connect the description to the input by using the aria-describedby attribute, which accepts an id pointing to the description element. With it, SR will read the description automatically when the user focuses on the field input.

<label for="address">Your address</label>
<input id="address" type="text" aria-describedby="addressHint"/>
<span id="addressHint">Remember to include the door and apartment.</span>

For a field with a long description, like the “days” example below, it would cause more harm than good to connect the entire description to the aria-describedby. Instead, I prefer to connect a short description that hints at the full one so the user can navigate to the rest on their own.

<input id="days" type="text" aria-describedby="daysHintShort"/>
<div class="field-hint">
  <span id="daysHintShort" hidden>
    <!-- This text is announced automatically when the input is focused
    and ignored when screen reader users navigate to it. -->
    Below it's explained how these days influence your benefits.
  </span>
  <div>Depending on how many days....</div>
</div>

As this short description is exclusive to assistive technologies, we need to hide it from sighted users. A possibility could be using the .sr-only technique. However, a side-effect is that screen reader users would bump into it again when moving to the next element, which is redundant. So, instead, let’s use the hidden attribute: although it hides the short description from assistive technologies altogether, its contents can still be used as the input’s accessible description via aria-describedby.

<input id="days" type="text" aria-describedby="daysHintShort"/>
<div class="field-hint">
  <span id="daysHintShort" hidden>
    <!-- This text is announced automatically when the input is focused,
    and ignored when screen reader users navigate to it. -->
    Below it's explained how these days influence your benefits.
  </span>
  <div>Depending on how many days....</div>
</div>

I find this pattern very useful for fields with long descriptions or even complex validation descriptions. The tip here is to hint to the users about the full instructions, so they won’t be left alone guessing about it.

The Visual Clue

A common pattern is to mark a required field with an asterisk (*) next to its label. Visually, most people will recognize this pattern. However, people who use an SR can get confused. For instance, VoiceOver will announce “Address star, edit text.” Some screen readers might completely ignore the asterisk, depending on how strict their verbosity settings are.

This is a perfect example of an element that, although visually useful, is far from ideal for SR users. There are a few ways to address this asterisk pattern. Personally, I prefer to “hide” the asterisk using aria-hidden="true", which tells all assistive technologies to ignore it. That way, VoiceOver will just say “Address, edit text.”

<label for="address" class="field-label">
  Address <span class="field-star" aria-hidden="true">*</span>
</label> 

The Semantic Clue

With the visual clue removed from AT, we still need to convey semantically that the input is required. To do so, we could add the required attribute to the element. With that, the SR will say, “Address, required, edit text.”

<input id="address" type="text" required />

Besides adding the necessary semantics, the required attribute also modifies the form’s behavior. In Chrome 107, when the submit fails, the browser shows a tooltip with a native error message and focuses the required empty field.

The Flaws In Default Validations

Your designer or client will probably complain that this default validation doesn’t match your website’s aesthetics, or your users will complain that the error is hard to understand or disappears too soon. Currently, it’s impossible to customize the styling and behavior of these native messages, so we’ll find ourselves forced to avoid the default field validation and implement our own. And just like that, accessibility is compromised again. As web creators, it’s our duty to ensure the custom validation is accessible, so let’s do it.

The first step is to replace required with aria-required, which will keep the input required semantics without modifying its style or behavior. Then, we’ll implement the error message itself in a second.

<!-- Before -->
<input id="address" type="text" required />

<!-- After -->
<input id="address" type="text" aria-required="true" />

Here’s a table comparing side by side the difference between required and aria-required:

Function                     required   aria-required
Adds semantics               Yes        Yes
Prevents invalid submit      Yes        No
Shows custom error message   Yes        No
Auto-focuses invalid field   Yes        No

Reminder: ARIA attributes never modify an element’s styles or behavior; they only enhance its semantics.

The Error Message

From a usability standpoint, there’s a lot to take into consideration about error messages. In short, the trick is to write a helpful message without technical jargon that states why the field is incorrect and, when possible, to explain how to fix it. For a deep dive, read how to design better error messages by Vitaly and how Wix rewrote all their error messages.

From an accessibility standpoint, we must ensure anyone not only knows that the field is invalid but also what the error message is. To mark a field as invalid, we use the ARIA attribute aria-invalid="true", which will make the SR announce that the field is invalid when it’s focused. Then, to also announce the error, we use the aria-describedby attribute we learned about before, pointing to the error element:

<input
  id="address"
  type="text"
  aria-required="true"
  aria-invalid="true"
  aria-describedby="addressError"
/>
<p id="addressError">Address cannot be empty.</p>

Invalid Field With Description

A good thing about aria-describedby is that it accepts multiple ids, which is very useful for invalid fields with descriptions. We can pass the id of both elements, and the screen reader will announce both when the input is focused, respecting the order of the ids.

<input
  id="address"
  type="text"
  aria-required="true"
  aria-invalid="true"
  aria-describedby="addressError addressHint"
/>
<div>
  <p id="addressError">Address cannot be empty.</p>
  <p id="addressHint">Remember to include the door and apartment.</p>
</div>

Open the Pen Field Validations — aria-invalid by Sandrina Pereira.

The Future Of ARIA Errors And Its Support

An ARIA attribute dedicated to errors already exists — aria-errormessage — but it’s not yet supported by most screen readers. So, for now, you are better off avoiding it and sticking with aria-describedby.

In the meantime, you could check A11Ysupport to know the support of a given ARIA attribute. You can look at this website as the “caniuse” but for screen readers. It contains detailed test cases for almost every attribute that influences HTML semantics. Just pay attention to the date of the test, as some tests might be too old.

Dynamic Content Is Not Announced By Default

It’s important to note that although aria-describedby supports multiple ids, if you change them (or the elements’ content) dynamically while the input is focused, the SR won’t re-announce the new content automatically. The same happens with the input label: it will only be read again after you leave the input and focus it once more.

In order for us to announce changes in content dynamically, we’ll need to learn about live regions. Let’s explore that in the next section.

Moments Of Validation

The examples shown so far demonstrate ARIA attributes in static fields. But in real forms, we need to apply them dynamically, based on user interactions. Forms are one of the scenarios where JavaScript is fundamental to making our fields fully accessible without compromising modern interactive usability.

Regardless of which moment of validation (usability pattern) you use, any of them can be accomplished with accessibility in mind. We’ll explore three common validation patterns:

  • Instant validation
    The field gets validated on every value change.
  • Afterward validation
    The field gets validated on blur.
  • Submit validation
    The field gets validated on the form submit.

Instant Validation

In this pattern, the field gets validated every time the value changes, and we show the error message immediately after.

In the same way the error is shown dynamically, we also want the screen reader to announce it right away. To do so, we must turn the error element into a Live Region by using aria-live="assertive". Without it, the SR won’t announce the error message unless the user manually navigates to it.
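
Here’s a minimal JavaScript sketch of this pattern. It assumes markup along the lines of the earlier examples: an input with the id "address" connected via aria-describedby to an initially empty error element with the id "addressError" that already has aria-live="assertive" (the ids are illustrative, not taken from the demo below).

// Minimal sketch: validate on every change and let the Live Region announce the error.
const input = document.getElementById('address');
const error = document.getElementById('addressError'); // has aria-live="assertive" in the markup

input.addEventListener('input', () => {
  if (input.value.trim() === '') {
    input.setAttribute('aria-invalid', 'true');
    error.textContent = 'Address cannot be empty.';
  } else {
    // Removing aria-invalid or setting it to "false" both work.
    input.removeAttribute('aria-invalid');
    error.textContent = '';
  }
});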

Open the Pen Field Validations - instant validation by Sandrina Pereira.

Some nice-to-knows about this example:

  • While the input is valid, the aria-invalid can be "false" or be completely absent from the DOM. Both ways work fine.
  • The aria-describedby can be dynamically modified to contain one or multiple ids. However, if modified while the input is focused, the screen reader won’t re-announce its new ids — only when the input gets re-focused again.
  • The aria-live attribute holds many gotchas that can cause more harm than good if used incorrectly. Read “Using aria-live” by Ire Aderinokun to better understand how Live Regions behave and when (not) to use it.
  • From a usability perspective, be mindful that this validation pattern can be annoying when the error shows up too early, while we are still typing our answer.

Afterward Validation

In this pattern, the error message is only shown after the user leaves the field (on the blur event). Similar to the ‘Instant Validation’ pattern, we need to use aria-live so that the user knows about the error before they start filling in the next field.

Usability tip: I personally prefer to show the on-blur error only if the input value has changed. Why? Some screen reader users go through all the fields to know how many exist before actually starting to fill them in. This can happen with keyboard users too. Even sighted users might accidentally click on one of the fields while scrolling down. All these behaviors would trigger the on-blur error too soon when the intent was just to ‘read’ the field, not to fill it. This slightly different pattern avoids that error flow.
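
A rough sketch of this on-blur pattern, including the “only if the value changed” nuance, could look like the following (same illustrative markup assumptions as the previous sketch):

// Minimal sketch: validate on blur, but only if the user actually changed the value.
const input = document.getElementById('address');
const error = document.getElementById('addressError'); // has aria-live="assertive" in the markup

let valueChanged = false;

input.addEventListener('input', () => {
  valueChanged = true;
});

input.addEventListener('blur', () => {
  // Skip validation if the user only focused the field to "read" it.
  if (!valueChanged) return;

  if (input.value.trim() === '') {
    input.setAttribute('aria-invalid', 'true');
    error.textContent = 'Address cannot be empty.';
  } else {
    input.removeAttribute('aria-invalid');
    error.textContent = '';
  }
});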

Open the Pen Field Validations - afterward validation by Sandrina Pereira.

Submit Validation

In this pattern, the validation happens when the user submits the form, showing the error message afterward. How and when exactly these errors are shown depends on your design preferences. I’ll go through two of the most common approaches:

In Long Forms

In this scenario, I personally like to show an error summary message, usually placed right before the submit button, so that the chances of being visible on the screen are bigger. This error message should be short, for example, “Failed to save because 3 fields are invalid.”

It’s also common to show the inline error messages of all invalid fields, but this time without aria-live so that the screen reader doesn’t announce all the errors at once, which can be annoying. Some screen readers only announce the first Live Region (error) in the DOM, which can also be misleading.

Instead, I add the aria-live="assertive" only to the error summary.

Open the Pen Field Validations - on submit - error summary by Sandrina Pereira.

In the demo above, the error summary has two elements:

<p aria-live="assertive" class="sr-only">
  Failed to save because 2 fields are invalid.
</p>
<p aria-hidden="true">
  Failed to save because 2 fields are invalid.
</p>
  1. The semantic error summary contains a static error summary meant to be announced only on submit. So the aria-live is in this element, alongside the .sr-only to hide it visually.
  2. The visual error summary updates every time the number of invalid fields changes. Announcing that message to the SR could be annoying, so it’s only meant for visual updates. It has aria-hidden so that screen reader users don’t bump into the error summary twice.
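
For illustration, here is a rough sketch of how this could be wired up on submit. It assumes the fields are marked with aria-required="true" and that the two summary paragraphs above have the hypothetical ids summaryLive (the .sr-only, aria-live one) and summaryVisual (the aria-hidden one):

// Minimal sketch: on submit, mark empty required fields as invalid
// and update both error summary elements.
const form = document.querySelector('form');
const summaryLive = document.getElementById('summaryLive');     // .sr-only + aria-live="assertive"
const summaryVisual = document.getElementById('summaryVisual'); // visual + aria-hidden="true"

form.addEventListener('submit', (event) => {
  const fields = form.querySelectorAll('[aria-required="true"]');
  let invalidCount = 0;

  fields.forEach((field) => {
    const isEmpty = field.value.trim() === '';
    field.setAttribute('aria-invalid', String(isEmpty));
    if (isEmpty) invalidCount += 1;
  });

  if (invalidCount > 0) {
    event.preventDefault();
    const message = `Failed to save because ${invalidCount} fields are invalid.`;
    summaryLive.textContent = message;   // updating the Live Region triggers the announcement
    summaryVisual.textContent = message; // keeps the visible summary in sync
  }
});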

Check the screen reader demo below:

In Short Forms

In very short forms, such as logins, you might prefer not to show an error summary in favor of just the inline error messages. If so, there are two common approaches you can take here:

  1. Add an invisible error summary for screen readers by using the .sr-only we learned above.
  2. Or, when there’s just one invalid field, focus that invalid field automatically using HTMLElement.focus(). This saves keyboard users from having to Tab to it manually, and, thanks to aria-describedby, it makes screen readers announce the field error immediately too. Note that you don’t need aria-live here to force the error announcement because focusing the field is enough to trigger it.

Open the Pen Field Validations - on submit - auto-focus by Sandrina Pereira.
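
As a rough sketch of that second approach, the submit handler could focus the first invalid field after validation has run (assuming the fields were already marked with aria-invalid as in the previous sketches):

// Minimal sketch: on a failed submit, move focus to the first invalid field.
const form = document.querySelector('form');

form.addEventListener('submit', (event) => {
  const firstInvalid = form.querySelector('[aria-invalid="true"]');

  if (firstInvalid) {
    event.preventDefault();
    // Focusing the field makes screen readers read its label, its invalid state,
    // and the error connected via aria-describedby, so no aria-live is needed.
    firstInvalid.focus();
  }
});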

Accessibility Comes Before Usability

I must highlight that this is just one approach among others, such as:

  • Error text: It can be just a simple text or include the number of invalid fields or even add an anchor link to each invalid field.
  • Placement: Some sites show the error summary at the top of the form. If you do this, remember to scroll and focus it automatically so that everyone can see/hear it.
  • Focus: Some sites focus the error summary, while others don’t. Some focus the first invalid field and don’t show an error summary at all.

Any of these approaches can be considered accessible as long as it’s implemented correctly so that anyone can perceive why the form is invalid. We can always argue that one approach is better than the other, but at this point, the benefits would be mostly about usability and no longer exclusively about accessibility.

Nevertheless, the form error summary is an excellent opportunity to gracefully recover from a low moment in a form journey. In an upcoming article, I will break down these form submit patterns in greater detail from both accessibility and usability perspectives.

Testing Field Validations

Automated accessibility tools catch only around 20-25% of A11Y issues; the more interactive your webpage is, the fewer bugs they catch. For instance, those tools would not have flagged anything in the demos explored in this article.

You could write unit tests asserting that the ARIA attributes are used in the right place, but even that doesn’t guarantee that the form works as intended for everyone in every browser.
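
For instance, a unit test along these lines can assert that the attributes are in place, but it still won’t tell you how the field actually sounds in a screen reader. This is a minimal sketch assuming a Jest setup with @testing-library/dom and @testing-library/jest-dom, with hand-written markup standing in for your real form:

import { getByLabelText } from '@testing-library/dom';
import '@testing-library/jest-dom';

test('marks the empty address field as invalid', () => {
  // Hypothetical markup mirroring the earlier examples.
  document.body.innerHTML = `
    <label for="address">Address</label>
    <input id="address" type="text" aria-required="true"
           aria-invalid="true" aria-describedby="addressError" />
    <p id="addressError">Address cannot be empty.</p>
  `;

  const input = getByLabelText(document.body, 'Address');

  expect(input).toHaveAttribute('aria-invalid', 'true');
  // toHaveAccessibleDescription requires a recent version of jest-dom.
  expect(input).toHaveAccessibleDescription('Address cannot be empty.');
});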

Accessibility is about personal experiences, which means it relies a lot on manual testing, similar to how pixel-perfect animations are better tested manually too. For now, the most effective accessibility testing is a combination of multiple practices such as automated tools, unit tests, manual tests, and user testing.

In the meantime, I challenge you to try out a screen reader by yourself, especially when you build a new custom interactive element from scratch. You’ll discover a new web dimension, and ultimately, it will make you a better web creator.

Things To Keep In Mind For Accessible Fields

Auto Focusing Invalid Inputs

Above, I mentioned one possible pattern of automatically focusing the first invalid field so the user doesn’t need to navigate to it manually. Depending on the case, this pattern might or might not be useful. When in doubt, I prefer to keep things simple and not add auto-focus: let the user read the summary error message, understand it, and then navigate to the field by themselves.

Wrapping Everything Inside <label>

It might be tempting to wrap everything about a field inside the <label> element. Well, yes, the assistive technologies would then announce everything inside automatically on input focus. But, depending on how ‘extensive’ the input is, it might sound more confusing than helpful:

  • It’s not clear for screen reader users what exactly the label is.
  • If you include interactive elements inside the label, clicking on them might conflict with the automatic input focus behavior.
  • In unit tests (e.g., Testing Library), you can’t target an input by its label.

Overall, keeping the label separate from everything else has more benefits, including giving you more granular control over how elements are announced and organized.

Disabling The Submit Button

Preventing the user from submitting an invalid form is the most common reason to disable a button. However, the user probably won’t understand why the button is disabled and won’t know how to fix the errors. That’s a big cognitive effort for such a simple task. Whenever possible, avoid disabled buttons: let people click the button at any time and show them the error messages with instructions. As a last resort, if you really need a disabled button, consider making it an inclusive disabled button, where anyone can understand and interact with it.

Good UX Is Adaptable

Most physical buildings in the world have at least two ways to navigate them: stairs and lifts. Each one has its unique UX with pros and cons suited for different needs. On the web, don’t fall into the trap of forcing the same pattern on all kinds of users and preferences.

Whenever you find an A11Y issue in a given pattern, try to improve it, but also consider looking for an alternative pattern that can be used simultaneously or toggled.

Remember, every person deserves a good experience, but not every experience is good for everyone.

Wrapping Up

Field validations are one of those web concepts without dedicated HTML elements that suit modern web patterns, so we need ARIA to fill in the gaps. In short, this article covered the most common attributes for field validations that make a difference for many people in their web journey:

  • aria-required: To mark a field as required.
  • aria-invalid: To mark the field as invalid.
  • aria-describedby: To connect the description and error message to the input.
  • aria-live: To announce dynamic error messages.

Accessibility is about people, not about ARIA. Besides those attributes, remember to review your colors, icons, field instructions, and, equally important, the validations themselves. Your duty as a web creator is to ensure everyone can enjoy the web, even when a form fails to submit.

Last but not least, I’d like to express my appreciation for the technical review from Ben Mayers, Kitty Giraudel, and Manuel Matuzović. Because sharing is what makes us better. <3

WCAG References

All the practices we explored in this article are covered by the WCAG guideline “3.3 Input Assistance.”

The more I learn about web accessibility, the more I realize accessibility goes beyond complying with web standards. Remember, the WCAG are ‘guidelines’ and not ‘rules’ for a reason. They are there to support you, but if you suspect a guideline doesn’t make sense based on your diverse user research, don’t be afraid to question it and think outside the box. Write about it, and ultimately guidelines will evolve too.

How to Improve the Performance of Large WordPress Sites

If you’re running a larger WordPress site, you may be facing performance challenges that can impact user experience and search engine rankings. Fortunately, there are several steps you can take to improve your site’s speed and performance. This article covers some of the most common performance challenges for larger WordPress sites and provides tips and solutions for addressing these.

Growing pains… some should be so lucky to have them!

Let’s talk about how to manage a WordPress site that is growing too quickly too soon and causing you or your clients all sorts of pains and problems.

If the issue is temporary, such as dealing with an unexpected traffic spike from a post gone viral (another thing we should be so lucky to experience!), then knowing how to scale your WordPress site when traffic soars can fix this.

However, if problems persist, it may take more than a couple of aspirins and calling the doctor in the morning to make the headaches go away.

In this article, we’ll cover:

WordPress Enterprise Development Challenges

Q: How complex can you make a WordPress site?

A: Very.

When it comes to building large and complex sites, WordPress’s capacity to handle them is not an issue. As WordPress enterprise developer and global SME business adviser Mario Peshev states in his excellent article on building large and complex sites using WordPress:

“WordPress is a proven CMS that handles various applications handling millions of users and tens or even 100M views a month.”

As Mario also states…

“Scaling from 10M to 50M is feasible, 50M to 100M is challenging, 100M–200M is quite complex and 200M+ may require some serious engineering effort.”

So, the capacity of the WordPress CMS platform to handle large and complex sites is not a problem.

The issue is having the skills to handle WordPress enterprise development challenges.

As most developers know, WordPress is not only a widely popular content management system known for its flexibility, ease of use, and affordability, but it is also an excellent platform for small businesses and startups that want to establish a web presence quickly and easily.

However, when it comes to enterprise-grade WordPress development, the amount of information available is as scarce as a developer who hasn’t resorted to cursing loudly at their code editor at least once.

So, before we get into diagnosing the challenges and issues of dealing with large WordPress sites, let’s explore some of the challenges of finding relevant information on WordPress enterprise development.

Here is a summary of the points Mario Peshev makes in his article…

Scarcity of Information on Enterprise-Grade WordPress Development

One of the main reasons why information on enterprise-grade WordPress development is scarce is that only a handful of agencies specialize in building WordPress platforms, applications, plugins, or performing migrations and integrations for the enterprise.

Most vendors specialize in small business websites, and only a small chunk of the service providers work with enterprises.

Furthermore, those consultants and agencies often don’t have the time and resources to write tutorials and share their know-how with the industry, or they just don’t care, especially more hardcore engineers who don’t want to bother.

Another reason why information on WordPress enterprise development is limited is that WordPress is often not the core application that enterprises use in the first place. That is another obstacle for many: WordPress may only power the front-end interface, perhaps 1% of the main platform running behind the scenes.

However, WordPress developers who want to bid on enterprise projects can focus on several different areas to enhance their expertise.

Focus on Different Areas for Enhancing Expertise

The first area that WordPress developers should focus on is studying the WordPress Core, APIs, and the surrounding ecosystem in-depth. This will give developers a deeper understanding of the platform and how it works.

They should also make sure that they’re comfortable with WordPress coding standards and best practices. This will ensure that the code they write is maintainable and easy to read.

The second area that WordPress developers should focus on is practicing in the main technical areas that enterprises care about, such as performance, security, scalability, and backward compatibility.

Enterprises have high expectations, and it’s essential to demonstrate that you have the expertise to meet their requirements.

These WordPress development resources will help you gain these valuable skills and expertise:

Strategic Players in the Field

Hosting vendors are strategic players in the field and occasionally work with high-scale applications. Developers can browse their resources and follow their blogs, knowledge base articles, and the like. WordPress is a platform built on top of PHP and SQL, with a front end served through HTML, CSS, and JavaScript. It typically runs on a web server like Apache or Nginx using mod_php or PHP-FPM, connected to a MySQL database on a Linux server.

Most of the heavy lifting for enterprises happens on top of those layers. Therefore, it makes sense to dive deeper into the communities and resources focused on those topics.

Follow WordPress Core Contributors and Employees

It always helps to follow WordPress Core contributors, employees at enterprise-grade companies, and the blogs of the leading agencies working with enterprises. You may find some relevant case studies, interviews with clients, or other top engineers that could help you improve even further.

Now that we’ve looked at the first challenge, which is acquiring the expertise to handle large and complex WordPress sites and meet the expectations of enterprises, let’s turn to addressing common performance issues you may experience when working with large WordPress sites.

Common Performance Challenges for Large WordPress Sites

WordPress is used by some of the biggest and most well-known companies, celebrities, and brands in the world, like Intel, Pepsi Cola, PlayStation, American Express, TechCrunch, Fisher-Price, Beyonce, Justin Timberlake, Usain Bolt, and many more.

Someone has to look after these large sites… why not you?

While browsing through WPMU DEV’s member forums (which, by the way, is a treasure trove of information for web developers), I came across this post from WPMU DEV member Charly Leetham, which I am reproducing in full below:

***

I was contacted by a long term client asking for assistance with their client.

The end customer is setting up a rather large website in WordPress and they were having no end of difficulties in keeping the site running. It was so bad, that they had to reboot their Amazon EC2 instance regularly (several times a day regularly).

With trepidation I agreed to take a look and see if I could help. What I found has left me … saddened. For the client, mostly.

The site:

  • Database: 4Gigabytes (after optimization)
  • Posts / Pages and other content: Over 900K entries.

This is not a small site.

It was built in Elementor which initially left me concerned, as I know that Elementor is resource hungry.

The EC2 instance was provisioned with 140 Gig storage and 32 Gig memory. More than enough, right? One would think so.

The business had been moved to EC2 by a consultant who had promised them it would improve their performance. Then they told them that the reason the instance kept hanging was because of the high number of transients that were being created.

They created a cron job that deleted the transients every hour and with very little improvement.

I’ve found a number of things during my investigations but the three most concerning things are:

1. Although the server was provisioned with 32G of memory. PHP had been limited 2G and WordPress had been limited to 40M.

It’s no wonder they were having trouble.

Increasing these limits has stopped the hanging but we’re still experiencing memory overflows.

2. The database was provisioned on the same server.

Splitting the database onto a RDS (remote database server) should provide more performance increases.

3. No optimization or performance improvement work had been done.

By implementing Hummingbird, I’ve been able to improve the load time of the site and that’s without doing anything really hard core. That’s still to come.

The main thing I want to highlight for others here, is that it’s the incremental knowledge you bring to the table when working with clients.

Yes, people can build their own WordPress sites but few people can really make them hum. That takes experience and a lot of work.

***

Charly’s forum post is a great example of some of the typical performance challenges you can expect when working with larger WordPress sites and provides a number of useful insights into handling these.

To address these challenges, let’s first summarize the main technical issues Charly described when looking at this client’s site:

  1. The end customer is setting up a rather large website in WordPress with over 900k entries and a 4GB database after optimization, which is not a small site.
  2. The website was built in Elementor, which is resource-hungry and requires a lot of server resources.
  3. The EC2 instance was provisioned with 140GB storage and 32GB memory, but PHP had been limited to 2GB and WordPress had been limited to 40MB, causing performance issues and memory overflows.
  4. The database was provisioned on the same server, which caused performance issues. Splitting it onto a remote database server should provide performance improvements.
  5. No optimization or performance improvement work had been done. By implementing Hummingbird, Charly was able to improve the site’s load time.
  6. The incremental knowledge and experience brought to the table by an experienced web developer is crucial for optimizing and improving the performance of WordPress sites, which can be complex and require a lot of work to make them run smoothly.

We’ve already addressed point #6, so let’s go through the other issues on the list above.

Large WordPress Site Performance Issue #1 – WordPress Database

As your WordPress site grows, so does the size of its database. Your WordPress database can become quite large and may start causing some issues.

Managing a large WordPress database can be a daunting task, so let’s take a look at some of the challenges, best practices, strategies, and solutions for managing your WordPress database on larger sites.

The challenges of having a large WordPress database include:

  • Slow page load times: A large database can slow down your website, making it difficult for visitors to load pages quickly.
  • Backup and restore issues: Backing up and restoring a large database can be a challenge, and it may take a long time to complete the process.
  • Database corruption: A large database can be more prone to corruption, which can cause data loss and other issues.
  • Difficulty in database maintenance: Maintaining a large database may require more resources and expertise to keep it running smoothly.

Here are some strategies and best practices for managing WordPress databases on larger sites:

Initial Configuration

Before you even start thinking about managing your database, it’s important to make sure that it’s set up correctly. When you install WordPress, it creates a new database for you. However, if you’re running a large site, you may want to consider using a separate database server. This will help to improve performance and reduce the load on your web server.

When configuring your database, it’s important to choose the right settings. In particular, you’ll want to pay attention to the database character set and collation. These settings can affect how your content is displayed on your site, so it’s important to get them right from the start.

Where to Keep the Databases

When managing a large WordPress site, you’ll want to think carefully about where to keep your databases.

There are a few different options to consider:

  • Local Database: You can keep your database on the same server as your website. This is the simplest and most common option, but it can lead to performance issues as your site grows, as Charly referred to in the client example above.
  • Remote Database: You can keep your database on a separate server, either within your own network or in the cloud. This can improve performance, but it can also increase costs.
  • Managed Database: You can use a managed database service, such as Amazon RDS or Google Cloud SQL. This can be a good option if you don’t have the expertise to manage your own database.

Database Access Time with Large Numbers of Records

As your WordPress site grows, the size of your database can have an impact on how quickly your site loads.

When you have a large number of records in your database, queries can take longer to run, which can slow down your site.

Caching can help speed up your website by storing frequently accessed data in memory, reducing the need to hit the site’s database and PHP on every request. How much it helps depends on the kind of caching being used, e.g. database caching (which includes object caching) or page caching (where a cached copy of a web page is stored and served when that page is requested later, without needing to be processed by PHP and MySQL).
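
As a rough illustration of database/object caching, here is a hedged sketch using WordPress’s object cache API; the function name, cache key, and query are made up for the example:

<?php
// Minimal sketch: cache the result of an expensive query in the object cache.
function myprefix_get_popular_post_ids() {
    $cache_key = 'myprefix_popular_post_ids'; // illustrative key
    $post_ids  = wp_cache_get( $cache_key, 'myprefix' );

    if ( false === $post_ids ) {
        // Only hit the database when the cache is empty or expired.
        $post_ids = get_posts( array(
            'fields'         => 'ids',
            'posts_per_page' => 10,
            'orderby'        => 'comment_count',
        ) );

        // Keep the result for one hour. This is only persistent across requests
        // if an object cache backend (e.g. Redis or Memcached) is installed.
        wp_cache_set( $cache_key, $post_ids, 'myprefix', HOUR_IN_SECONDS );
    }

    return $post_ids;
}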

To improve performance, you can use server-side caching, caching plugins that manage server-side caching solutions, or standalone caching plugins. Our performance-optimizing plugin Hummingbird, for example, has its own caching but also integrates with WPMU DEV’s server-side caching.

Caching can have a significant impact on site performance, particularly for larger sites. However, setting up and managing caching can be complex and time-consuming.

Also, it’s important to regularly monitor your site’s performance to ensure the caching is optimized for your specific needs.

To learn more about caching solutions, check out our Ultimate Guide to WordPress Caching.

Another option is to use a technique called “sharding,” which involves splitting your database into smaller pieces. This can help to improve performance by spreading the load across multiple servers.

Techniques for Splitting the Data Up

If you’re using a technique like sharding, you’ll need to decide how to split your data up. One option is to split your data by category or tag. For example, you could have one database for posts related to technology, and another for posts related to entertainment.

Another option is to split your data by date. This can be particularly useful if you have a lot of older content that doesn’t change very often. You could have one database for posts from the last year, and another for older posts.

Consider also using a plugin like HyperDB, which is maintained by Automattic, the company behind WordPress.com.

As described on the plugin page…

HyperDB allows tables to be placed in arbitrary databases. It can use callbacks you write to compute the appropriate database for a given query. Thus you can partition your site’s data according to your own scheme and configure HyperDB accordingly.
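
For illustration only, here is a rough sketch of what a HyperDB db-config.php partitioning setup might look like, loosely based on the sample configuration that ships with the plugin. The second server and the “archive” dataset are hypothetical, so check the plugin’s own sample db-config.php for the exact options:

<?php
// Rough sketch of a HyperDB db-config.php (hypothetical second server and dataset).

// The default dataset: core WordPress tables stay on the primary database.
$wpdb->add_database( array(
    'host'     => DB_HOST,
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
) );

// A second database server holding a hypothetical "archive" dataset.
$wpdb->add_database( array(
    'host'     => 'archive-db.example.com', // hypothetical host
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => 'wp_archive',
    'dataset'  => 'archive',
) );

// Callback that routes tables matching our own naming scheme to that dataset.
$wpdb->add_callback( 'myprefix_archive_callback' );
function myprefix_archive_callback( $query, $wpdb ) {
    if ( preg_match( '/^archive_/i', $wpdb->table ) ) {
        return 'archive';
    }
}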

Basic Indexing

Indexing your database can help to improve performance by making it faster to search for data. When you create an index, the database creates a data structure that makes it easier to search for specific values.

To create an index, you’ll need to use the MySQL command line or a tool like phpMyAdmin.

When you’re creating an index, it’s important to choose the right columns to index. Typically, you’ll want to index columns that are frequently used in queries.

You can also use a plugin like Index WP MySQL for Speed. This plugin adds database keys (also called indexes) to your MySQL tables to make it easier for WordPress to find the information it needs. The plugin page also includes excellent information on database indexing in relational database management systems.

Settings and Logs to Check

To keep your database running smoothly, there are a few settings and logs that you’ll want to keep an eye on. These include:

  • MySQL slow query log: This log records queries that take longer than a certain amount of time to run. By analyzing this log, you can identify queries that are causing performance issues.
  • MySQL error log: This log records any errors that occur in the MySQL server. By monitoring this log, you can identify and troubleshoot issues that may be affecting your database.
  • WordPress debug log: This log records any errors or warnings that occur within WordPress. By monitoring this log, you can identify issues with your WordPress installation or plugins.
  • Database backups: Regularly backing up your database is important to ensure that you don’t lose any data in case of a server crash or other disaster and can restore your website quickly in case of a problem. You can use a plugin like Snapshot to automate this process, or if you’re hosting with WPMU DEV, you can configure automatic enterprise database backups to perform daily and even hourly. Also, consider storing all backups separately from the server hosting the site, as the backups may be lost if the server crashes.

Other Ongoing Maintenance

In addition to the above, there are a few other ongoing maintenance tasks that you’ll want to perform to keep your database running smoothly.

These include:

  • Cleaning up your database: Over time, your database can become cluttered with unused data. Check our article on how to clean up your database and remove unnecessary data for more details.
  • Optimizing your database tables: Reducing the size of your database and optimizing your database tables helps to improve site performance. You can optimize your database by removing unnecessary data, such as post revisions, trashed items, spam comments, and unused plugins and themes. Check our complete WordPress database optimization guide for detailed instructions and plugins that help you do this.
  • Monitoring your site for security issues: Large sites are often a target for hackers. You can use a plugin like Defender to monitor your site for security issues and prevent attacks.

In terms of cleaning up your database, Charly mentions a high number of transients as being a possible issue affecting the site’s performance. Although addressing this issue seemed to offer very little improvement in Charly’s client’s case, it’s worth mentioning it here as something to check if you are experiencing issues with your site.

Transients are a type of cache that stores data in the database for a specific period of time. They are used to speed up the loading time of a website by storing the results of a complex or time-consuming query, such as an API request, so that the query doesn’t have to be run every time the page is loaded.

Transients have a set expiration time, after which they are automatically deleted from the database. However, if the website is not properly optimized, transients can accumulate in the database and cause performance issues, such as slow page loading times or database crashes.

To optimize WordPress and avoid issues with transients, there are several steps that can be taken. These include:

  • Use a caching plugin: A caching plugin like Hummingbird can help reduce the number of database queries and prevent unnecessary creation of transients.
  • Delete expired transients: Expired transients can accumulate in the database, so it’s important to regularly delete them to keep the database optimized. This can be done manually, or by using a plugin like Hummingbird.
  • Set a maximum lifetime for transients: By setting a maximum lifetime for transients, you can prevent them from being stored in the database for too long, which can lead to performance issues. This can be done using the set_transient() function in WordPress (see the sketch after this list).
  • Use a remote database: Storing the database on a remote server can help reduce the load on the server and prevent issues with transients.
  • Increase the memory limit: Increasing the memory limit for PHP and WordPress can help prevent memory overflows and performance issues caused by transients.
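
To illustrate that last set of points, here is a hedged sketch of a transient with an explicit, short lifetime; the transient key and the remote API are made up for the example:

<?php
// Minimal sketch: cache an expensive API response in a transient with a short lifetime.
function myprefix_get_exchange_rates() {
    $rates = get_transient( 'myprefix_exchange_rates' ); // illustrative key

    if ( false === $rates ) {
        $response = wp_remote_get( 'https://api.example.com/rates' ); // hypothetical API

        if ( is_wp_error( $response ) ) {
            return array();
        }

        $rates = json_decode( wp_remote_retrieve_body( $response ), true );

        // Expire after one hour so stale transients don't pile up in wp_options.
        set_transient( 'myprefix_exchange_rates', $rates, HOUR_IN_SECONDS );
    }

    return $rates;
}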

No matter what size WordPress site you are working on, using WPMU DEV’s Hummingbird caching and site optimization plugin can help to automatically take care of expired transients and eliminate this issue, leading to faster page loading times and a smoother user experience.

Hummingbird: Advanced Tools screen with Database Cleanup and Transients options highlighted.
Hummingbird can be configured to automatically delete expired transients from your WordPress database.

In terms of increasing the memory limit for PHP, if you are a WPMU DEV member, it’s really easy to check a whole bunch of information about your WordPress site, including current PHP memory limits and max file size upload settings.

Just log into your WordPress dashboard and navigate to the WPMU DEV dashboard plugin menu. Select Support > System Information > PHP tab.

WPMU DEV Dashboard plugin - Support tab.
WPMU DEV’s Dashboard plugin lets you easily check information about your WordPress site.

If you are not a WPMU DEV member, you can still check this information manually.

To find out how much PHP memory is allocated, create a PHP file and add the following:

<?php
phpinfo();
?>

Call it something like php-test.php and upload it to your server.

Access the file from a browser and search for memory_limit. This will show you two values: the local site setting and the server default. It is possible to have different PHP memory_limit values per site.

For WordPress memory, for instance, you might see the following:

define('WP_MEMORY_LIMIT', '64M');

Note that if this entry is missing from the wp-config.php file, then your site is probably running with WordPress’s default limit, which is 40M (or 64M on multisite).
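
To raise the limits, you can add (or adjust) these constants in wp-config.php, above the “That’s all, stop editing!” line. The exact values here are only examples and must stay within what your server’s PHP memory_limit allows:

// Memory available to WordPress on regular front-end requests.
define( 'WP_MEMORY_LIMIT', '256M' );

// Memory available in the admin area and during heavy tasks like upgrades.
define( 'WP_MAX_MEMORY_LIMIT', '512M' );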

In addition to the above, make sure to also scan and fix corrupt or broken files and database in WordPress.

As you can see, there are quite a number of things you can do to improve the performance of your WordPress database.

Let’s move on, to…

Large WordPress Site Performance Issue #2 – WordPress Core, Themes, And Plugins

Charly mentions that another possible reason for the performance issues her client’s site was experiencing was using a resource-hungry theme.

Rather than focusing on a particular theme, let’s look at themes and plugins in general (btw… if you use Elementor, check out our article on how to optimize Elementor themes. We’ve also written articles on ways to optimize themes like Divi, WPBakery, Astra, and other page builders.)

Here are some of the things you can do:

Theme and Plugin Bloat – Themes and plugins can significantly impact the performance of a WordPress site, particularly if they are not optimized or updated regularly. Some themes and plugins can also be poorly coded, leading to slow loading times and site bloat.

Solution: Be sure to choose a lightweight and optimized theme that is regularly updated by the developer. Avoid using too many plugins and remove any unnecessary ones to reduce site bloat. Always keep your themes and plugins up-to-date to ensure optimal performance.

  • Avoid poorly coded themes and plugins, as these can lead to slow loading times, site bloat, and conflicts.
  • Choose lightweight and optimized themes and plugins that are regularly updated by their developer.
  • Check your server logs to identify heavy plugins and themes that could be slowing down your site.
  • Always keep your themes and plugins up-to-date to ensure optimal performance.
  • Deactivate and remove unnecessary and non-essential plugins and themes.

As with all WordPress sites, regardless of size, it’s also really important to optimize your client sites.

There are a number of tools you can use to scan your site and measure site performance, including Google PageSpeed Insights and GTmetrix. These tools provide important insights into ways to optimize your sites.

You can also use a developer tool plugin like Query Monitor to help you identify issues, aid in the debugging of database queries, PHP errors, hooks and actions, block editor blocks, enqueued scripts and stylesheets, and HTTP API calls. The plugin also provides advanced features such as debugging of Ajax calls, REST API calls, and user capability checks.

Query Monitor - WordPress plugin
Use Query Monitor to quickly identify poorly performing plugins, themes, or functions in your WordPress site.

Additional articles and tutorials that we recommend checking out include our guide on speeding up WordPress, solutions to forgotten WordPress page speed problems, WordPress troubleshooting guide, and Mario Peshev’s article on scaling mistakes when running a large WordPress site.

Large WordPress Site Performance Issue #3 – Site Content

Large WordPress sites typically have loads of content. In Charly’s case, for example, the client’s website had over 900k entries.

If you’ve gone and optimized the database and you’re still experiencing issues, here are some of the things you can look at:

  • Perform a content audit: A content audit is essentially performing an inventory of your existing content and assessing and identifying content that’s outdated, obsolete, duplicated, etc, before deciding what to do with it (e.g. update, SEO optimize, trash). It’s a long-term but effective and important strategy for keeping your site’s content manageable and maintained.
  • Use lazy loading: Lazy loading can help to ensure that media files are only loaded when they are needed, which can significantly improve page load times.
  • Use a content delivery network (CDN): Consider using a content delivery network (CDN) to distribute cached media files and reduce the load on your server. A CDN can help speed up your website by caching your website’s content on servers located around the world, reducing the load on your server. Popular CDNs include Cloudflare and MaxCDN. Note that all WPMU DEV membership and hosting plans include a CDN. Our Hummingbird and Smush plugins also include a CDN (Hummingbird also offers Cloudflare integration).
  • Use content optimization plugins: Optimize images, videos, and other media files by compressing them and reducing their file size. If the site contains loads of images, consider using an image optimization plugin like Smush, which significantly reduces image file sizes without compromising on image quality to improve content delivery performance. Smush also includes WPMU DEV’s CDN.
  • Use a managed WordPress hosting service: A managed WordPress hosting service can provide you with optimized servers and database management tools to help keep your website running smoothly. As discussed in the next section below, WPMU DEV not only offers a best-of-class managed WordPress hosting service, but it is also specifically configured to deliver enterprise-level hosting for WordPress sites of all kinds and sizes.

Large WordPress Site Performance Issue #4 – Hosting

If you are still experiencing problems with the site after fixing issues with the WordPress database and optimizing the site’s core, plugins, themes, and content, the issue may be related to web hosting.

Consider using a managed WordPress hosting service with a company that specializes in WordPress.

Hosting with a reputable host not only means placing your site in the care of an experienced team who will handle areas like server optimization and database management for you, but it also means they can migrate your existing website to their servers.

This is very important, as a large WordPress site no doubt has lots of moving parts, active traffic, and transactional events taking place, and you don’t want to lose any valuable data or break anything during the migration process.

Additional hosting considerations for a large WordPress site include the ability to handle demands with ample resources, uptime, speed, and customer support.

WPMU DEV offers enterprise-level hosting, 24/7 expert hosting and WordPress support, and migrations by a team of experts who will handle everything for you, including troubleshooting any potential issues with your site.

Additionally, WPMU DEV has been independently rated and reviewed by many users as one of the leading managed WordPress hosting companies, with a near-perfect rating score. G2.com, for example, rates WPMU DEV 4.8 out of 5 stars overall, and 9.8 out of 10 for quality of support.

More importantly, and on a practical level, our expert team proactively manages larger sites by regularly checking areas like PHP error logs (for errors in plugins, themes, or WordPress core), PHP slow logs (for slow-loading scripts, e.g. plugins whose scripts take more than 30 seconds to execute), access logs (to spot a DDoS attack or unusually high traffic), and the load on server resources, including CPU, RAM, etc.

The team also checks that the WAF is enabled, caching is on, and any unused profiling software is turned off when not needed, and will perform conflict tests for plugins and themes and run query monitoring scans at the MySQL level when required.

We also offer integration with New Relic and Blackfire to profile the site and its pages for all sites, large and small.

Managing Larger WordPress Sites Is A Big Job

A large WordPress site differs from other WordPress sites mostly in the scale and complexity of its management.

Dealing with performance issues in large, complex WordPress sites requires having the skills and the expertise to handle challenges and meet the high expectations of enterprise clients.

Finding information on WordPress enterprise development can be challenging, but focusing on different areas like studying the WordPress Core, APIs, and the surrounding ecosystem, practicing in the main technical areas, and following leading agencies, will help you become more knowledgeable and confident in your abilities as a developer.

Also, managing a large WordPress database can be challenging, but there are solutions available to help you manage it. By optimizing your database, using caching and CDN services, using a managed WordPress hosting service, and regularly backing up your database, you can ensure that your website runs smoothly and avoid potential issues.

By addressing common performance challenges and regularly monitoring your site’s performance to identify and address any issues as they arise, you can significantly improve the performance of your larger WordPress site.

Finally, hosting your site on enterprise-level servers with an experienced and reliable managed WordPress hosting partner like WPMU DEV will not only improve your large site’s performance but also help to eliminate problems and issues, as your site will be expertly managed and monitored 24/7.

If you are looking to migrate your existing site from another host or upgrade hosting for a large WordPress site, we recommend looking at our enterprise-level hosting plans (3 x Essential and 3 x Premium options), and taking advantage of our hosting buyout and free expert site migration service.

***

Ask Charly Leethan

Special thanks to WPMU DEV member Charly Leethan for her contribution to this post. AskCharlyLeethan provides ongoing support and advice to help small businesses define and refine their processes and plan and build their web presence using current and emerging technologies.

Building Future-Proof High-Performance Websites With Astro Islands And Headless CMS

This article is sponsored by Storyblok

Nowadays, web performance is one of the crucial factors for the success of any online project. Most of us have probably abandoned a website at some point because it was unbearably slow. This is certainly frustrating for a website’s user, but even more so for its owner: in fact, there is a direct correlation between web performance and business revenue, which has been corroborated time and again in a plethora of case studies.

As developers, optimizing web performance must therefore be an integral part of our value proposition. However, before moving on, let’s actually define the term. According to the MDN Web Docs, “web performance is the objective measurement and perceived user experience of a website or application”. Primarily, it involves optimizing a site’s initial overall load time, making the site interactive as soon as possible, and ensuring it is enjoyable to use throughout.

Achieving excellent measurable performance as well as outstanding perceived performance can be a strenuous challenge for developers, especially when dealing with increasingly complex, large-scale websites. Fortunately, while it is easy to get lost in the subtle details of performance optimization measures, there are a few factors that should be the focus points of our efforts due to their extraordinarily high impact. One of these is image optimization, a topic that has been thoroughly covered in ‘A Guide To Image Optimization On Jamstack Sites’ by my colleague Alba Silvente.

Another key factor? Shipping less JavaScript (JS). A large JS bundle takes longer to be transmitted, parsed, and executed; as a consequence, the initial page load and Time to Interactive can be delayed quite significantly. In recent years, we have witnessed the rise of extremely powerful JS frontend frameworks that offer client-side rendering and strive for an app-like experience. While their versatility, features, and developer experience are impressive by all means, they all share one major disadvantage in regard to performance optimization: their JS bundle size is comparably heavy, which hurts both of these metrics quite substantially.

Depending on the type of your project, the question arises whether a less JS-centric approach might be feasible. In fact, if you think of your average content-driven marketing website, you would probably conclude that only a fraction of the functionality actually relies on JavaScript, whereas the majority of the site could probably be rendered as static HTML.

And that is precisely where Astro enters the game, shipping zero JS by default and letting you partially hydrate only those components that de facto rely on interactivity. Importantly, Astro accomplishes all of that without sacrificing the wonderful developer experience (DX) we have been spoiled by; in fact, it even improves it. Let’s take a closer look.

Introducing Astro

Astro defines itself as an “all-in-one web framework for building fast, content-focused websites”. One of its key features is that it replaces unused JS with HTML on the server, effectively resulting in zero JavaScript runtime out-of-the-box. This, in turn, leads to very fast load times and quicker interactivity. Notably, Astro explicitly states that it is specifically designed for content-driven websites, such as marketing, documentation, or eCommerce sites. The Astro team transparently acknowledges that other frameworks may be a much better fit if your project classifies as a web application rather than a mostly content-driven site.

Moreover, Astro provides a powerful Islands architecture that utilizes the technical concept of partial hydration. In a nutshell, this allows you to hydrate only those components that you actually need to be interactive. Importantly, this happens in isolation, leaving the rest of the site as static HTML. All in all, the impact on web performance is huge, making developers’ lives a lot easier along the way. And it gets even better: it is possible to bring your own framework. Thus, you could effortlessly use, for example, Vue, Svelte or React components in your Astro project.

Speaking of isolated islands, it is worth pointing out that developers actually rarely work alone: most larger-scale web projects typically rely on close collaboration between teams of developers and content creators. Therefore, let’s explore how going Headless with Storyblok can improve the experience and productivity of everyone involved.

Introducing Storyblok

Storyblok is a powerful headless CMS that meets the requirements of developers and content creators alike. Completely framework-agnostic, you can connect Storyblok and your favorite technology within minutes. Storyblok’s Visual Editor allows you to create and manage your content with ease, even when dealing with complex layouts. Furthermore, localizing and personalizing your content becomes a breeze. Beyond that, Storyblok’s API-first design allows you to create outstanding cross-platform experiences.

Let’s explore in a case study how we can effectively combine the power of Astro and Storyblok.

Case Study: Interactive Components In Storyblok And Astro

In this example, we will create a simple landing page consisting of a hero component and a tabbed content component. Whereas the former will be a basic Astro component, the latter will be rendered as a dynamic island. In order to demonstrate the flexibility of this technology stack, we will examine how to render the tabbed content component using both Vue and Svelte.

Here is what we will build: a landing page with a hero section at the top and a tabbed content component below it.

Step 1: Create The Astro Project And The Storyblok Space

Once we’ve created an account on Storyblok (the Community plan is free forever), we can create a new space.

Now, we can copy and run the command to run Storyblok’s CLI in order to quickly create a project that is connected to your fresh new space:

npx @storyblok/create-demo@latest --key <your-access-token>

You can copy the complete command, including your personal access token, from the Get Started section of your space:

In the scaffolding steps, choose Astro, the package manager of your choice, the region of your space, and a local folder for your project. Now, in your chosen folder, you can run npm install && npm run dev to install all dependencies and launch the development server.

For the Storyblok Visual Editor to work, we need to go to Settings > Visual Editor and specify https://127.0.0.1:3000/ as the default environment.

Next, let’s go to the Content section and open our Home story. Here, we need to open the Entry configuration and set the Real path to / in order for src/pages/index.astro to be able to load this story correctly.

After having saved, you should now see the page being rendered correctly in the Visual Editor.

Perfect, we’re ready to move on.

Step 2: Create The Hero Component In Storyblok

In the Block Library, which you can easily access from within your Home story, you will find four default components. Let’s delete all of the nestable blocks (Grid, Teaser, and Feature). For our case study, we just need the Page content type block.

Note: In order to learn more about nestable and content type blocks, you can read the Structures of Content tutorial.

Now, we can create the first component that we will need for our case study: a nestable block called Hero (hero) with the following fields:

  • caption (Text)
  • image (Asset > Images)

Next, close the Block Library, delete the instances of the Teaser and Grid blocks, and create a Hero, providing any caption and image of your choice.

Step 3: Create The Hero Component In Astro

The next step is to create a matching counterpart for our Hero component in our Astro project. Let’s open up the project.

First of all, let’s modify astro.config.mjs in order to register our Hero component properly:

storyblok({
  accessToken: '<your-access-token>', // ideally, you would want to use an environment variable for the token
  components: {
    page: 'storyblok/Page',
    hero: 'storyblok/Hero',
  },
})
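For reference, this is roughly how that fragment sits in the complete astro.config.mjs; the surrounding code is a sketch of what the Storyblok scaffold generates, and the exact import may differ slightly depending on the version you are using:

// astro.config.mjs: a minimal sketch of the full file.
import { defineConfig } from 'astro/config'
import storyblok from '@storyblok/astro' // import style may vary per @storyblok/astro version

export default defineConfig({
  integrations: [
    storyblok({
      accessToken: '<your-access-token>', // ideally read from an environment variable
      components: {
        page: 'storyblok/Page',
        hero: 'storyblok/Hero',
      },
    }),
  ],
})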

Next, let’s delete the Grid, Feature and Teaser components in src/storyblok and create a new src/storyblok/Hero.astro component with the following content:

---
import { storyblokEditable } from '@storyblok/astro'

const { blok } = Astro.props
---

<section
  {...storyblokEditable(blok)}
  class='relative w-full h-[50vh] min-h-[400px] max-h-[800px] flex items-center justify-center'
>
  <h2 class='relative z-10 text-white text-7xl'>{blok.caption}</h2>
  <img
    src={blok.image?.filename}
    alt={blok.image?.alt}
    class='absolute top-0 left-0 object-cover w-full h-full z-0'
  />
</section>

Having taken care of that, the Hero block should now be displayed correctly in your Home story. In this particular case, we are using a native Astro component, which means that this component will be rendered as static HTML, requiring zero JS!

Amazing, but what happens if you actually need interactivity on your frontend? This is precisely where dynamic islands come into play, which we will explore next.

Step 4: Create The Tabbed Content Component In Storyblok

Let’s proceed by creating the blocks that we need for our tabbed content component, which will have a slightly more complex setup.

First of all, we want to create a new nestable block Tabbed Content Entry (tabbed_content_entry) with the following fields:

  • headline (Text)
  • description (Textarea)
  • image (Asset > Images)

This nestable block will be used in a superordinate nestable block called Tabbed Content (tabbed_content), consisting of these fields:

  • entries (Blocks > Allow only tabbed_content_entry components to be inserted)
  • directive (Single-Option > Source: Self) with the key-value pairs load → load, idle → idle, and visible → visible (Default: idle)

The entries field is used to allow nesting of the previously created Tabbed Content Entry nestable blocks. To prevent arbitrary blocks from being inserted, we can limit it to blocks of the type tabbed_content_entry.

Additionally, the directive field is used to take advantage of Astro’s client directives, which determine if and when a framework component should be hydrated. Utilizing the single-option field type in Storyblok enables content creators to choose whether this particular instance of the component should be hydrated with the highest priority (load), after the initial page load has been completed (idle), or as soon as the component instance actually enters the viewport (visible).

Utilizing Astro’s visible directive would result in the biggest performance gain, as long as the component is below the fold. As the default option, we will use Astro’s idle directive, which hydrates the component as soon as the initial page load has completed. However, in all cases, the rest of our landing page will remain static HTML. As a result, the out-of-the-box performance should theoretically always be superior compared to alternative frameworks.

Before moving on, we can use our newly created Tabbed Content component and insert three example entries in the entries field.

Step 5: Create The Tabbed Content Component In Astro

First of all, let’s register our Tabbed Content component in our astro.config.mjs:

storyblok({
  accessToken: '<your-access-token>',
  components: {
    page: 'storyblok/Page',
    hero: 'storyblok/Hero',
    tabbed_content: 'storyblok/TabbedContent',
  },
}),

Next, let’s create storyblok/TabbedContent.astro with the following preliminary content:

---
import { storyblokEditable } from '@storyblok/astro'

const { blok } = Astro.props
---

<section {...storyblokEditable(blok)}></section>

This will serve as our wrapper component, wherein we can subsequently import the actual component using the UI framework of our choice and dynamically assign a client directive derived from the value we receive from Storyblok.

Step 6: Render the Tabbed Content Component using Vue

With everything in place, we can now start building our tabbed content component using Vue. First, we need to install Vue in our project. Fortunately, Astro makes that very simple for us. All we have to do is to run the following command:

npx astro add vue

Next, let’s create a new Vue component (storyblok/TabbedContent.vue) with the following content:

<script setup lang="ts">
import { ref } from 'vue'

const props = defineProps({ blok: Object })

// Index of the currently visible tab.
const activeTab = ref(0)

const setActiveTab = (index: number) => {
  activeTab.value = index
}

// Each tab takes up an equal share of the available width.
const tabWidth = ref(100 / props.blok.entries.length)
</script>

<template>
  <ul class="relative border-b border-gray-900 mb-8 flex">
    <li
      v-for="(entry, index) in blok.entries"
      :key="entry._uid"
      :style="'width:' + tabWidth + '%'"
    >
      <button
        @click.prevent="setActiveTab(index)"
        class="cursor-pointer p-3 text-center"
        :class="index === activeTab ? 'font-bold' : ''"
      >
        {{ entry.headline }}
      </button>
    </li>
  </ul>
  <section
    v-for="(entry, index) in blok.entries"
    :key="entry._uid"
    :id="'entry-' + entry._uid"
  >
    <div v-if="index === activeTab" class="grid grid-cols-2 gap-12">
      <div>
        <p>{{ entry.description }}</p>
        <a
          :href="entry.link?.cached_url"
          class="inline-flex bg-gray-900 text-white py-3 px-6 mt-6"
          >Explore {{ entry.headline }}</a
        >
      </div>
      <img :src="entry.image?.filename" :alt="entry.image?.alt" />
    </div>
  </section>
</template>

<style scoped>
ul:after {
  content: '';
  @apply absolute bottom-0 left-0 h-0.5 bg-gray-900 transition-all duration-500;
  width: v-bind("tabWidth + '%'");
  margin-left: v-bind("activeTab * tabWidth + '%'");
}
</style>

Finally, we can import this component in TabbedContent.astro, pass the whole blok object as a property and assign the client directive based on the value we receive from Storyblok.

---
import { storyblokEditable } from '@storyblok/astro'
import TabbedContent from './TabbedContent.vue'

const { blok } = Astro.props
---

<section {...storyblokEditable(blok)} class='container py-12'>
  {blok.directive === 'load' && <TabbedContent blok={blok} client:load />}
  {blok.directive === 'idle' && <TabbedContent blok={blok} client:idle />}
  {blok.directive === 'visible' && <TabbedContent blok={blok} client:visible />}
</section>

Our Astro wrapper component is also the right place to assign the client directive to the framework component. Since client directives cannot be composed dynamically at runtime, and since we would like to give content creators the possibility to choose between different directives, we render the component conditionally, once per directive, based on the value we retrieve from Storyblok.

Our tabbed content component will now be rendered correctly. Using Astro’s dynamic islands and hydration directives can tremendously boost your site’s performance, and combined with Storyblok, you provide content creators with straightforward and easy-to-use possibilities to tap into the power of this next-gen approach.

Let’s conclude our case study by examining how to render the very same component with Svelte (or any other popular framework supported by Astro).

Step 7: Render The Tabbed Content Component Using Svelte

First of all, as before, we need to install Svelte in our Astro project. Again, we can easily accomplish that by running the following command:

npx astro add svelte

Now, we can create the Svelte component (storyblok/TabbedContent.svelte) with the following content:

<script>
  export let blok

  let tabWidth = 100 / blok.entries.length
  let activeTab = 0
  let marginLeft = 0

  const setActiveTab = (index) => {
    activeTab = index
    marginLeft = activeTab * tabWidth
  }
</script>

<ul
  class="relative border-b border-gray-900 mb-8 flex"
  style="--tab-width: {tabWidth}%; --margin-left: {marginLeft}%;"
>
  {#each blok.entries as entry, index (entry._uid)}
    <li style="width: var(--tab-width)">
      <button
        class="{index === activeTab
          ? 'font-bold'
          : ''} w-full cursor-pointer p-3 text-center"
        on:click={() => setActiveTab(index)}>{entry.headline}</button
      >
    </li>
  {/each}
</ul>
{#each blok.entries as entry, index (entry._uid)}
  {#if index === activeTab}
    <section id={entry._uid}>
      <div class="grid grid-cols-2 gap-12">
        <div>
          <p>{entry.description}</p>
          <a
            href={entry.link?.cached_url}
            class="inline-flex bg-gray-900 text-white py-3 px-6 mt-6"
            >Explore {entry.headline}</a
          >
        </div>
        <img src={entry.image?.filename} alt={entry.image?.alt} />
      </div>
    </section>
  {/if}
{/each}

<style>
  ul:after {
    content: '';
    @apply absolute bottom-0 left-0 h-0.5 bg-gray-900 transition-all duration-500;
    width: var(--tab-width);
    margin-left: var(--margin-left);
  }
</style>

To load the Svelte component instead of the Vue component, the only change we have to make is the import in TabbedContent.astro:

//import TabbedContent from './TabbedContent.vue'
import TabbedContent from './TabbedContent.svelte'

And that’s it! Everything else can remain the same. Amazingly, our tabbed content component still works but is now using Svelte instead of Vue. Since Astro makes it possible to pass down the blok object, containing all of the data coming from Storyblok, as a property to the different framework components, we can simply reuse all of the information in various environments.

Wrapping Up

With Astro, you as a developer benefit from phenomenal DX, mind-blowing performance out of the box, and a high degree of flexibility thanks to the possibility to bring your own component framework (or even combine multiple component frameworks) and the availability of integrations. Moreover, Astro is highly future-proof: Considering moving from Vue to Svelte? From React to Vue? Astro makes the transition seamless, keeping the foundation of your project the same.

With Storyblok, your clients or fellow colleagues from the content marketing team get to enjoy a high degree of autonomy and flexibility, effectively utilizing the full potential of your Astro code base. Landing pages can be created in a matter of mere minutes, and dynamic, interactive components will have no negative impact on their performance.

Taking everything into account, Astro and Storyblok may very well be the last technology stack you will ever need for your content-driven website projects.


A Step-By-Step Guide To Building Accessible Carousels

You can spot them on every other homepage: carousel widgets with auto-rotating content. Those who use them want to show more content within a limited space. At the same time, they aim to present the most important messages as prominently as possible. However, if design and development do not receive proper attention, you run a high risk of creating usability and accessibility issues.

This particularly affects people with cognitive, motor, or visual impairments, for whom using this complex design pattern is usually even more challenging. Whether you call it "eye candy" or "devil’s spawn", if you want to implement the carousel in a user-friendly and accessible way, a whole range of requirements needs to be considered.

“Should I Use A Carousel?”

As widely used as they are, carousel widgets have a bad reputation among UX professionals. They are ignored by users (Nielsen Norman Group): only 1% interact with a carousel at all, and 89% of those interactions involve only the first slide (Eric Runyon). Jared Smith even responds to the question "Should I Use A Carousel?" by saying, "Seriously, you really shouldn't." Others state that there isn’t one answer: you have to consider various factors, such as function, design, platform (desktop or mobile) and, most importantly, context. For whatever reason you include a carousel on a website, make sure it is user-friendly and accessible.

Accessible Implementation

For a carousel to be accessible, it is essential that all users can perceive the widget and its components and that they can easily navigate it using a mouse, touch gestures, a keyboard, a screen reader, and any other assistive technology. The rotation of the slides must not interfere with users' ability to operate the carousel or use the website as a whole.

The W3C’s WAI-ARIA working group provides guidance on implementing an accessible carousel in its ARIA Authoring Practices Guide (APG). This article is based on those recommendations and aims to create a profound understanding of the elements and ARIA attributes applied, and of their impact on users.

Auto-Rotating

I assume we have all been there at some point: Constantly changing content can be rather annoying. For some people, however, uncontrolled rotation can be absolutely critical:

  • People need different amounts of time to read content.
    If the available time is not enough, auto-advancing carousels can become very frustrating. It’s not very satisfying to only be able to skim the text before new content appears unprompted.
  • Some people, especially people with cognitive impairments such as attention deficit disorder, may find moving content distracting.
    The movement could prevent users from concentrating and interacting with the rest of the website. People on the autistic spectrum may even have to leave such a page entirely.
  • Screen reader users may not be aware of the rotation.
    They might read a heading on "slide 1" and execute the keyboard command for "read next item." Instead of hearing the next element on "slide 1", the screen reader announces a text from "slide 2" — without the person knowing that the element just read out is from an entirely new context.

WCAG 2.1, therefore, requires the possibility to stop movement (2.2.2 Pause, Stop, Hide).

  • Add a "Pause" or "Stop" button if you cannot do without auto-rotation at all.
    The "Pause" button is the minimum solution (unless the movement ends automatically within 5 seconds). In addition, usability experts recommend the following:
  • Pause auto-rotation as long as a slide is being hovered.
    There is usually a correlation between the mouse position and the user’s interest in a piece of content. Don’t risk the slide changing a few milliseconds before a person activates a link, sending them to the wrong page and, to make things worse, leaving them unaware that this was not the intended destination.
  • Stop the rotation permanently when users set focus on interactive elements using a keyboard or interact with the carousel (a minimal JavaScript sketch of this behavior follows this list).
    An action such as actively changing slides also indicates that the user might be interested in the content. People may temporarily explore other parts of the page but possibly want to come back. Furthermore, you make keyboard navigation easier if the carousel stops as soon as a control receives focus.
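The exact implementation depends on your widget, but the basic behavior can be sketched in a few lines of vanilla JavaScript. The .carousel selector and the nextSlide() helper below are assumptions for illustration only:

// Auto-rotation that pauses on hover and stops permanently on focus or interaction.
const carousel = document.querySelector('.carousel');

function nextSlide() {
  // Advance to the next slide (implementation depends on your widget).
}

let stopped = false;
let rotation = setInterval(nextSlide, 5000);

// Pause while a slide is hovered, resume when the pointer leaves.
carousel.addEventListener('mouseenter', () => clearInterval(rotation));
carousel.addEventListener('mouseleave', () => {
  if (!stopped) rotation = setInterval(nextSlide, 5000);
});

// Stop permanently as soon as a control receives keyboard focus
// or the user interacts with the carousel.
const stopRotation = () => {
  stopped = true;
  clearInterval(rotation);
};
carousel.addEventListener('focusin', stopRotation);
carousel.addEventListener('pointerdown', stopRotation);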

Visibility

For all of us, user interface elements are intuitively easier to use if we can perceive them well. For people with visual impairments, however, this aspect is crucial.

  • Ensure sufficient contrast.
    Most often, icons are used to indicate the controls: simple arrows for advancing the slides as well as the widely used dots for selectively fading in. However, these icons often have poor contrast against the background, especially when placed on images. The easiest way to avoid contrast issues is to position the icons outside the slides.
  • Provide a visible focus indicator.
    People navigating with a keyboard need to be able to see which interactive element they are currently focusing on. Often, the design merely provides (if at all) a slight, barely discernible change in the color of the controls. If color is the only way to indicate focus, both colors (the one used for the non-focused and the focused state) must have a contrast ratio of at least 3.0:1 with each other. This minimum contrast requirement also applies to the icon against its background and to other kinds of focus indicators (e.g., a border that indicates the focus state of a button).
  • The size of the controls affects their visibility as well.
    In addition, a reasonable target size combined with sufficient distance between interactive elements reduces the risk of accidentally activating a nearby button or a linked slide. This not only benefits people with limited dexterity but also minimizes errors when using mobile devices.
  • Indicate the current position within the set of slides.
    It is not the only option, but often this is done by highlighting one of several progress dots. To ensure that people with color deficiencies can access visual information, you must not rely on color alone but use a color-independent indicator, for example, a filled and unfilled dot.

Activation With A Simple Pointer Input

Many people are used to moving slides on touch devices by swiping. But there are also users who cannot perform this gesture because of their impairment or because of the assistive technology they use. In addition to swiping, offer users an alternative way of navigating (WCAG 2.5.1 Pointer Gestures).

Provide interaction with a simple pointer input (e.g., simple click or tap). That means if there are no slide picker controls to display content directly, you need to implement "Previous" and "Next" buttons.

Structure, Semantics And Labelling

The carousel widget is designed primarily for two-dimensional, visual use. For screen reader users who do not see the carousel as a whole, it is much harder to understand the composition of controls and content. These users have different perspectives when using a website. They explore content linearly, tab through interactive elements, and use shortcuts for navigation. The challenge is to provide a meaningful structure and semantics and inform users about controls and slides in a way that enables them to build a mental model of the widget. Use semantic markup for different sections, controls, and content and proper labeling to ensure good orientation.

Regions And Groups

Using the HTML element <section> (or role="region") with an accessible name, you can set a generic landmark and thus mark smaller regions of a page. Landmarks provide a way to identify the structure of a web page to screen reader users and help them with orientation. Using a screen reader, you can display these sections in a table of contents and use specific keyboard shortcuts for landmarks to move focus to the corresponding content.

When navigating to the landmark, the assistive technology announces the name and type of the section, for example: "[name] — region". In browse mode (exploring content with arrow keys), the screen reader tells users when they enter or exit a region. They will hear something like "[name] — region" or "out of region". (This output in browse mode is currently well-supported by the screen reader NVDA, but unfortunately, with JAWS, users have to set screen reader settings accordingly.)

role="group" is intended to form a logical set of items as a group. Since it is not a landmark, the group is not included in a landmark listing displayed by the user. For that reason, use this role in contrast to role="region" for less important sections of a page. You could use role="group", for example, to group (custom) form controls (compare <fieldset> and <legend> in HTML) but it is not limited to this.

With role="group", it is also necessary to use the role with an accessible name. The screen reader will convey the boundaries of the group by announcing something like "[name] — grouping" or "out of grouping," and the assistive technology will inform users about the start and end of the group (in this case also JAWS in the default setting).

The following applies to role="region" and role="group": When the first interactive element within the region or group receives focus, the screen reader will announce its name and role, plus the role and label of the focused control.

The Carousel Container

Currently, there is no specific HTML element or ARIA role to mark up a carousel. However, you can still convey semantic information to make the existence of the carousel, its sections, and the meaning of the controls clear and understandable to non-visual users. That will help them better adapt to the widget and its interaction patterns.

role="region|group" + accessible name

An easy way to make non-visual users aware of the carousel is to use a landmark or a group role: To get there, use the <section> element (or role="region") or role="group" for the container that encompasses all the carousel elements (including the carousel controls and slides). In addition, provide an accessible name that identifies the carousel as such.

Carousel container marked with a landmark:

<div role="region" aria-label="Carousel">
    <!-- Slides and controls -->
</div>

Screen reader output when encountering the widget: "Carousel — region"

The W3C Authoring Practices go beyond this basic markup and recommend using the aria-roledescription attribute:

role="region|group" + aria-roledescription + accessible name

With aria-roledescription, you create an even more specific output for screen reader users. You thus define the meaning of the container more precisely.

Code sample (HTML): Carousel container marked with a landmark with a specified meaning

<div role="region" aria-roledescription="carousel" aria-label="Design patterns">
    <!-- Slides and controls -->
</div>

Screen reader output: "Design patterns — carousel" instead of (without using aria-roledescription) "Design patterns — region." In the previous example, we have seen a more generic accessible name ("Carousel"). With role="region", this has resulted in the output: "Carousel — region".

aria-roledescription changes the way a screen reader announces the role of an element. This allows authors to define a localized and human-readable description for a role. The screen reader announces the value of aria-roledescription (a string provided by the author) instead of the original role. It overrides the role to which the attribute is applied. aria-roledescription should only be used on elements that have a valid implicit or explicit (ARIA) role.

Back to our carousel, this means instead of region or grouping (you can also use role="group" instead of role="region"), the value of aria-roledescription is read out by the assistive technology. Note: If the aria-roledescription property is set to "carousel", you should not set the aria-label property to "carousel" because this would be redundant.

The ARIA specification recommends the attribute mainly for clarifying the purpose of non-interactive container roles such as group or region. Otherwise, use it with great caution:

“Do not use aria-roledescription to change the way a role is announced, if it will prevent the user from knowing what to do with it.”

Léonie Watson

The Slide Container

Just as you communicate the boundaries of the carousel as a whole, you can do the same for the slides. Each slide represents simply a smaller entity within the "carousel" composite.

role="group" + aria-roledescription + accessible name

Container for slides marked as "group" with a specified meaning:

<div role="group" aria-roledescription="slide" aria-labelledby="carousel_item-1_heading">
  <h2 id="carousel_item-1_heading">Modal dialogs</h2>
  <!-- Some more content -->
</div>

Screen reader output: "Slide — heading level 2 — Modal dialogs"

In this example, the heading of the slide serves as the accessible name of the group. The aria-labelledby property is set to the id of the heading. You could also use the aria-label attribute to convey the position within the set of slides, for example, "1 of 3". In particular, that makes sense if no unique name is available to identify the slide.

Again, if you set aria-roledescription to "slide", the aria-label property should not be set to "slide" to avoid redundancy.

The Slides

Slides that are not visible should be hidden, not only visually but from all users (including users of assistive technology). You can either use CSS (display: none or visibility: hidden) or the HTML hidden attribute.

If you truly hide the slides that are not visible, you should not use list markup for the whole set of slides. Screen readers do announce the number of list items (which may seem helpful) but will ignore hidden ones. Concerning the carousel, list markup would not result in an output of the total number of list items but just the visible ones. If only one slide is visible, that wouldn’t be very helpful.

The Container For The Controls

What about the container for the controls? It also encompasses a set of items. Because of this, it is reasonable to apply role="group" to it. And again, the group needs an appropriate group name.

role="group" + accessible name

Container for controls marked as a group with an accessible name:

<div role="group" aria-label="Slide controls">
    <!-- Slide controls -->
</div>

Screen reader output: "Slide controls — grouping"

In this context, the accessible name (the group name) reflects the purpose of the controls, for example, "Slide controls" or "Choose a slide".

The ARIA Authoring Practices refer to this implementation as a "grouped carousel". Another approach is described as "tabbed carousel". Following the "tabbed" pattern, you would use role="tablist", role="tab" for the controls and role="tabpanel" (instead of role="group") for the slide container. Don’t use aria-roledescription in this case. The appropriate ARIA properties are specified in the tab pattern.

The Controls

Slide picker controls allow users to display a specific slide. They visually present the total number of slides and indicate the current position within the set. It is rather popular to use "dots" for that purpose.

Operable Keyboard Controls

All controls must be operable with the keyboard. Use the native button element for controls of the grouped carousel. That involves the "Previous" and "Next" buttons, the "Pause" button, and the slide picker controls.

If you cannot use native HTML, add semantic information by applying an explicit ARIA role (role="button") to the clickable div or span element. Use tabindex="0" to include the element in the tab sequence. Furthermore, you need to incorporate JavaScript that provides the interactions expected of a button (activation with Space and Enter). Be aware that ARIA roles do not cause browsers to provide keyboard behaviors.
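As a rough sketch of what that JavaScript needs to do, the handler below makes a hypothetical non-native control respond to Enter and Space; a native button gives you this behavior for free:

// Minimal keyboard support for a <div role="button" tabindex="0"> fallback.
const fakeButton = document.querySelector('[role="button"]');

fakeButton.addEventListener('keydown', (event) => {
  // Native buttons activate on Enter (keydown) and Space (keyup);
  // for a simple sketch, handling both on keydown is usually sufficient.
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault(); // prevent the page from scrolling when Space is pressed
    fakeButton.click();     // trigger the same handler a mouse click would
  }
});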

Grouped controls

<div role="group" aria-label="Slide controls">
  <button aria-label="Show slide 1 of 4"> 
      <!-- SVG icon -->
  </button>
  <!-- Some more controls -->
</div>

Screen reader output: "Slide controls — grouping — Show slide 1 of 4 — button"

The Accessible Name Of The Controls

Choose an appropriate accessible name (text alternative) in context with the group name. Regarding the above example, the screen reader would announce "Slide controls — grouping — Show slide 1 of 4 — button" when focusing on the first control in the group (or "Slide controls — grouping — Pause — button" if the first control is a "Pause" button).

Assuming the group name is "Choose a slide," then the heading of the corresponding slide would also be a reasonable label for the button in this context. Instead of using the aria-label attribute, you could also specify the label using visually hidden text.

Along with the group name and the slide picker elements, the "Previous," "Next," and "Pause" buttons need an accessible name. That means do not forget about the text alternatives for these icons either.

Semantic Markup Of The Active Control

Do not just indicate visually but also semantically which control represents the currently displayed slide. You can use aria-disabled="true" for this purpose. In this case, aria-disabled is preferable to the HTML attribute disabled. Unlike disabled, a button with an aria-disabled attribute is still included in the tab sequence of the page. You may also use the aria-current property set to "true".

Focus Order

For a webpage to be accessible using a keyboard or assistive technology, it is important to ensure a proper focus order. That is the order in which you tab through interactive elements. An appropriate order generally means that it follows the visual flow of the page. Users can navigate in a logical and predictable way without loss of orientation.

By default, the tab order is set by order of the elements in the source code (more precisely, in the DOM). That means that the order in which you write the HTML will affect the focus order when navigating with the keyboard. The DOM order also affects the reading order, that is, the order in which screen reader users in browse mode linearly explore content using the arrow keys (non-interactive content included).

The DOM and, with it, the focus sequence should match as closely as possible to the visual order of the page. In this way, sighted keyboard users can easily follow the sequence.

Content changes that precede the current focus cause issues with sequential navigation since screen reader users will not know about such changes (apart from dynamically displayed status messages implemented using live regions). Users would need to navigate backwards to reach the changed content, which makes navigation more difficult.

A Proper And Understandable Focus Order For A Carousel

Many carousel implementations are intended to visually display the controls below the slides:

  1. Previous
  2. Slide
  3. Next
  4. Pause
  5. Slide Controls

If this order matches the order of the elements in the DOM, both the activation of "Next" and the activation of the slide controls will cause a content change (display of a new slide) before the triggering control. That is, users need to navigate backwards if they want to explore the content of the slide with the screen reader or follow a link on the slide.

DOM Order

The focus order is easier to use, and the reading order is more meaningful if the controls (at least the "Pause" button and the slide controls) are positioned in the DOM before the changing slides. If you still want the controls to be visually positioned below the carousel, use CSS for this purpose. When in doubt, a good focus and reading order for the carousel is more important than the optimal conformity with the visual order:

  1. Pause
  2. Slide controls
  3. Previous
  4. Slide
  5. Next

or

  1. Pause
  2. Slide controls
  3. Previous
  4. Next
  5. Slide

Notice that people working with magnification software will locate the "Pause" button more quickly if it is also visually positioned at the beginning of the carousel. They do not see the carousel as a whole (depending on the magnification level) but might explore the content by moving the keyboard focus or the mouse pointer, which results in repositioning their viewport.

Pause And Previous/Next Buttons

When activating the Pause/Play and Previous/Next controls, the focus has to remain on the button. That allows sighted keyboard users to repeatedly press the button and perceive the changing content at the same time.

The Keyboard Navigation For "Tabbed Carousels"

If you prefer following the "tabbed carousel" pattern, allow users to navigate with arrow keys within the set of controls. In addition, ARIA markup must correspond to the tab pattern (unlike the "grouped carousel") to make screen reader users understand the expected operation based on the information announced by the screen reader.
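A rough sketch of that arrow-key behavior, using a roving tabindex, might look like the following; it assumes the slide picker controls are marked up with role="tab" inside a role="tablist" container:

// Move focus between role="tab" controls with the arrow keys (roving tabindex).
const tabs = Array.from(document.querySelectorAll('[role="tab"]'));

tabs.forEach((tab, index) => {
  tab.addEventListener('keydown', (event) => {
    let newIndex = null;
    if (event.key === 'ArrowRight') newIndex = (index + 1) % tabs.length;
    if (event.key === 'ArrowLeft') newIndex = (index - 1 + tabs.length) % tabs.length;
    if (newIndex === null) return;

    event.preventDefault();
    tab.setAttribute('tabindex', '-1');           // only one tab stays in the tab sequence
    tabs[newIndex].setAttribute('tabindex', '0');
    tabs[newIndex].focus();                       // whether selection follows focus is up to you
  });
});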

Working Example

Complete carousel widget:

<div role="region" aria-roledescription="carousel" aria-label="Tips & Techniques">
  <div role="group" aria-label="Slide controls">
    <button aria-label="Stop auto-rotation">
      <!-- SVG icon -->
    </button>
    <button aria-disabled="true">
      <span class="hide-element">Show slide 1 of 3: Hiding Accessibly</span>
      <!-- SVG icon -->
    </button>
    <button aria-disabled="false">
      <span class="hide-element">Show slide 2 of 3: Accessible Contrasts</span>
      <!-- SVG icon -->
    </button>
    <button aria-disabled="false">
      <span class="hide-element">Show slide 3 of 3: Semantics, WAI-ARIA and Assistive Technologies</span>
      <!-- SVG icon -->
    </button>
    <button aria-label="Previous slide">
      <!-- SVG icon -->
    </button>
    <button aria-label="Next slide">
      <!-- SVG icon -->
    </button>
  </div>
  <div role="group" aria-roledescription="Slide" aria-labelledby="carousel-item-1__heading" id="carousel-item-1">
    <h2 id="carousel-item-1__heading">Hiding accessibly</h2>
    <!-- Further slide contents -->
  </div>
  <div hidden role="group" aria-roledescription="Slide" aria-labelledby="carousel-item-2__heading" id="carousel-item-2">
    <h2 id="carousel-item-2__heading">Accessible Contrasts</h2>
    <!-- Further slide contents -->
  </div>
  <div hidden role="group" aria-roledescription="Slide" aria-labelledby="carousel-item-3__heading" id="carousel-item-3">
    <h2 id="carousel-item-3__heading">Semantics, WAI-ARIA and Assistive Technologies</h2>
    <!-- Further slide contents -->
  </div>
</div>

Wrapping Up

  1. Add a button to pause or stop all movement.
  2. Ensure that controls have sufficient contrast (meet WCAG’s color contrast requirements).
  3. Provide a visible focus indicator.
  4. Don’t rely solely on swiping. In addition, provide interaction with a simple pointer input (e.g., simple click or tap).
  5. Provide semantic markup and labeling to ensure that screen readers can identify the carousel as a whole, the slides, and the set of controls. For the "grouped carousel", use role="group" with an aria-label or aria-labelledby attribute. For the carousel container, you can also use a landmark (role="region" with an accessible name) instead of role="group".
  6. The aria-roledescription attribute allows you to specify the meaning of the container tagged with role="region" or role="group".
  7. If you go for the "tabbed carousel", use the ARIA attributes specified in the tab pattern. Ensure keyboard navigation within the set of "tabs" can be operated with the arrow keys.
  8. All controls must be keyboard operable and require meaningful text alternatives for icons.
  9. Ensure a proper focus order. It is recommended to position at least "Pause" and slide controls in the DOM before the slides, even if this order may differ slightly from the visual tab order.

Conclusion

This article describes one way of implementing carousel widgets in an accessible way. Even W3C working groups provide different approaches in the ARIA Authoring Practices Guide (APG) and the W3C Accessibility Tutorial (see also a discussion on GitHub and the comment by Jason Web regarding user testing of "tabbed carousels" and focus management). The important thing is:

  • Be familiar with semantic markup options and their impact on users.
  • The objective is to provide a predictable and understandable way of operating the widget, also for non-visual users.
  • Test with a keyboard and a screen reader. This will show most clearly whether you have met your goal or not.
  • If you do not have the resources to implement a carousel in an accessible way or if it is just the wrong pattern, better explore alternatives, and you might notice the benefits they entail.
“Carousels are complex components, and making complex components accessible adds more complexity.”

Léonie Watson

Note: This article was first published in German on tollwerk.de in April 2022.


Understanding App Directory Architecture In Next.js

Since the Next.js 13 release, there’s been some debate about how stable the shiny new features packed into the announcement are. In “What’s New in Next.js 13?” we covered the release announcement and established that, though it carries some interesting experiments, Next.js 13 is definitely stable. And since then, most of us have seen a very clear landscape when it comes to the new <Link> and <Image> components, and even the (still beta) @next/font; these are all good to go, instant profit. Turbopack, as clearly stated in the announcement, is still alpha: aimed strictly at development builds and still heavily under development. Whether you can or can’t use it in your daily routine depends on your stack, as there are integrations and optimizations still somewhere on the way. This article’s scope is strictly about the main character of the announcement: the new App Directory architecture (AppDir, for short).

The App directory is the one that keeps bringing questions because it is partnered with an important evolution of the React ecosystem — React Server Components — and with edge runtimes. It clearly is the shape of the future of our Next.js apps. It is experimental, though, and its roadmap is not something we can expect to be completed in the next few weeks. So, should you use it in production now? What advantages can you get out of it, and what pitfalls may you find yourself climbing out of? As always, the answer in software development is the same: it depends.

What Is The App Directory Anyway?

It is the new strategy for handling routes and rendering views in Next.js. It is made possible by a couple of different features tied together, and it is built to make the most out of React concurrent features (yes, we are talking about React Suspense). It brings, though, a big paradigm shift in how you think about components and pages in a Next.js app. This new way of building your app has a lot of very welcomed improvements to your architecture. Here’s a short, non-exhaustive list:

  • Partial Routing.
    • Route Groups.
    • Parallel Routes.
    • Intercepting Routes.
  • Server Components vs. Client Components.
  • Suspense Boundaries.
  • And much more, check the features overview in the new documentation.

A Quick Comparison

When it comes to the current routing and rendering architecture (in the Pages directory), developers were required to think of data fetching per route.

  • getServerSideProps: Server-Side Rendered;
  • getStaticProps: Server-Side Pre-Rendered and/or Incremental Static Regeneration;
  • getStaticPaths + getStaticProps: Server-Side Pre-Rendered or Static Site Generated.
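For instance, a route opting into Server-Side Rendering in the Pages directory exports a getServerSideProps function next to its page component. Here is a minimal sketch with a placeholder endpoint and props:

// pages/profile.jsx: data is fetched on every request and passed to the page as props.
export async function getServerSideProps() {
  const res = await fetch('https://example.com/api/user'); // placeholder endpoint
  const user = await res.json();

  return { props: { user } };
}

export default function Profile({ user }) {
  return <h1>{user.name}</h1>;
}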

Historically, it hadn’t been possible to choose the rendering strategy on a per-page basis: most apps were either going full Server-Side Rendering or full Static Site Generation. Next.js created enough abstractions to make thinking of routes individually a standard within its architecture.

Once the app reaches the browser, hydration kicks in, and it’s possible to have routes collectively sharing data by wrapping our _app component in a React Context Provider. This gave us tools to hoist data to the top of our rendering tree and cascade it down toward the leaves of our app.

import { type AppProps } from 'next/app';

export default function MyApp({ Component, pageProps }: AppProps) {
  // SomeProvider stands in for your own React Context Provider.
  return (
    <SomeProvider>
      <Component {...pageProps} />
    </SomeProvider>
  );
}

The ability to render and organize required data per route made this approach an almost-good tool for when data absolutely needed to be available globally in the app. And while this strategy will allow data to spread throughout the app, wrapping everything in a Context Provider bundles hydration to the root of your app. It is no longer possible to render any branches of that tree (any route within that Provider context) on the server.

Here enters the Layout Pattern. By creating wrappers around pages, we could opt in or out of rendering strategies per route again, instead of making an app-wide decision once. Read more about how to manage state in the Pages Directory in the article “State Management in Next.js” and in the Next.js documentation.

The Layout Pattern proved to be a great solution. Being able to granularly define rendering strategies is a very welcomed feature. So the App directory comes in to put the layout pattern front and center. As a first-class citizen of Next.js architecture, it enables enormous improvements in terms of performance, security, and data handling.
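One common way this pattern is implemented in the Pages directory (also described in the Next.js documentation) is to let each page declare its own wrapper via a getLayout function; DashboardLayout below is an assumed component:

// pages/_app.jsx: use the page's layout if it declares one, otherwise render it bare.
export default function MyApp({ Component, pageProps }) {
  const getLayout = Component.getLayout ?? ((page) => page);
  return getLayout(<Component {...pageProps} />);
}

// pages/dashboard.jsx
import DashboardLayout from '../components/DashboardLayout'; // assumed layout component

export default function Dashboard() {
  return <p>Hello from the dashboard</p>;
}

Dashboard.getLayout = (page) => <DashboardLayout>{page}</DashboardLayout>;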

With React concurrent features, it’s now possible to stream components to the browser and let each one handle its own data. So the rendering strategy is even more granular now — instead of page-wide, it’s component-based. Layouts are nested by default, which makes it clearer to the developer what impacts each page based on the file-system architecture. And on top of all that, it is mandatory to explicitly turn a component client-side (via the “use client” directive) in order to use a Context.
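As a minimal sketch of that last constraint, a component that reads from a Context has to opt into client rendering; CartContext here is an assumed module:

// components/cart-badge.jsx
'use client'; // marks the client boundary; everything imported below ships to the browser

import { useContext } from 'react';
import { CartContext } from './cart-context'; // assumed Context module

export default function CartBadge() {
  const { items } = useContext(CartContext);
  return <span>{items.length}</span>;
}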

Building Blocks Of The App Directory

This architecture is built around the Layout Per Page Architecture. Now, there is no _app component, nor is there a _document component. They have both been replaced by the root layout.jsx component. As you would expect, that’s a special layout that will wrap up your entire application.

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
      </body>
    </html>
  );
}

The root layout is our way to manipulate the HTML returned by the server to the entire app at once. It is a server component, and it does not render again upon navigation. This means any data or state in a layout will persist throughout the lifecycle of the app.

While the root layout is a special component for our entire app, we can also have root components for other building blocks:

  • loading.jsx: to define the Suspense Boundary of an entire route;
  • error.jsx: to define the Error Boundary of our entire route;
  • template.jsx: similar to the layout, but re-renders on every navigation. Especially useful to handle state between routes, such as in or out transitions.

All of those components and conventions are nested by default. This means that /about will be nested within the wrappers of / automatically.

Finally, we are also required to have a page.jsx for every route, as it will define the main component to render for that URL segment (also known as the place where you put your components!). These are obviously not nested by default and will only show in our DOM when there’s an exact match to the URL segment they correspond to.
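To make those file conventions more tangible, here is a rough sketch of a hypothetical /about route; each snippet represents a separate file:

// app/about/page.jsx: the component rendered for the /about URL segment
export default function AboutPage() {
  return <h1>About us</h1>;
}

// app/about/loading.jsx: the Suspense fallback shown while the segment loads
export default function Loading() {
  return <p>Loading…</p>;
}

// app/about/error.jsx: the Error Boundary; error components must be client components
'use client';

export default function ErrorBoundary({ error, reset }) {
  return <button onClick={() => reset()}>Try again</button>;
}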

There is much more to the architecture (and even more coming!), but this should be enough to get your mental model right before considering migrating from the Pages directory to the App directory in production. Make sure to check on the official upgrade guide as well.

Server Components In A Nutshell

React Server Components allow the app to leverage infrastructure towards better performance and overall user experience. For example, the immediate improvement is on bundle size since RSC won’t carry over their dependencies to the final bundle. Because they’re rendered in the server, any kind of parsing, formatting, or component library will remain on the server code. Secondly, thanks to their asynchronous nature, Server Components are streamed to the client. This allows the rendered HTML to be progressively enhanced on the browser.

So, Server Components lead to a more predictable, cacheable, and constant size of your final bundle, breaking the linear correlation between app size and bundle size. This immediately puts RSC as a best practice versus traditional React components (which are now referred to as client components to ease disambiguation).

On Server Components, fetching data is also quite flexible and, in my opinion, feels closer to vanilla JavaScript — which always smooths the learning curve. For example, understanding the JavaScript runtime makes it possible to define data-fetching as either parallel or sequential and thus have more fine-grained control on the resource loading waterfall.

  • Parallel Data Fetching, waiting for all:
import TodoList from './todo-list'

async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId)
  const todosResponse = getTodos(userId)

  // Wait for the promises to resolve.
  const [user, todos] = await Promise.all([userResponse, todosResponse])

  return (
    <>
      <h1>{user.name}</h1>
      <TodoList list={todos}></TodoList>
    </>
  )
}
  • Parallel, waiting for one request, streaming the other:
async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId)
  const todosResponse = getTodos(userId)

  // Wait only for the user.
  const user = await userResponse

  return (
    <>
      <h1>{user.name}</h1>
            <Suspense fallback={<div>Fetching todos...</div>}>
          <TodoList listPromise={todosResponse}></TodoList>
            </Suspense>
    </>
  )
}

async function TodoList ({ listPromise }) {
  // Wait for the todo list promise to resolve.
  const todos = await listPromise;

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}

In this case, <TodoList> receives an in-flight Promise and needs to await it before rendering. The app will render the suspense fallback component until it’s all done.

  • Sequential Data Fetching fires one request at a time and awaits for each:
async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  const user = await getUser(userId)


  return (
    <>
      <h1>{user.name}</h1>
      <Suspense fallback={<div>Fetching todos...</div>}>
        <TodoList userId={userId} />
      </Suspense>
    </>
  )
}

async function TodoList ({ userId }) {
  const todos = await getTodos(userId);

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}

Now, Page will fetch and wait on getUser, then it will start rendering. Once it reaches <TodoList>, it will fetch and wait on getTodos. This is still more granular than what we are used to with the Pages directory.

Important things to note:

  • Requests fired within the same component scope will be fired in parallel (more about this at Extended Fetch API below).
  • Same requests fired within the same server runtime will be deduplicated (only one is actually happening, the one with the shortest cache expiration).
  • For requests that won’t use fetch (such as third-party libraries like SDKs, ORMs, or database clients), route caching will not be affected unless manually configured via segment cache configuration.
export const revalidate = 600; // revalidate every 10 minutes

export default async function Contributors({
  params
}: {
  params: { projectId: string };
}) {
  const { projectId } = params
  const { contributors } = await myORM.db.workspace.project({ id: projectId })

  return <ul>{/* ... */}</ul>;
}

To point out how much more control this gives developers: within the Pages directory, rendering would be blocked until all data was available. When using getServerSideProps, the user would still see the loading spinner until data for the entire route was available. To mimic this behavior in the App directory, the fetch requests would need to happen in the layout.tsx for that route, which is exactly why you should avoid doing it. An “all or nothing” approach is rarely what you need, and it leads to worse perceived performance compared with this granular strategy.

Extended Fetch API

The syntax remains the same: fetch(route, options). According to the Web Fetch Spec, the options.cache value determines how this API interacts with the browser cache. In Next.js, however, it interacts with the framework’s server-side HTTP cache instead.

When it comes to the extended Fetch API for Next.js and its cache policy, these values are important to understand:

  • force-cache: the default, looks for a fresh match and returns it.
  • no-store or no-cache: fetches from the remote server on every request.
  • next.revalidate: the same syntax as ISR, sets a hard threshold to consider the resource fresh.
fetch(`https://route`, { cache: 'force-cache', next: { revalidate: 60 } })

The caching strategy allows us to categorize our requests:

  • Static Data: persist longer. E.g., blog post.
  • Dynamic Data: changes often and/or is a result of user interaction. E.g., comments section, shopping cart.

By default, all data is considered static. This is because force-cache is the default caching strategy. To opt out of it for fully dynamic data, it’s possible to define no-store or no-cache.
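For example, a request for data that changes on every user interaction could opt out of the cache like this (the endpoint below is just a placeholder):

// `no-store` tells Next.js to skip its server-side HTTP cache for this request.
const res = await fetch(`https://<some-api>/cart/${userId}`, { cache: 'no-store' });
const cart = await res.json();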

If a dynamic function is used (e.g., setting cookies or headers), the default will switch from force-cache to no-store!

Finally, to implement something more similar to Incremental Static Regeneration, you’ll need to use next.revalidate. The benefit is that instead of being defined for the entire route, revalidation only applies to the component the request is part of.

Migrating From Pages To App

Porting logic from the Pages directory to the App directory may look like a lot of work, but Next.js is prepared to let both architectures coexist, so migration can be done incrementally. Additionally, there is a very good migration guide in the documentation; I recommend reading it fully before jumping into a refactoring.

Guiding you through the migration path is beyond the scope of this article and would make it redundant to the docs. Alternatively, in order to add value on top of what the official documentation offers, I will try to provide insight into the friction points my experience suggests you will find.

The Case Of React Context

In order to provide all the benefits mentioned above in this article, RSC can’t be interactive, which means they don’t have hooks. Because of that, we have decided to push our client-side logic to the leaves of our rendering tree, as far down as possible; once you add interactivity, the children of that component will also be client-side.

In a few cases, pushing some components down will not be possible (especially if some key functionality depends on React Context, for example). Because most libraries are built to protect their users against prop drilling, many create context providers to pass data from the root to distant descendants. So ditching React Context entirely may cause some external libraries not to work well.

As a temporary solution, there is an escape hatch: a client-side wrapper for our providers.

// /providers.tsx
'use client'

import { type ReactNode, createContext } from 'react';

const SomeContext = createContext();

export default function ThemeProvider({ children }: { children: ReactNode }) {
  return (
    <SomeContext.Provider value="data">
      {children}
    </SomeContext.Provider>
  );
}

And so the layout component will not complain about skipping a client component from rendering.

// app/.../layout.tsx
import { type ReactNode } from 'react';
import Providers from './providers';

export default function Layout({ children }: { children: ReactNode }) {
  return (
    <Providers>{children}</Providers>
  );
}

It is important to realize that once you do this, the entire branch becomes client-side rendered. Everything within the <Providers> component will no longer be rendered on the server, so use this approach only as a last resort.

TypeScript And Async React Elements

When using async/await outside of Layouts and Pages, TypeScript will yield an error because the returned value doesn’t match the type its JSX definitions expect. It is supported and will still work at runtime, but according to the Next.js documentation, this needs to be fixed upstream in TypeScript.

For now, the solution is to add a comment on the line above the component: {/* @ts-expect-error Server Component */}.
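Reusing the <TodoList> example from earlier, the workaround looks like this:

<Suspense fallback={<div>Fetching todos...</div>}>
  {/* @ts-expect-error Server Component */}
  <TodoList userId={userId} />
</Suspense>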

Client-side Fetch In The Works

Historically, Next.js has not had a built-in data mutation story. Requests fired from the client side were left to the developer’s discretion to figure out. With React Server Components, this is bound to change; the React team is working on a use hook which will accept a Promise, handle it, and return the value directly.

In the future, this will supplant most bad cases of useEffect in the wild (more on that in the excellent talk “Goodbye UseEffect”) and possibly become the standard for handling asynchronicity (fetching included) in client-side React.
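A rough sketch of how that might look in a client component, mirroring the streaming <TodoList> example from earlier (the hook was still experimental at the time of writing, so treat this as an approximation):

'use client'

import { use } from 'react';

// Receives an in-flight Promise (for example, one started in a Server Component)
// and suspends until it resolves, then renders the result.
export default function TodoList({ todosPromise }) {
  const todos = use(todosPromise);

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}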

For the time being, it is still recommended to rely on libraries like React-Query and SWR for your client-side fetching needs. Be especially aware of the fetch behavior, though!

So, Is It Ready?

Experimenting is at the essence of moving forward, and we can’t make a nice omelet without breaking eggs. I hope this article has helped you answer this question for your own specific use case.

If on a greenfield project, I’d possibly take the App directory for a spin and keep the Pages directory as a fallback or for the functionality that is critical for business. If refactoring, it would depend on how much client-side fetching I have. A few requests: do it; many: probably wait for the full story.

Let me know your thoughts on Twitter or in the comments below.

Further Reading On SmashingMag

Easy SVG Customization And Animation: A Practical Guide

Scalable Vector Graphics (SVG) have been a staple in Web Development for quite some time, and for a good reason. They can be scaled up or down without loss of quality due to their vector properties. They can be compressed and optimized due to the XML format. They can also be easily edited, styled, animated, and changed programmatically.

At the end of the day, SVG is a markup language. And just as we can use CSS and JavaScript to enhance our HTML, we can use them on SVGs in the same way. We could add character and flourishes to our graphic elements, add interactions, and shape truly delightful and memorable user experiences. This optional but crucial detail is often overlooked when building projects, so SVGs end up somewhat underutilized beyond their basic graphical use cases.

How can we even utilize SVGs beyond just using them statically in our projects?

Take the “The State of CSS 2021” landing page, for example. This SVG Logo has been beautifully designed and animated by Christopher Kirk-Nielsen. Although this logo would have looked alright just as a static image, it wouldn’t have had as much of an impact and drawn attention without this intricate animation.

Let’s go even further — SVG, HTML, CSS, and JavaScript can be combined and used to create delightful, interactive, and stunning projects. Check out Sarah Drasner’s incredible work. She has also written a book and has a video course on the topic.

Let’s add it to our HTML and create a simple button component.

<button type="button">
  <svg width="24" height="24" viewBox="0 0 80 80" fill="none" xmlns="http://www.w3.org/2000/svg" aria-hidden="true"><path d="..." fill="#C2CCDE" /></svg>
  Add to favorites
</button>

Our button already has some background and text color styles applied to it so let’s see what happens when we add our SVG star icon to it.

Our SVG icon has a fill property applied to it, more specifically, a fill="#C2CCDE" on the SVG’s path element. This icon could have come from an SVG library or even been exported from a design file, so it makes sense for a color to be exported alongside other graphical properties.

SVG elements can be targeted by CSS like any HTML element, so developers usually reach for the CSS and override the fill color.

.button svg * {
  fill: var(--color-text);
}

However, this is not an ideal solution as this is a greedy selector, and overriding the fill attribute on all elements can have unintended consequences, depending on the SVG markup. Also, fill is not the only property that affects the element’s color.

Let’s showcase this downside by creating a new button and adding a Google logo icon. Its SVG markup is a bit more complex than our star icon, as it has multiple path elements. Not all SVG elements have to be visible; there are cases when we want to use them in different ways (as a clipping region, for example), but we won’t go into that here. Just keep in mind that greedy selectors that target SVG elements and override their fill properties can produce unexpected results.

 <svg aria-hidden="true" xmlns="http://www.w3.org/2000/svg" height="24" viewBox="0 0 24 24" width="24">
  <path d="..." fill="#4285F4" />
  <path d="..." fill="#34A853" />
  <path d="..." fill="#FBBC05" />
  <path d="..." fill="#EA4335" />
  <path d="..." fill="none" />
 </svg>

We can look at the issue from a different perspective. Instead of looking for a silver bullet CSS solution, we can simply edit our SVG. We already know that the fill property affects the SVG element’s color so let’s see what we can do to make our icons more customizable.

Let’s use a very underutilized CSS value: currentColor. I’ve talked about this awesome value in one of my previous articles.

Often referred to as “the first CSS variable,” currentColor is a value equal to the element’s color property. It can be used to assign a value equal to the value of the color property to any CSS property which accepts a color value. It forces a CSS property to inherit the value of the color property.
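Applying this to our star icon simply means swapping the hardcoded fill for currentColor, so the icon inherits whatever color the button text uses (and follows along on hover, focus, and other state changes):

<button type="button">
  <svg width="24" height="24" viewBox="0 0 80 80" fill="none" xmlns="http://www.w3.org/2000/svg" aria-hidden="true"><path d="..." fill="currentColor" /></svg>
  Add to favorites
</button>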

If you are looking for more, CSS-Tricks keeps a comprehensive list of various SVG optimization tools with plenty of information and articles on the topic.

Using SVGs With Popular JavaScript-Based Frameworks

Many popular JavaScript frameworks like React have fully integrated SVG in their toolchains to make the developer experience easier. In React, this could be as simple as importing the SVG as a component, and the toolkit would do all the heavy lifting optimizing it.

import React from 'react';
import {ReactComponent as ReactLogo} from './logo.svg';

const App = () => {
  return (
    <div className="App">
      <ReactLogo />
    </div>
  );
}
export default App;

However, as Jason Miller and many other developers have noted, including the SVG markup in JSX bloats the JavaScript bundle and makes the SVG less performant as a result. Instead of just having the browser parse and render an SVG, with JSX, we have expensive extra steps added to the browser. Remember, JavaScript is the most expensive Web resource, and by injecting SVG markup into JSX, we’ve made SVG as expensive as well.

One solution would be to create SVG symbol objects and include them with SVG use. That way, we’ll be defining the SVG icon library in HTML, and we can instantiate it and customize it in React as much as we need to.

<!-- Definition -->
<svg viewBox="0 0 128 128" xmlns="http://www.w3.org/2000/svg">
  <symbol id="myIcon" width="24" height="24" viewBox="0 0 24 24">
      <!-- ... -->
  </symbol>
  <!-- ... -->
</svg>

<!-- Usage -->
<svg viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
  <use href="#myIcon" />
</svg>
Breathing Life Into SVGs

Animating SVGs can be easy and fun. It takes just a few minutes to create some simple and effective animations and interactions. If you are unsure which animation would be ideal for a graphic, or whether you should animate it at all, it’s best to consult with the designer. You can even look for similar examples and use cases on Dribbble or other similar websites.

It’s also important to keep in mind that animations should be tasteful, add to the overall user experience, and serve some purpose (draw the user’s attention, for example).

We’ll cover various use cases that you might encounter on your projects. Let’s start with a really sweet example.

Animating A Cookie Banner

Some years ago, I was working on a project where a designer made an adorable cookie graphic for an unobtrusive cookie consent popup to make the element more prominent. This cookie graphic was whimsical and a bit different from the general design of the website.

I’ve created the element and added the graphic, but when looking at the page as a whole, it felt kind of lifeless, and it didn’t stand out as much as we thought it would. The user needed to accept cookies, as the majority of the website’s functionality depended on them. We wanted to create an unobtrusive banner that doesn’t block user navigation from the outset, so I decided to animate it to make it more prominent and add a bit of flourish and character.

I’ve decided to create three animations that’ll be applied to the cookie SVG:

  • Quick and snappy rolling fade-in entry animation;
  • Repeated wiggle animation with a good amount of delay in between;
  • Repeating and subtle eye sparkle animation.

Here’s the final result of the element that we’ll be creating. We’ll cover each animation step by step.

Let’s store it in a CSS variable so that we can reuse it for the repeatable wiggle movement animation.

--transition-bounce: cubic-bezier(0.2, 0.7, 0.4, 1.65);
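The enter keyframes referenced below aren’t shown in this excerpt; a minimal sketch of a rolling fade-in entry could look something like this (the exact values are an assumption; only the resting rotation matches the wiggle keyframes further down):

/* Assumed keyframes for the rolling fade-in entry; tweak the values to taste. */
@keyframes enter {
  from {
    opacity: 0;
    transform: translate3d(-100%, 0, 0) rotateZ(-180deg);
  }

  to {
    opacity: 1;
    transform: translate3d(0, 0, 0) rotateZ(17deg);
  }
}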

Let’s put everything together, set a duration value and fill-mode, and add the animation to our svg element.

/* Our SVG element */
.cookie-notice__graphic {
  opacity: 0; /* Should not be visible at the start */
  animation: enter 0.8s var(--transition-bounce) forwards;
}

Let’s check out what we’ve created. It already looks really nice. Notice how the bouncing easing function made a lot of difference to the overall look and feel of the whole element.

@keyframes wiggle {
  /* Stands still */
  0% {
    transform: translate3d(0, 0, 0) rotateZ(17deg);
  }
  /* Starts moving */
  45% {
    transform: translate3d(0, 0, 0) rotateZ(17deg);
  }

  /* Pulls back */
  50% {
    transform: translate3d(-10%, 0, 0) rotateZ(8deg);
  }

  /* Moves forward */
  55% {
    transform: translate3d(6%, 0, 0) rotateZ(24deg);
  }

  /* Returns to starting position */
  60% {
    transform: translate3d(0, 0, 0) rotateZ(17deg);
  }

  /* Stands still */
  100% {
    transform: translate3d(0, 0, 0) rotateZ(17deg);
  }
}
/* Our SVG element */
.cookie-notice__graphic {
  opacity: 0;
  animation: enter 0.8s var(--transition-bounce) forwards,
    wiggle 6s 3s var(--transition-bounce) infinite;
}

SVG elements can have a CSS class attribute, so we’ll use that to target them. Let’s add the class attribute to the two path elements that we identified.

<!-- ... -->
<path fill="#351f17" d="..." />
<path class="cookie__eye" fill="#fff" d="..." />
<path fill="#351f17" d="..." />
<path class="cookie__eye" fill="#fff" d="..." />
<!-- ... -->

We want to make cookie’s eyes sparkle. I got this idea from a music video for a song by Devin Townsend. You can see the animation play at the 5-minute mark. It just goes to show how you can find ideas pretty much anywhere.

Let’s just change the scale and opacity. Notice how so far, we’ve relied only on those two attributes for all three animations, which are quite different from each other.

@keyframes sparkle {
  from {
    opacity: 0.95;
    transform: scale(0.95);
  }
  to {
    opacity: 1;
    transform: scale(1);
  }
}

We want this animation to repeat without delay. It should be subtle enough to blend in nicely with the graphic and the overall element and not obtrusive for the user. As for the easing function, we’ll do something different. We’ll use staircase functions to achieve that quick and snappy transition between the two animation states (our from and to values).

We need to be careful here. Transform origin is going to be set relative to the parent SVG element’s viewbox and not the element itself. So if we set transform-origin: center center, the transformation will use the center coordinates of the parent SVG and not the path element. We can easily fix that by setting a transform-box property to fill-box.

The nearest SVG viewport is used as the reference box. If a viewBox attribute is specified for the SVG viewport creating element, the reference box is positioned at the origin of the coordinate system established by the viewBox attribute, and the dimension of the reference box is set to the width and height values of the viewBox attribute.
.cookie__eye {
  animation: sparkle 0.15s 1s steps(2, jump-none) infinite alternate;
  transform-box: fill-box;
  transform-origin: center center;
}

Last but not least, let’s respect the user’s accessibility preferences and turn off all animations if they have it set.

@media (prefers-reduced-motion: reduce) {
  *,
  ::before,
  ::after {
    animation-delay: -1ms !important;
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    background-attachment: initial !important;
    scroll-behavior: auto !important;
    transition-duration: 0s !important;
    transition-delay: 0s !important;
  }
}

Here is the final result. Feel free to play around with the demo and experiment with keyframe values and easing values to change the look and feel of the animation.

Let’s take a closer look at the SVG we’ll be working with. It consists of a few dozen circle elements.

<!-- ... -->
<circle cx="103.5" cy="34.5" r="11.3"></circle>
<circle cx="172.5" cy="34.5" r="15.7"></circle>
<circle cx="310.5" cy="34.5" r="24.6"></circle>
<circle cx="517.5" cy="34.5" r="34.5"></circle>
<circle cx="586.5" cy="34.5" r="34.5"></circle>
<circle cx="655.5" cy="34.5" r="33.4"></circle>
<!-- ... -->

Let’s start by adding a bit of opacity to our background and making it more chaotic. When we apply CSS transforms to elements inside SVG, they are transformed relative to the SVG’s main viewbox. That is why we’re getting a slightly chaotic displacement when applying a scale transform. We’ll use that to our advantage and not change the reference box.

To make things a little bit easier for us, we’ll use SASS. If you are unfamiliar with SASS and SCSS, you can view compiled CSS in CodePen below.

svg circle {
  opacity: 0.85;

  &:nth-child(2n) {
    transform: scale3d(0.75, 0.75, 0.75);
    opacity: 0.3;
  }
}

With that in mind, let’s add some keyframes. We’ll use two sets of keyframes that we’ll apply randomly to our circle elements. Once again, we’ll leverage the scale transform displacement and change the opacity value.

@keyframes a {
  0% {
    opacity: 0.8;
    transform: scale3d(1, 1, 1);
  }
  100% {
    opacity: 0.3;
    transform: scale3d(0.75, 0.75, 0.75);
  }
}

@keyframes b {
  0% {
    transform: scale3d(0.75, 0.75, 0.75);
    opacity: 0.3;
  }
  100% {
    opacity: 0.8;
    transform: scale3d(1, 1, 1);
  }
}

Now, let’s use quite a few :nth-child selectors. Every odd child will use the a keyframes, while every even circle will use the b keyframes. We’ll also use :nth-child selectors to play around with animation-duration and animation-delay values.

svg circle {
  opacity: 0.85;
  animation: a 10s cubic-bezier(0.45,0.05,0.55,0.95) alternate infinite;

  &:nth-child(2n) {
    transform: scale3d(0.75, 0.75, 0.75);
    opacity: 0.3;

    animation-name: b;
    animation-duration: 6s;
    animation-delay: 0.5s;
  }

  &:nth-child(3n) {
    animation-duration: 4s;
    animation-delay: 0.25s;
  }

  /* ... */
}

And, once again, just by playing around with opacity values and CSS transforms on our SVG and playing around with child selectors and animation parameters, we’ve managed to create a more interesting background for our hero container.

Here is the markup for our circle SVG.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"><circle cx="50" cy="50" r="50" fill-opacity=".03"/></svg>

When we convert it to base64, we get this handy CSS background-image snippet. Be careful not to inline too much data with base64, though, so stylesheets can still be downloaded and parsed quickly:

background-image: url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxMDAgMTAwIj48Y2lyY2xlIGN4PSI1MCIgY3k9IjUwIiByPSI1MCIgZmlsbC1vcGFjaXR5PSIuMDMiLz48L3N2Zz4=);

We can then apply a simple animation that offsets the background-position by the background-size value and get this neat background animation.

.wrapper {
  animation: move-background 3.5s linear infinite;
  background-image: url(data:image/svg+xml;base64,...);
  background-size: 96px;
  background-color: #16a757;
  /* ... */
}

@keyframes move-background {
  from {
    background-position: 0 0;
  }

  to {
    background-position: 96px 0;
  }
}

Our example looks more interesting with this subtle moving animation going on in the background. Remember to respect users’ accessibility preferences and turn off the animations if they have a preference set.

Before diving into the animation, we need to cover two SVG properties that we’ll be using: stroke-dasharray and stroke-dashoffset. They’re integral for pulling off this animation.

Stroke can be converted to dashes with a certain length using a stroke-dasharray property.

And we can offset the positions of those strokes by a certain amount using the stroke-dashoffset property.

So, what does this have to do with our drawing and erasing animation? Imagine what would happen if we had a dash that covers the whole stroke length and offset it by the same value. In that case, the starting point of the stroke would be way past its ending point, and we wouldn’t see it.

svg path {
  stroke-linecap: round;
  stroke-linejoin: round;
  stroke-dasharray: 800;  /* Dash covering the whole stroke */
  stroke-dashoffset: 800; /* Offset it to make it invisible */
}

If we animate the offset value from that value back to 0, the stroke slowly becomes visible, as if it were drawing itself.

svg path {
  /* ... */
  animation: draw 6s linear infinite;
}

@keyframes draw{
  to {
    stroke-dashoffset: 0; /* Reduce offset to make it visible */
  }
}

If we continue to animate the offset value from 0 to a negative value, we’d get the erasing effect.

svg path {
  /* ... */
  animation: drawAndErase 6s linear infinite;
}

@keyframes drawAndErase {
  to {
    stroke-dashoffset: -800;
  }
}

You’re probably wondering where the magical 800 pixel value came from. This value depends on the SVG and the length of the dash needed to cover the whole stroke length. It can be easily guessed, but Chris Coyier has a handy function that can do it for you. However, depending on the stroke properties and SVG shape, this function might not always return an ideal value, but it can guide you closer to it.
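The measurement itself can also be done directly in the browser console with the DOM’s getTotalLength() method to find a sensible starting value:

// Logs the total length of the path; round it up and use it for
// both stroke-dasharray and stroke-dashoffset.
const path = document.querySelector('svg path');
console.log(path.getTotalLength());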

Check out the complete demo and feel free to play around with values to see how the stroke properties affect the animation. If you are looking for more examples, CodyHouse has covered a fun-looking button animation using the same trick.

Let’s start by adding the mouse-tracking eye animation. We’ll skip manually implementing this feature in JavaScript and use a handy library called watching-you.

Using the browser’s inspect element tool, we’ll find the target elements inside the SVG and add the eye-left and eye-right CSS classes to these elements, respectively.

<ellipse class="cls-5 eye eye-left" cx="245.15133" cy="134.57033" rx="5.31264" ry="8.61816" transform="translate(-33.47349 110.5587) rotate(-23.83807)" />
<ellipse class="cls-4 eye eye-right" cx="284.42686" cy="116.68559" rx="5.31264" ry="8.61816" transform="translate(-22.89477 124.9063) rotate(-23.83807)" />

We’ll configure the library and make it target the classes that we’ve added.

const optionsLeft = { power: 4, rotatable: false };
const watcherLeft = new WatchingYou(".eye-left", optionsLeft);
watcherLeft.start();

const optionsRight = { power: 3, rotatable: false };
const watcherRight = new WatchingYou(".eye-right", optionsRight);
watcherRight.start();

We also need to remember to apply the transform-box property, so our eyes move around the center.

.eye {
  transform-box: fill-box;
  transform-origin: center center;
}

Let’s check out what we’ve got. With just a few lines of code and a tiny JavaScript library to do the heavy lifting, we’ve made the SVG element respond to the mouse position. Now that’s amazing, isn’t it?

Bowtie and hat animation will be created in a very similar way. Let’s start with a hat and find it using the browser’s inspect element tool. The hat graphic consists of two path elements, so let’s group them.

<g class="hat">
  <path class="cls-6" d="..." />
  <path class="cls-9" d="..." />
</g>

We’ll apply the same transform-box property and add a hat--active class that will run the animation when applied.

.hat {
  transform-box: fill-box;
  transform-origin: center bottom;
  cursor: pointer;
}

.hat--active {
  animation: hatJump 1s cubic-bezier(0, 0.7, 0.5, 1.25);
}

@keyframes hatJump {
  0% {
    transform: rotateZ(0) translateY(0);
  }

  50% {
    transform: rotateZ(-10deg) translateY(-50%);
  }

  100% {
    transform: rotateZ(0) translateY(0);
  }
}

Finally, let’s set up a click event listener that applies an active class to the element and then removes it after the animation has finished running.

const hat = document.querySelector(".hat");

hat.addEventListener("click", function () {
  if (hat.classList.contains("hat--active")) {
    return;
  }
  // Add the active class.
  hat.classList.add("hat--active");

  // Remove the active class after 1.2s.
  setTimeout(function () {
    hat.classList.remove("hat--active");
  }, 1200);
});

We use the same trick with the bowtie element, only applying a different animation and class. Feel free to check out the CodePen demo for more details.

Let’s move on to the coffee machine. Notice we don’t have any SVG element acting as the coffee stream in our SVG, so we’ll need to add it ourselves. You should feel comfortable editing SVG markup, and we don’t even have to break a sweat here. Let’s make it easy for ourselves: find and copy the coffee machine’s pipe rectangle, which is similar to the coffee stream shape we want to have. We just have to change the color to brown and slightly adjust the dimensions.

<!-- Pipe -->
<rect class="cls-12" x="137.81171" y="243.99883" width="6.21967" height="12.29272" transform="translate(281.84309 500.29037) rotate(-180)" />

<!-- Copied and adjusted Pipe rect to act as a coffee -->
<rect class="coffee" x="139" y="243.99883" width="4" height="12.29272" transform="translate(281.84309 500.29037) rotate(-180)" fill="brown" />

Like in the previous examples, let’s add active classes and their respective animation keyframes. We’ll compose the two animations and play around with duration and delay.

.lever, .coffee {
  transform-box: fill-box;
  transform-origin: center bottom;
}

.lever {   
  cursor: pointer; 
}

.lever--active {
  animation: leverPush 2.5s linear;
}

@keyframes leverPush {
  0% {
    transform: translateY(0);
  }
  8% {
    transform: translateY(50%);
  }
  90% {
    transform: translateY(50%);
  }
  100% {
    transform: translateY(0);
  }
}

.coffee--active {
  animation: coffeeStream 2.4s 0.1s ease-out forwards;
}

@keyframes coffeeStream {
  0% {
    transform: translateY(0);
  }
  5% {
    transform: translateY(50%);
  }
  95% {
    transform: translateY(50%);
  }
  100% {
    transform: translateY(150%);
  }
}

Let’s apply the active classes on click and remove them after the animation has finished running. And that’s it!

const lever = document.querySelector(".lever");
const coffee = document.querySelector(".coffee");

lever.addEventListener("click", function () {
  if (lever.classList.contains("lever--active")) {
    return;
  }

  lever.classList.add("lever--active");
  coffee.classList.add("coffee--active");

  setTimeout(function () {
    lever.classList.remove("lever--active");
    coffee.classList.remove("coffee--active")
  }, 2500);
});

Check out the complete example below, and, as always, feel free to play around with the animations and experiment with other elements, like the speech bubble or making the coffee machine’s lights blink while coffee is pouring out. Have fun!

See the Pen Smashing cat interaction [forked] by Adrian Bece.

Conclusion

I hope that this article encourages you to play around and make some wonderful SVG animations and interactions and integrate this workflow into your day-to-day projects. We’ve used only a handful of tricks and CSS properties to create a whole variety of nice effects on the fly. With some extra time, knowledge, and effort, you can create some truly amazing and interactive graphics.

Feel free to reach out on Twitter and share your work. Happy to hear your thoughts and see what you come up with!

The Key To Good Component Design Is Selfishness

When developing a new feature, what determines whether an existing component will work or not? And when a component doesn’t work, what exactly does that mean?

Does the component functionally not do what it’s expected to do, like a tab system that doesn’t switch to the correct panel? Or is it too rigid to support the designed content, such as a button with an icon after the content instead of before it? Or perhaps it’s too pre-defined and structured to support a slight variant, like a modal that always had a header section, now requiring a variant without one?

Such is the life of a component. All too often, they’re built for a narrow objective, then hastily extended for minor one-off variations again and again until it no longer works. At that point, a new component is created, the technical debt grows, the onboarding learning curve becomes steeper, and the maintainability of the codebase is more challenging.

Is this simply the inevitable lifecycle of a component? Or can this situation be averted? And, most importantly, if it can be averted, how?

Selfishness. Or perhaps, self-interest. Better yet, maybe a little bit of both.

Far too often, components are far too considerate. Too considerate of one another and, especially, too considerate of their own content. In order to create components that scale with a product, the name of the game is self-interest bordering on selfishness — cold-hearted, narcissistic, the-world-revolves-around-me selfishness.

This article isn’t going to settle the centuries-old debate about the line between self-interest and selfishness. Frankly, I’m not qualified to take part in any philosophical debate. However, what this article is going to do is demonstrate that building selfish components is in the best interest of every other component, designer, developer, and person consuming your content. In fact, selfish components create so much good around them that you could almost say they’re selfless.

I don’t know 🤷‍♀️ Let’s look at some components and decide for ourselves.

Note: All code examples and demos in this article will be based on React and TypeScript. However, the concepts and patterns are framework agnostic.

The Consideration Iterations

Perhaps, the best way to demonstrate a considerate component is by walking through the lifecycle of one. We’ll be able to see how they start small and functional but become unwieldy once the design evolves. Each iteration backs the component further into a corner until the design and needs of the product outgrow the capabilities of the component itself.

Let’s consider the modest Button component. It’s deceptively complex and quite often trapped in the consideration pattern, and therefore, a great example of working through.

Iteration 1

While these sample designs are quite barebones, like not showing various :hover, :focus, and disabled states, they do showcase a simple button with two color themes.

At first glance, it’s possible the resulting Button component could be as barebones as the design.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  text: string;
  theme: 'primary' | 'secondary';
}
<Button
  onClick={someFunction}
  text="Add to cart"
  theme="primary"
/>

It’s possible, and perhaps even likely, that we’ve all seen a Button component like this. Maybe we’ve even made one like it ourselves. Some of the names may be different, but the props, or the API of the Button, are roughly the same.

In order to meet the requirements of the design, the Button defines props for the theme and text. This first iteration works and meets the current needs of both the design and the product.

However, the current needs of the design and product are rarely the final needs. When the next design iterations are created, the Add to cart button now requires an icon.

Iteration 2

After validating the UI of the product, it was decided that adding an icon to the Add to cart button would be beneficial. The designs explain, though, that not every button will include an icon.

Returning to our Button component, its props can be extended with an optional icon prop which maps to the name of an icon to conditionally render.

type ButtonProps = {
  theme: 'primary' | 'secondary';
  text: string;
  icon?: 'cart' | '...all-other-potential-icon-names';
}
<Button
  theme="primary"
  onClick={someFunction}
  text="Add to cart"
  icon="cart"
/>

Whew! Crisis averted.

With the new icon prop, the Button can now support variants with or without an icon. Of course, this assumes the icon will always be shown at the end of the text, which, to the surprise of nobody, is not the case when the next iteration is designed.

Iteration 3

The previous Button component implementation included the icon at the text’s end, but the new designs require an icon to optionally be placed at the start of the text. The single icon prop will no longer fit the needs of the latest design requirements.

There are a few different directions that can be taken to meet this new product requirement. Maybe an iconPosition prop can be added to the Button. But what if there comes a need to have an icon on both sides? Maybe our Button component can get ahead of this assumed requirement and make a few changes to the props.

The single icon prop will no longer fit the needs of the product, so it’s removed. In its place, two new props are introduced, iconAtStart and iconAtEnd.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
}

After refactoring the existing uses of Button in the codebase to use the new props, another crisis is averted. Now, the Button has some flexibility. It’s all hardcoded and wrapped in conditionals within the component itself, but surely, what the UI doesn’t know can’t hurt it.

Up until this point, the Button icons have always been the same color as the text. It seems reasonable and like a reliable default, but let’s throw a wrench into this well-oiled component by defining a variation with a contrasting color icon.

Iteration 4

In order to provide a sense of feedback, this confirmation UI stage was designed to be shown temporarily when an item has been added to the cart successfully.

Maybe this is a time when the development team chooses to push back against the product requirements. But despite the push, the decision is made to move forward with providing color flexibility to Button icons.

Again, multiple approaches can be taken for this. Maybe an iconClassName prop is passed into the Button to have greater control over the icon’s appearance. But there are other product development priorities, and instead, a quick fix is done.

As a result, an iconColor prop is added to the Button.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
}

With the quick fix in place, the Button icons can now be styled differently than the text. The UI can provide the designed confirmation, and the product can, once again, move forward.

Of course, as product requirements continue to grow and expand, so do their designs.

Iteration 5

With the latest designs, the Button must now be used with only an icon. This can be done in a few different approaches, yet again, but all of them require some amount of refactoring.

Perhaps a new IconButton component is created, duplicating all other button logic and styles into two places. Or maybe that logic and styles are centralized and shared across both components. However, in this example, the development team decides to keep all the variants in the same Button component.

Instead, the text prop is marked as optional. This could be as quick as marking it as optional in the props but could require additional refactoring if there’s any logic expecting the text to exist.

But then comes the question, if the Button is to have only an icon, which icon prop should be used? Neither iconAtStart nor iconAtEnd appropriately describes the Button. Ultimately, it’s decided to bring the original icon prop back and use it for the icon-only variant.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
  icon?: 'cart' | '...all-other-potential-icon-names';
  text?: string;
}

Now, the Button API is getting confusing. Maybe a few comments are left in the component to explain when and when not to use specific props, but the learning curve is growing steeper, and the potential for error is increasing.

For example, without adding great complexity to the ButtonProps type, there is no stopping a person from using the icon and text props at the same time. This could either break the UI or be resolved with greater conditional complexity within the Button component itself. Additionally, the icon prop can be used with either or both of the iconAtStart and iconAtEnd props as well. Again, this could either break the UI or be resolved with even more layers of conditionals within the component.
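To illustrate the “great complexity” it would take just to make icon and text mutually exclusive at the type level, something like the following discriminated union sketch would be required, and it only makes the API harder to read:

// Sketch only: making icon and text mutually exclusive via `never`.
type IconOnlyProps = {
  icon: 'cart' | '...all-other-potential-icon-names';
  text?: never;
};

type TextOnlyProps = {
  text: string;
  icon?: never;
};

type ButtonProps = (IconOnlyProps | TextOnlyProps) & {
  theme: 'primary' | 'secondary' | 'tertiary';
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
};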

Our beloved Button has become quite unmanageable at this point. Hopefully, the product has reached a point of stability where no new changes or requirements will ever happen again. Ever.

Iteration 6

So much for never having any more changes. 🤦

This next and final iteration of the Button is the proverbial straw that breaks the camel’s back. In the Add to cart button, if the current item is already in the cart, we want to show that quantity on the button. On the surface, this is a straightforward change of dynamically building the text prop string. But the component breaks down because the current item count requires a different font weight and an underline. Because the Button accepts only a plain text string and no other child elements, the component no longer works.

Would this design have broken the Button if this was the second iteration? Maybe not. The component and codebase were both much younger then. But the codebase has grown so much by this point that refactoring for this requirement is a mountain to climb.

This is when one of the following things will likely happen:

  1. Do a much larger refactor to move the Button away from a text prop to accepting children or accepting a component or markup as the text value.
  2. The Button is split into a separate AddToCart button with an even more rigid API specific to this one use case. This also either duplicates any button logic and styles into multiple places or extracts them into a centralized file to share everywhere.
  3. The Button is deprecated, and a ButtonNew component is created, splitting the codebase, introducing technical debt, and increasing the onboarding learning curve.

None of these outcomes is ideal.

So, where did the Button component go wrong?

Sharing Is Impairing

What is the responsibility of an HTML button element exactly? Narrowing down this answer will shine light onto the issues facing the previous Button component.

The responsibilities of the native HTML button element go no further than:

  1. Display, without opinion, whatever content is passed into it.
  2. Handle native functionality and attributes such as onClick and disabled.

Yes, each browser has its own version of how a button element may look and display content, but CSS resets are often used to strip those opinions away. As a result, the button element boils down to little more than a functional container for triggering events.

The onus of formatting any content within the button isn’t the responsibility of the button but of the content itself. The button shouldn’t care. The button should not share the responsibility for its content.

The core issue with considerate component design is that the component’s props define its content rather than the component itself.

In the previous Button component, the first major limitation was the text prop. From the first iteration, a limitation was placed on the content of the Button. While the text prop fit with the designs at that stage, it immediately deviated from the two core responsibilities of the native HTML button. It immediately forced the Button to be aware of and responsible for its content.

In the following iterations, the icon was introduced. While it seemed reasonable to bake a conditional icon into the Button, also doing so deviated from the core button responsibilities. Doing so limited the use cases of the component. In later iterations, the icon needed to be available in different positions, and the Button props were forced to expand to style the icon.

When the component is responsible for the content it displays, it needs an API that can accommodate all content variations. Eventually, that API will break down because the content will forever and always change.

Introducing The Me In Team

There’s an adage used in all team sports, “There’s no ‘I’ in a team.” While this mindset is noble, some of the greatest individual athletes have embodied other ideas.

Michael Jordan famously responded with his own perspective, “There’s an ‘I’ in win.” The late Kobe Bryant had a similar idea, “There’s an ‘M-E’ in [team].”

Our original Button component was a team player. It shared the responsibility of its content until it reached the point of deprecation. How could the Button have avoided such constraints by embodying a “M-E in team” attitude?

Me, Myself, And UI

When the component is responsible for the content it displays, it will break down because the content will forever and always change.

How would a selfish component design approach have changed our original Button?

Keeping the two core responsibilities of the HTML button element in mind, the structure of our Button component would have immediately been different.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  theme: 'primary' | 'secondary' | 'tertiary';
}
<Button
  onClick={someFunction}
  theme="primary"
>
  <span>Add to cart</span>
</Button>

By removing the original text prop in lieu of limitless children, the Button is able to align with its core responsibilities. The Button can now act as little more than a container for triggering events.

By moving the Button to its native approach of supporting child content, the various icon-related props are no longer required. An icon can now be rendered anywhere within the Button regardless of size and color. Perhaps the various icon-related props could be extracted into their own selfish Icon component.

<Button
  onClick={someFunction}
  theme="primary"
>
  <Icon name="cart" />
  <span>Add to cart</span>
</Button>

With the content-specific props removed from the Button, it can now do what all selfish characters do best, think about itself.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  size: 'sm' | 'md' | 'lg';
  theme: 'primary' | 'secondary' | 'tertiary';
  variant: 'ghost' | 'solid' | 'outline' | 'link'
}

With an API specific to itself and independent content, the Button is now a maintainable component. The self-interest props keep the learning curve minimal and intuitive while retaining great flexibility for various Button use cases.

Button icons can now be placed at either end of the content.

<Button
  onClick={someFunction}
  size="md"
  theme="primary"
  variant="solid"
>
  <Box display="flex" gap="2" alignItems="center">
    <span>Add to cart</span>
    <Icon name="cart" />
  </Box>
</Button>

Or, a Button could have only an icon.

<Button
  onClick={someFunction}
  size="sm"
  theme="secondary"
  variant="solid"
>
  <Icon name="cart" />
</Button>

However, a product may evolve over time, and selfish component design improves the ability to evolve along with it. Let’s go beyond the Button and into the cornerstones of selfish component design.

The Keys to Selfish Design

Much like when creating a fictional character, it’s best to show, not tell, the reader that they’re selfish. By reading about the character’s thoughts and actions, their personality and traits can be understood. Component design can take the same approach.

But how exactly do we show in a component’s design and use that it is selfish?

HTML Drives The Component Design

Many times, components are built as direct abstractions of native HTML elements like a button or img. When this is the case, let the native HTML element drive the design of the component.

Specifically, if the native HTML element accepts children, the abstracted component should as well. Every aspect of a component that deviates from its native element is something that must be learned anew.

When our original Button component deviated from the native behavior of the button element by not supporting child content, it not only became rigid but it required a mental model shift just to use the component.

There has been a lot of time and thought put into the structure and definitions of HTML elements. The wheel doesn’t need to be reinvented every time.

Children Fend For Themselves

If you’ve ever read Lord of the Flies, you know just how dangerous it can be when a group of children is forced to fend for themselves. However, in the case of selfish component design, we’ll be doing exactly that.

As shown in our original Button component, the more it tried to style its content, the more rigid and complicated it became. When we removed that responsibility, the component was able to do a lot more but with a lot less.

Many elements are little more than semantic containers. It’s not often we expect a section element to style its content. A button element is just a very specific type of semantic container. The same approach can apply when abstracting it to a component.

Components Are Singularly Focused

Think of selfish component design as arranging a bunch of terrible first dates. A component’s props are like the conversation that is entirely focused on them and their immediate responsibilities:

  • How do I look?
    Props need to feed the ego of the component. In our refactored Button example, we did this with props like size, theme, and variant.
  • What am I doing?
    A component should only be interested in what it, and it alone, is doing. Again, in our refactored Button component, we do this with the onClick prop. As far as the Button is concerned, if there’s another click event somewhere within its content, that’s the content’s problem. The Button does. not. care.
  • When and where am I going next?
    Any jet-setting traveler is quick to talk about their next destination. For components like modals, drawers, and tooltips, when and where they’re going is just as gravely important. Components like these are not always rendered in the DOM. This means that in addition to knowing how they look and what they do, they need to know when and where to be. In other words, this can be described with props like isShown and position.

Composition Is King

Some components, such as modals and drawers, can often contain different layout variations. For example, some modals will show a header bar while others do not. Some drawers may have a footer with a call to action. Others may have no footer at all.

Instead of defining each layout in a single Modal or Drawer component with conditional props like hasHeader or showFooter, break the single component into multiple composable child components.

<Modal>
  <Modal.CloseButton />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
</Modal>
<Drawer>
  <Drawer.Main> ... </Drawer.Main>
  <Drawer.Footer> ... </Drawer.Footer>
</Drawer>

By using component composition, each individual component can be as selfish as it wants to be and used only when and where it’s needed. This keeps the root component’s API clean and can move many props to their specific child component.

Let’s explore this and the other keys to selfish component design a bit more.

You’re So Vain, You Probably Think This Code Is About You

Perhaps the keys of selfish design make sense when looking back at the evolution of our Button component. Nevertheless, let’s apply them again to another commonly problematic component — the modal.

For this example, we have the benefit of foresight in the three different modal layouts. This will help steer the direction of our Modal while applying each key of selfish design along the way.

First, let’s go over our mental model and break down the layouts of each design.

In the Edit Profile modal, there are defined header, main and footer sections. There’s also a close button. In the Upload Successful modal, there’s a modified header with no close button and a hero-like image. The buttons in the footer are also stretched. Lastly, in the Friends modal, the close button returns, but now the content area is scrollable, and there’s no footer.

So, what did we learn?

We learned that the header, main and footer sections are interchangeable. They may or may not exist in any given view. We also learned that the close button functions independently and is not tied to any specific layout or section.

Because our Modal can be comprised of interchangeable layouts and arrangements, that’s our sign to take a composable child component approach. This will allow us to plug and play pieces into the Modal as needed.

This approach allows us to very narrowly define the responsibilities of our root Modal component.

Conditionally render with any combination of content layouts.

That’s it. So long as our Modal is just a conditionally-rendered container, it will never need to care about or be responsible for its content.

With the core responsibility of our Modal defined, and the composable child component approach decided, let’s break down each composable piece and its role.

  • <Modal>: This is the entry point of the entire Modal component. This container is responsible for when and where to render, how the modal looks, and what it does, like handling accessibility considerations.
  • <Modal.CloseButton />: An interchangeable Modal child component that can be included only when needed. This component will work similarly to our refactored Button component. It will be responsible for how it looks, where it’s shown, and what it does.
  • <Modal.Header>: The header section will be an abstraction of the native HTML header element. It will be little more than a semantic container for any content, like headings or images, to be shown.
  • <Modal.Main>: The main section will be an abstraction of the native HTML main element. It will be little more than a semantic container for any content.
  • <Modal.Footer>: The footer section will be an abstraction of the native HTML footer element. It will be little more than a semantic container for any content.

With each component and its role defined, we can start creating props to support those roles and responsibilities.

Modal

Earlier, we defined the barebones responsibility of the Modal: knowing when to conditionally render. This can be achieved using a prop like isShown. Whenever it’s true, the Modal and its content will render.

type ModalProps = {
  isShown: boolean;
}
<Modal isShown={showModal}>
  ...
</Modal>

Any styling and positioning can be done with CSS in the Modal component directly. There’s no need to create specific props at this time.

Modal.CloseButton

Given our previously refactored Button component, we know how the CloseButton should work. Heck, we can even use our Button to build our CloseButton component.

import { Button, ButtonProps } from 'components/Button';

export function CloseButton({ onClick, ...props }: ButtonProps) {
  return (
    <Button {...props} onClick={onClick} variant="ghost" theme="primary" />
  )
}
<Modal>
  <Modal.CloseButton onClick={closeModal} />
</Modal>

Modal.Header, Modal.Main, Modal.Footer

Each of the individual layout sections, Modal.Header, Modal.Main, and Modal.Footer, can take direction from their HTML equivalents, header, main, and footer. Each of these elements supports any variation of child content, and therefore, our components will do the same.

There are no special props needed. They serve only as semantic containers.
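A minimal implementation of one of these containers could be as small as the following sketch (the class name is only an assumed styling hook):

import { type ReactNode } from 'react';

// Nothing but a semantic container: it renders a header element
// and lets the content fend for itself.
export function ModalHeader({ children }: { children: ReactNode }) {
  return <header className="modal-header">{children}</header>;
}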

<Modal>
  <Modal.CloseButton onClick={closeModal} />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
  <Modal.Footer> ... </Modal.Footer>
</Modal>

With our Modal component and its child component defined, let’s see how they can be used interchangeably to create each of the three designs.

Note: The full markup and styles are not shown so as not to take away from the core takeaways.

Edit Profile Modal

In the Edit Profile modal, we use each of the Modal components. However, each is used only as a container that styles and positions itself. This is why we haven’t included a className prop for them. Any content styling should be handled by the content itself, not our container components.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>Edit Profile</h1>
  </Modal.Header>

  <Modal.Main>
    <div className="modal-avatar-selection-wrapper"> ... </div>
    <form className="modal-profile-form"> ... </form>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Cancel</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>

Upload Successful Modal

Like in the previous example, the Upload Successful modal uses its components as opinionless containers. The styling for the content is handled by the content itself. Perhaps this means the buttons could be stretched by the modal-button-wrapper class, or we could add a “how do I look?” prop, like isFullWidth, to the Button component for a wider or full-width size.

<Modal>
  <Modal.Header>
    <img src="..." alt="..." />
    <h1>Upload Successful</h1>
  </Modal.Header>

  <Modal.Main>
    <p> ... </p>
    <div className="modal-copy-upload-link-wrapper"> ... </div>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Skip</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>
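
If we went with the isFullWidth idea mentioned above, a minimal sketch might look like the following. The prop name comes from the text, but the simplified props and the button--full-width class are assumptions; the real Button would keep its existing variant and theme props.

import { ReactNode } from 'react';

type ButtonProps = {
  isFullWidth?: boolean;
  onClick?: () => void;
  children: ReactNode;
};

export function Button({ isFullWidth = false, onClick, children }: ButtonProps) {
  // isFullWidth describes how the Button itself looks, never its surroundings.
  return (
    <button onClick={onClick} className={isFullWidth ? 'button button--full-width' : 'button'}>
      {children}
    </button>
  );
}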

Friends Modal

Lastly, our Friends modal does away with the Modal.Footer section. Here, it may be tempting to define the overflow styles on Modal.Main, but that would extend the container’s responsibilities to its content. Instead, those styles belong in the modal-friends-wrapper class.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>AngusMcSix's Friends</h1>
  </Modal.Header>

  <Modal.Main>
    <div className="modal-friends-wrapper">
      <div className="modal-friends-friend-wrapper"> ... </div>
      <div className="modal-friends-friend-wrapper"> ... </div>
      <div className="modal-friends-friend-wrapper"> ... </div>
    </div>
  </Modal.Main>
</Modal>

With a selfishly designed Modal component, we can accommodate evolving and changing designs with flexible and tightly scoped components.

Next Modal Evolutions

Given all that we’ve covered, let’s throw around some hypotheticals regarding our Modal and how it may evolve. How would you approach these design variations?

A design requires a fullscreen modal. How would you adjust the Modal to accommodate a fullscreen variation?

Another design is for a 2-step registration process. How could the Modal accommodate this type of design and functionality?

Recap

Components are the workhorses of modern web development. Greater importance continues to be placed on component libraries, either standalone or as part of a design system. With how fast the web moves, having components that are accessible, stable, and resilient is absolutely critical.

Unfortunately, components are often built to do too much. They are built to inherit the responsibilities and concerns of their content and surroundings. Patterns that take on this level of consideration break down a little further with each iteration until a component no longer works. At that point, the codebase splits, more technical debt is introduced, and inconsistencies creep into the UI.

If we break a component down to its core responsibilities and build an API of props that only define those responsibilities, without consideration of content inside or around the component, we build components that can be resilient to change. This selfish approach to component design ensures a component is only responsible for itself and not its content. Treating components as little more than semantic containers means content can change or even move between containers without effect. The less considerate a component is about its content and its surroundings, the better for everybody — better for the content that will forever change, better for the consistency of the design and UI, which in turn is better for the people consuming that changing content, and lastly, better for the developers using the components.

The key to good component design is selfishness. Being a considerate team player is the responsibility of the developer.

Collective #747

Our Sponsor

Start speaking a new language in just three weeks with Babbel

Learning to speak a new language goes beyond just vocabulary: it’s about being able to hold a real-life conversation with a local, and understanding the culture and the people of each place. Consider Babbel your expert-led passport to learning, with 10-minute lessons that are so effective, many users feel confident speaking a new language in just three weeks. Supplement those with the podcasts, games, articles and live online classes for a well-rounded education in weeks.

Start learning a new language (and culture) today for up to 55% off

Conditional CSS

Ahmad Shadeed goes over a few CSS features that we use every day, and shows how conditional they are.

Read it


Why Not document.write()?

This article by Harry Roberts provides a comprehensive examination of the reasons why using the document.write() JavaScript method should be avoided.

Check it out

Loops and Repetition

In the latest edition of Offscreen Canvas, Daniel Velasquez examines the technique of using sine and cosine for looping.

Read it


3D in CSS

In this piece from Brad Woods’ Digital Garden collection, readers will learn the proper techniques for creating a 3D space and manipulating the translation and rotation of an element using CSS.

Read it

Stylized Low Poly

Bruno Simon utilizes 3D Coat to create stunning, low poly “stylized” and “hand-painted” models reminiscent of the popular game World of Warcraft.

Check it out