Vanilla JavaScript, Libraries, And The Quest For Stateful DOM Rendering

In his seminal piece “The Market For Lemons”, renowned web crank Alex Russell lays out the myriad failings of our industry, focusing on the disastrous consequences for end users. This indignation is entirely appropriate according to the bylaws of our medium.

Frameworks factor highly in that equation, yet there can also be good reasons for front-end developers to choose a framework, or a library for that matter: Dynamically updating web interfaces can be tricky in non-obvious ways. Let’s investigate by starting from the beginning and going back to first principles.

Markup Categories

Everything on the web starts with markup, i.e. HTML. Markup structures can roughly be divided into three categories:

  1. Static parts that always remain the same.
  2. Variable parts that are defined once upon instantiation.
  3. Variable parts that are updated dynamically at runtime.

For example, an article’s header might look like this:

<header>
  <h1>«Hello World»</h1>
  <small>«123» backlinks</small>
</header>

Variable parts are wrapped in «guillemets» here: “Hello World” is the respective title, which only changes between articles. The backlinks counter, however, might be continuously updated via client-side scripting; we’re ready to go viral in the blogosphere. Everything else remains identical across all our articles.

This article, then, focuses on the third category: Content that needs to be updated at runtime.

Color Browser

Imagine we’re building a simple color browser: A little widget to explore a pre-defined set of named colors, presented as a list that pairs a color swatch with the corresponding color value. Users should be able to search color names and toggle between hexadecimal color codes and Red, Green, and Blue (RGB) triplets. We can create an inert skeleton with just a little bit of HTML and CSS:

See the Pen Color Browser (inert) [forked] by FND.

Client-Side Rendering

We’ve grudgingly decided to employ client-side rendering for the interactive version. For our purposes here, it doesn’t matter whether this widget constitutes a complete application or merely a self-contained island embedded within an otherwise static or server-generated HTML document.

Given our predilection for vanilla JavaScript (cf. first principles and all), we start with the browser’s built-in DOM APIs:

function renderPalette(colors) {
  let items = [];
  for(let color of colors) {
    let item = document.createElement("li");
    items.push(item);

    let value = color.hex;
    makeElement("input", {
      parent: item,
      type: "color",
      value
    });
    makeElement("span", {
      parent: item,
      text: color.name
    });
    makeElement("code", {
      parent: item,
      text: value
    });
  }

  let list = document.createElement("ul");
  list.append(...items);
  return list;
}
Note:
The above relies on a small utility function for more concise element creation:
function makeElement(tag, { parent, children, text, ...attribs }) {
  let el = document.createElement(tag);

  if(text) {
    el.textContent = text;
  }

  for(let [name, value] of Object.entries(attribs)) {
    el.setAttribute(name, value);
  }

  if(children) {
    el.append(...children);
  }

  parent?.appendChild(el);
  return el;
}
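
For instance, a hypothetical one-off invocation might look like this; note that any option other than parent, children, and text simply becomes an HTML attribute:

let link = makeElement("a", {
  parent: document.body,
  href: "https://example.org",
  text: "Example" // becomes the element's text content
});
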
You might also have noticed a stylistic inconsistency: Within the items loop, newly created elements attach themselves to their container. Later on, we flip responsibilities, as the list container ingests child elements instead.

Voilà: renderPalette generates our list of colors. Let’s add a form for interactivity:

function renderControls() {
  return makeElement("form", {
    method: "dialog",
    children: [
      createField("search", "Search"),
      createField("checkbox", "RGB")
    ]
  });
}

The createField utility function encapsulates DOM structures required for input fields; it’s a little reusable markup component:

function createField(type, caption) {
  let children = [
    makeElement("span", { text: caption }),
    makeElement("input", { type })
  ];
  return makeElement("label", {
    children: type === "checkbox" ? children.reverse() : children
  });
}

Now, we just need to combine those pieces. Let’s wrap them in a custom element:

import { COLORS } from "./colors.js"; // an array of { name, hex, rgb } objects

customElements.define("color-browser", class ColorBrowser extends HTMLElement {
  colors = [...COLORS]; // local copy

  connectedCallback() {
    this.append(
      renderControls(),
      renderPalette(this.colors)
    );
  }
});

Henceforth, a <color-browser> element anywhere in our HTML will generate the entire user interface right there. (I like to think of it as a macro expanding in place.) This implementation is somewhat declarative [1], with DOM structures being created by composing a variety of straightforward markup generators, clearly delineated components, if you will.

[1] The most useful explanation of the differences between declarative and imperative programming I’ve come across focuses on readers. Unfortunately, that particular source escapes me, so I’m paraphrasing here: Declarative code portrays the what while imperative code describes the how. One consequence is that imperative code requires cognitive effort to sequentially step through the code’s instructions and build up a mental model of the respective result.

Interactivity

At this point, we’re merely recreating our inert skeleton; there’s no actual interactivity yet. Event handlers to the rescue:

class ColorBrowser extends HTMLElement {
  colors = [...COLORS];
  query = null;
  rgb = false;

  connectedCallback() {
    this.append(renderControls(), renderPalette(this.colors));
    this.addEventListener("input", this);
    this.addEventListener("change", this);
  }

  handleEvent(ev) {
    let el = ev.target;
    switch(ev.type) {
    case "change":
      if(el.type === "checkbox") {
        this.rgb = el.checked;
      }
      break;
    case "input":
      if(el.type === "search") {
        this.query = el.value.toLowerCase();
      }
      break;
    }
  }
}
Note:
handleEvent means we don’t have to worry about function binding. It also comes with various advantages. Other patterns are available.
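
One of those other patterns is binding instance methods in the constructor; here’s a quick sketch, using a hypothetical onInput method rather than the article’s implementation:

class ColorBrowser extends HTMLElement {
  constructor() {
    super();
    // fix `this` once so the method can be handed out as a listener
    this.onInput = this.onInput.bind(this);
  }

  connectedCallback() {
    this.addEventListener("input", this.onInput);
  }

  onInput(ev) {
    // `this` reliably refers to the element instance here
  }
}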

Whenever a field changes, we update the corresponding instance variable (sometimes called one-way data binding). Alas, changing this internal state [2] is not reflected anywhere in the UI so far.

[2] In your browser’s developer console, check document.querySelector("color-browser").query after entering a search term.

Note that this event handler is tightly coupled to renderControls internals because it expects a checkbox and search field, respectively. Thus, any corresponding changes to renderControls — perhaps switching to radio buttons for color representations — now need to take into account this other piece of code: action at a distance! Expanding this component’s contract to include field names could alleviate those concerns, as sketched below.
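
A minimal sketch of that expanded contract, assuming createField also sets a name attribute (the names "query" and "rgb" here are hypothetical):

function createField(type, caption, name) {
  let children = [
    makeElement("span", { text: caption }),
    makeElement("input", { type, name })
  ];
  // [rest as before]
}

The handler could then dispatch on field names instead of field types:

handleEvent(ev) {
  let el = ev.target;
  switch(el.name) {
  case "rgb":
    this.rgb = el.checked;
    break;
  case "query":
    this.query = el.value.toLowerCase();
    break;
  }
}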

We’re now faced with a choice between:

  1. Reaching into our previously created DOM to modify it, or
  2. Recreating it while incorporating a new state.

Rerendering

Since we’ve already defined our markup composition in one place, let’s start with the second option. We’ll simply rerun our markup generators, feeding them the current state.

class ColorBrowser extends HTMLElement {
  // [previous details omitted]

  connectedCallback() {
    this.#render();
    this.addEventListener("input", this);
    this.addEventListener("change", this);
  }

  handleEvent(ev) {
    // [previous details omitted]
    this.#render();
  }

  #render() {
    this.replaceChildren();
    this.append(renderControls(), renderPalette(this.colors));
  }
}

We’ve moved all rendering logic into a dedicated method [3], which we invoke not just once on startup but whenever the state changes.

[3] You might want to avoid private properties, especially if others might conceivably build upon your implementation.

Next, we can turn colors into a getter to only return entries matching the corresponding state, i.e. the user’s search query:

class ColorBrowser extends HTMLElement {
  query = null;
  rgb = false;

  // [previous details omitted]

  get colors() {
    let { query } = this;
    if(!query) {
      return [...COLORS];
    }

    return COLORS.filter(color => color.name.toLowerCase().includes(query));
  }
}
Note:
I’m partial to the bouncer pattern.
Toggling color representations is left as an exercise for the reader. You might pass this.rgb into renderPalette and then populate <code> with either color.hex or color.rgb, perhaps employing this utility:
// `value` is expected to be an "R,G,B" string, e.g. "0,128,255"
function formatRGB(value) {
  return value.split(",").
    map(num => num.toString().padStart(3, " ")).
    join(", ");
}
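
One possible solution to that exercise, assuming each color’s rgb property is an "R,G,B" string in the format formatRGB expects:

function renderPalette(colors, rgb = false) {
  // [loop as before, except for the <code> element:]
  makeElement("code", {
    parent: item,
    text: rgb ? formatRGB(color.rgb) : color.hex
  });
  // [list assembly as before]
}

ColorBrowser’s #render method would then pass the flag along:

this.append(renderControls(), renderPalette(this.colors, this.rgb));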

This now produces interesting (annoying, really) behavior:

See the Pen Color Browser (defective) [forked] by FND.

Entering a query seems impossible, as the input field loses focus and is left empty after each change. However, entering an uncommon character (e.g. “v”) makes it clear that something is happening: The list of colors does indeed change.

The reason is that our current do-it-yourself (DIY) approach is quite crude: #render erases and recreates the DOM wholesale with each change. Discarding existing DOM nodes also resets the corresponding state, including form fields’ value, focus, and scroll position. That’s no good!

Incremental Rendering

The previous section’s data-driven UI seemed like a nice idea: Markup structures are defined once and re-rendered at will, based on a data model cleanly representing the current state. Yet our component’s explicit state is clearly insufficient; we need to reconcile it with the browser’s implicit state while re-rendering.

Sure, we might attempt to make that implicit state explicit and incorporate it into our data model, like including a field’s value or checked properties. But that still leaves many things unaccounted for, including focus management, scroll position, and myriad details we probably haven’t even thought of (frequently, that means accessibility features). Before long, we’re effectively recreating the browser!
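
Even a naive attempt at preserving just the search field’s contents and focus hints at the bookkeeping involved. The following is a sketch, assuming the search field is the only focusable control we care about, not a complete solution:

class ColorBrowser extends HTMLElement {
  // [previous details omitted]

  #render() {
    // capture a sliver of implicit state before discarding the DOM...
    let hadFocus = document.activeElement?.type === "search";

    this.replaceChildren();
    this.append(renderControls(), renderPalette(this.colors));

    // ... and attempt to restore it afterwards
    if(hadFocus) {
      let field = this.querySelector("input[type=search]");
      field.value = this.query ?? ""; // the original casing is already lost
      field.focus(); // caret position and scroll offsets are lost, too
    }
  }
}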

We might instead try to identify which parts of the UI need updating and leave the rest of the DOM untouched. Unfortunately, that’s far from trivial, which is where libraries like React came into play more than a decade ago: On the surface, they provided a more declarative way to define DOM structures [4] (while also encouraging componentized composition, establishing a single source of truth for each individual UI pattern). Under the hood, such libraries introduced mechanisms [5] to provide granular, incremental DOM updates instead of recreating DOM trees from scratch — both to avoid state conflicts and to improve performance [6].

[4] In this context, that essentially means writing something that looks like HTML, which, depending on your belief system, is either essential or revolting. The state of HTML templating was somewhat dire back then and remains subpar in some environments.
[5] Nolan Lawson’s “Let’s learn how modern JavaScript frameworks work by building one” provides plenty of valuable insights on that topic. For even more details, lit-html’s developer documentation is worth studying.
[6] We’ve since learned that some of those mechanisms are actually ruinously expensive.

The bottom line: If we want to encapsulate markup definitions and then derive our UI from a variable data model, we kinda have to rely on a third-party library for reconciliation.

Actus Imperatus

At the other end of the spectrum, we might opt for surgical modifications. If we know what to target, our application code can reach into the DOM and modify only those parts that need updating.

Regrettably, though, that approach typically leads to calamitously tight coupling, with interrelated logic being spread all over the application while targeted routines inevitably violate components’ encapsulation. Things become even more complicated when we consider increasingly complex UI permutations (think edge cases, error reporting, and so on). Those are the very issues that the aforementioned libraries had hoped to eradicate.

In our color browser’s case, that would mean finding and hiding color entries that do not match the query, not to mention replacing the list with a substitute message if no matching entries remain. We’d also have to swap color representations in place. You can probably imagine how the resulting code would end up dissolving any separation of concerns, messing with elements that originally belonged exclusively to renderPalette.

class ColorBrowser extends HTMLElement {
  #list; // set in #render below
  // [previous details omitted]

  handleEvent(ev) {
    // [previous details omitted]

    for(let item of this.#list.children) {
      item.hidden = !item.textContent.toLowerCase().includes(this.query ?? "");
    }
    // `children` is an HTMLCollection, so we spread it into an array to filter
    if([...this.#list.children].filter(el => !el.hidden).length === 0) {
      // inject substitute message
    }
  }

  #render() {
    // [previous details omitted]

    this.#list = renderPalette(this.colors);
  }
}

As a wise man once said: That’s too much knowledge!

Things get even more perilous with form fields: Not only might we have to update a field’s specific state, but we would also need to know where to inject error messages. While reaching into renderPalette was bad enough, here we would have to pierce several layers: createField is a generic utility used by renderControls, which in turn is invoked by our top-level ColorBrowser.

If things get hairy even in this minimal example, imagine having a more complex application with even more layers and indirections. Keeping on top of all those interconnections becomes all but impossible. Such systems commonly devolve into a big ball of mud where nobody dares change anything for fear of inadvertently breaking stuff.

Conclusion

There appears to be a glaring omission in standardized browser APIs. Our preference for dependency-free vanilla JavaScript solutions is thwarted by the need to non-destructively update existing DOM structures. That’s assuming we value a declarative approach with inviolable encapsulation, otherwise known as “Modern Software Engineering: The Good Parts.”

As it currently stands, my personal opinion is that a small library like lit-html or Preact is often warranted, particularly when employed with replaceability in mind: A standardized API might still happen! Either way, adequate libraries have a light footprint and don’t typically present much of an encumbrance to end users, especially when combined with progressive enhancement.
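
For a taste of what such a library buys us, here is a minimal sketch of our palette expressed with lit-html’s declarative templates, reusing COLORS and formatRGB from above:

import { html, render } from "lit-html";

function palette(colors, rgb) {
  return html`<ul>
    ${colors.map(color => html`<li>
      <input type="color" value=${color.hex}>
      <span>${color.name}</span>
      <code>${rgb ? formatRGB(color.rgb) : color.hex}</code>
    </li>`)}
  </ul>`;
}

// invoking `render` again reconciles against the existing DOM instead of
// replacing it wholesale, so field values and focus survive updates
render(palette(COLORS, false), document.querySelector("color-browser"));

Wiring this up to our event handlers is, once again, left as an exercise.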

I don’t wanna leave you hanging, though, so I’ve tricked our vanilla JavaScript implementation into mostly doing what we expect it to:

See the Pen Color Browser [forked] by FND.

Building Components For Consumption, Not Complexity (Part 2)

Welcome back to my long read about building better components — components that are more likely to be found, understood, modified, and updated in ways that promote adoption rather than abandonment.

In the previous installment in the series, we took a good look through the process of building flexible and repeatable components, aligning with the FRAILS framework. In this second part, we will be jumping head first into building adoptable, indexable, logical, and specific components. We have many more words ahead of us.

Adoptable

According to Sparkbox’s 2022 design systems survey, the top three challenges faced by design systems teams were:

  1. Overcoming technical/creative debt,
  2. Parity between design & code,
  3. Adoption.

It’s safe to assume that points 1 and 2 are mostly due to tool limitations, siloed working arrangements, or poor organizational communication. There is no enterprise-ready design tool on the market that currently provides a robust enough code export for teams to automate the handover process. (Nor have I ever met an engineering team that would adopt such a feature!) Likewise, a tool won’t fix communication barriers or decades’ worth of forced silos between departments. This will likely change in the coming years, but I think that these points are an understandable constraint.

Point 3 is a concern, though. Is your brilliant design system adoptable? If we’re spending all this time working on design systems, why are people not using them effectively? Thinking through adoption challenges, I believe we can focus on three main points to make this process a lot smoother:

  1. Naming conventions,
  2. Community-building,
  3. (Over)communication.

Naming Conventions

There are too many ways to name components in our design tool, from camelCasing to kebab-casing, Slash/Naming/Conventions to the more descriptive, e.g., “Product Card — Cart”. Each approach has its pros and cons, but what we need to consider with our selection is how easy it is to find the component you need. Obvious, but this is central to any good name.

It’s tempting to map component naming 1:1 between design and code, but I personally don’t know whether this is what our goal should be. Designers and developers work in different ways and with different methods of searching for and implementing components, so we should cater to the audience. This would aid solutions based on intention, not blindly aiming for parity.

Figma can help bridge this gap with the “component description field” providing us with a useful space to add additional, searchable names (or aliases, even) to every component. This means that if we call it headerNavItemActive in code but a “Header link” in design with a toggled component property, the developer-friendly name can be added to the description field for searchable parity.

The same approach can be applied to styles as well.

There is a likelihood that your developers are working from a more tokenized set of semantic styles in code, whereas the design team may need less abstract styles for the ideation process. This delta can be tricky to navigate from a Figma perspective because we may end up in a world where we’re maintaining two or more sources of truth.

The advice here is to split the quick styles for ideation and semantic variables into different sets. The semantic styles can be applied at the component level, whereas the raw styles can be used for developing new ideas.

As an example, Brand/Primary may be used as the border color of an active menu item in your design files because searching “brand” and “primary” may be muscle memory and more familiar than a semantic token name. Within the component, though, we want to be aliasing that token to something more semantic. For example, border-active.

Note: Some teams go to a further component level with their naming conventions. For example, this may become header-nav-item-active. It’s hyper-specific, meaning that any use outside of this “Header link” example may not make sense for collaborators looking through the design file. Component-level tokens are an optional step in design systems. Be cautious, as introducing another layer to your token schema increases the amount of tokens you need to maintain.

This means that when we’re working on a new idea — say, a set of tabs on a settings page, where the active tab’s border color at the ideation stage uses Brand/Primary as the fill — once this component is contributed back to the system, we apply the correct semantic token for its usage: our border-active.
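
To make that aliasing concrete, here is a minimal sketch in code terms, with hypothetical hex values, mirroring the raw, semantic, and component-level layers described above:

// raw/ideation styles: what designers reach for while exploring
const raw = {
  "Brand/Primary": "#0B5FFF" // hypothetical value
};

// semantic tokens: what components actually reference
const semantic = {
  "border-active": raw["Brand/Primary"]
};

// optional component-level layer (see the note above)
const component = {
  "header-nav-item-active": semantic["border-active"]
};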

Do note that this advice is probably best suited to large design teams where your contribution process is lengthier and requires the distinct separation of ideation and production or where you work on a more fixed versioning release cycle for your system. For most teams, a single set of semantic variables will be all you need. Variables make this process a lot easier because we can manage the properties of these separate tokens in a central location. But! This isn’t an article about tokens, so let’s move on.

Community-building

A key pillar of a successful design system is advocacy across the PDE (product, design, and engineering) departments. We want people to be excited, not burdened by its rules. In order to get there, we need to build a community of internal design system advocates who champion the work being done and act as extensions of the central team. This may sound like unpaid support work, but I promise you it’s more than that.

Communicating constantly with designers taught me that with the popularity of design systems booming over the past few years, more and more of us are desperate to contribute to them. Have you ever seen a local component in a file that is remarkably similar to one that already exists? Maybe that designer wanted to scratch the itch of building something from the ground up. This is fine! We just need to encourage that more widely through a more open contribution model back to the central system.

How can the (central) systems team empower designers within the wider organization to build on top of the system foundations we create? What does that world look like for your team? This is commonly referred to as the “hub and spoke” model within design systems and can really help to accelerate interest in your system usage goals.

“There are numerous inflection points during the evolution of a design system. Many of those occur for the same fundamental reason — it is impossible to scale a design system team enough to directly support every demand from an enterprise-scale business. The design system team will always be a bottleneck unless a structure can be built that empowers business units and product teams to support themselves. The hub and spoke (sometimes also called ‘core + federated’) model is the solution.”

— Robin Cannon, “The hub and spoke design system model” (IBM)

In simple terms, a community can be anything as small as a shared Slack/Teams channel for the design system all the way up to fortnightly hangouts or learning sessions. What we do here is help to foster an environment where discussion and shared knowledge are at the center of the system rather than being tacked on after the components have been released.

The team at Zalando has developed a brilliant community within the design team for their system. This is in the form of a sophisticated web portal, frequent learning and educational meetings, and encouraging an “open house” mindset. Apart from the custom-built portal, I believe this approach is an easy-to-reach target for most teams, regardless of size. A starting point for this would be something as simple as an open monthly meeting or office hours, run by those managing your system, with invites sent out to all designers and cross-functional partners involved in production: product managers, developers, copywriters, product marketers, and the list goes on.

For those looking for inspiration on how to run semi-regular design systems events, take a look at what the GOV.UK team has started over on Eventbrite. They have run a series of events ranging from accessibility deep dives all the way up to full “design system days.”

Leading with transparency is a solid technique for placing the design system as close as possible to those who use it. It can help to shift the mindset from being a siloed part of the design process to feeding all parts of the production pipeline for all key partners, regardless of whether you build it or use it.

Back to advocacy! As we roll out this transparent and communicative approach to the system, we are well-placed to identify key allies across the product, design, and engineering team/teams that can help steward excellence within their own reach. Is there a product manager who loves picking apart the documentation on the system? Let’s help to position them as a trusted resource for documentation best practices! Or a developer that always manages to catch incorrect spacing token usage? How can we enable them to help others develop this critical eye during the linting process?

This is the right place to mention Design Lint, a Figma plugin that I can only highly recommend. Design Lint will loop through layers you’ve selected to help you find possibly missing styles. When you write custom lint rules, you can check for errors like color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and more.

Each of these advocates for the system, spread across departments within the business, will help to ensure consistency and quality in the work being produced.

(Over)communication

Closely linked to advocacy is the importance of regular, informative, and actionable communication. Examples of the various types of communication we might send are:

  • Changelog/release notes.
  • Upcoming work.
  • System survey results. (Example: “Design Maturity Results, Sep-2023,” UK Department for Education.)
  • Resource sharing. Found something cool? Share it!
  • Hiring updates.
  • Small wins.

That’s a lot! This is a good thing, as it means there is always something to share among the team to keep people close, engaged, and excited about the system. If your partners are struggling to see how important and central a design system is to the success of a product, this list should help push that conversation in the right direction.

I recommend trying to build a pattern of regularity with your communication to firstly build the habit of sharing and, secondly, to introduce formality and weight to the updates. You might also want to decide whether you look forward or backward with the updates, meaning at the start or end of a sprint if you work that way.

Or perhaps you can follow a pattern as the following one:

  • Changelog/release notes are sent on the final day of every sprint.
  • “What’s next?” is shared at the start of a sprint.
  • Cool resources are shared mid-sprint to help inspire the team (and to provide a break between focus work sessions).
  • Small wins are shared quarterly.
  • Survey results are shared at the start of every second quarter.
  • Hiring updates are shared as they come up.

Outside of the system, communication really does make or break the success of a project, so leading from the front ensures we’re doing everything we can.

Indexable

The biggest issue when building or maintaining a system is knowing how your components will be used (or not used). Of course, we will never know until we try it out (btw, this is also the best piece of design advice I’ve ever been given!), but we need to start somewhere.

Design systems should prioritize quality over speed. But product teams often work in “ship at all costs” mode, prioritizing speed over quality.

“What do you do when a product team needs a UI component, pattern, or feature that the design system team cannot provide in time or is not part of their scope?”

— Josh Clark, “Ship Faster by Building Design Systems Slower”

What this means is starting with real-world needs and problems. The likelihood when starting a system is that you will create all the form fields, then some navigational components, and maybe a few notification/alert/callout components (more on naming conventions later), and then publish your library, hoping the team will use those components.

The harsh reality is, though, the following:

  • Your team members aren’t aware of which components exist.
  • They don’t know what components are called yet.
  • There is no immediate understanding of how components are translated into code.
  • You’re building components without needing them yet.

As you continue to sprint on your system, you will realize over time that more and more design work (user flows, feature work) is being pushed over to your product managers or developers without adhering to the wonderful design system you’ve been crafting. Why is that? It’s because people can’t discover your components! (Are they easily indexable?)

This is where the importance of education and communication comes into play. Whether it’s from design to development, design to copywriting, product to design, or brand to product, there is always a little bit more communication that can happen to ease these tensions within teams. Design Ops as a profession is growing in popularity amongst larger organizations for this very purpose — to better foster and facilitate communication channels not only amongst disparate design teams but also cross-functionally.

Note: Design Ops refers to the practice of integrating the design team’s workflow into the company’s broader development context. In practical terms, this means the design ops role is responsible for planning and managing the design team’s work and making sure that designers are collaborating effectively with product and engineering teams throughout the development process.

Back to discoverability! That communication layer could be introduced in a few ways, depending on how your team is structured. Picking up the earlier example of a dedicated channel within Slack or Teams (or whichever messaging tool you use), we can maintain a centralized communication channel for this very specific job — components.

Within this channel, the person/s responsible for the system is encouraged to frequently post updates with as much context as is humanly possible.

For example:

  • What are you working on now?
  • What updates should we expect within the next day/week/month?
  • Who is working on what components?
  • How can the wider team support or contribute to this work?
  • Are there any blockers?

Starting with these questions and answers in a public forum will encourage wider communication and understanding around the system to ultimately force a wider adoption of what’s being worked on and when.

Secondly, within the tools themselves, we can be over-the-top communicative whilst we create. Making heavy use of the version history feature within Figma, we can add very intentional timestamps on activity, spelling out exactly what is happening, when, and by whom. Going into the weeds here to effectively use that section of the file as mini-documentation can allow your collaborators (even those without a paid license!) to get as close to the work as possible.

Additionally, if you are using a branch-based workflow for component management, we encourage you to use the branch descriptions as a way to achieve a similar result.

Note: If you are investigating a branch workflow within a large design organization, I recommend using them for smaller fixes or updates and for larger “major” releases to create new files. This will allow for a future world where one set of designers needs to work on v1, whereas others use v2.

Naming Conventions

Undoubtedly, the hardest part of design system work is naming things. What I call a dropdown, you may call a select, and someone else may call an option list. This makes it extremely difficult to align an entire team and encourage one way of naming anything.

However, there are techniques we can employ to ensure that we’re serving the largest possible number of our system’s users. Whether it’s using Figma features or working more closely with our development team, there is a world in which people can find the components they need when they need them.

I’m personally a big fan of prioritizing discoverability over complexity at every stage of design, from how we name our components to frames to entire files. What this means is that, more often than not, we’re better off introducing verbosity, rather than trying to make everything as concise as possible.

This is probably best served with an example!

What would you call this component?

  • Dropdown.
  • Popover.
  • Actions.
  • Modal.
  • Something else?

Of course, context is very important when naming anything, which is why the task is so hard. We are currently unaware of how this component will be used, so let’s introduce a little bit of context to the situation.

Has your answer changed? The way I look at this component is that, although the structure is quite generic — rounded card, inner list with icons — the usage is very specific. This is to be used on a search filter to provide the user with a set of actions that they can carry out on the results. You may:

  1. Import a predefined search query.
  2. Export your existing search query.
  3. Share your search query.

For this reason, why would we not call this something like search actions? This is a simplistic example (and doesn’t account for the many other areas of the product that this component could be used), but maybe that’s okay. As we build and mature our system, we will always hit walls where one component needs to — or can be — used in many other places. It’s at this time that we make decisions about scalability, not before we have usage.

Other options for this specific component could be:

  • Action list.
  • Search dropdown.
  • Search / Popover.
  • Filter menu.

Logical

Have you ever been in a situation where you searched for a component in the Figma Assets panel and not been sure of its purpose? Or have you been unsure of the customization possible within its settings? We all have!

I tend to find that this is the result of us (as design systems maintainers) optimizing for creation and not usage. This is so important, so I’ll say it again:

We tend to optimize for the people building the system, not for the people using it.

The consumers/users of a system will always far outweigh the people managing it. They will also be further away from the decisions that went into making the component and the reasons behind why it is built the way it is.

Here are a few hypothetical questions worth thinking through:

  • Why is this component called a navbar, and not a tab-bar?
  • Why does it have four tabs by default and not three, like the production app?
  • There’s only one navbar in the assets list, but we support many products. Where are the others?
  • How do I use the dark mode version of this component?
  • I need a tablet version of the table component. Should I modify this one, or do we have an alternative version ready to be used?

These may seem like familiar questions to you. And if not, congratulations, you’re doing a great job!

Figma makes it easy to build complexity into components, arguably too easy. I’m sure you’ve found yourself in a situation where you create a component set with too many permutations or ended up in a world where the properties applied to a component turn the component properties panel into what I like to call “prop soup.”

A good design system should be logical (usable). To me, usability means:

  1. Speed of discovery, and
  2. Efficient implementation of components.

The speed of discovery and the efficient implementation of components can — brace yourself! — sometimes mean repetition. That very much goes against our goals of a “don’t repeat yourself” system and will horrify those of you who yearn for a world in which consolidation is a core design system principle, but bear with me for a bit more.

The canvas is a place for ideation and flexibility and a place where we need to encourage the fostering of new ideas fast. What isn’t fast is a confused designer. As design system builders, we then need to work in a world where components are customizable but only after being understood. And what is not easily understandable is a component with an infinite number of customization options and a generic name. What is understandable is a compact, descriptive, and lightweight component.

Let’s take an example. Who doesn’t love… buttons? (I don’t, but this atomic example is the simplest way to communicate our problem.)

Here, we have a single button component set with:

  • Four intentions (primary, secondary, error, warning);
  • Two types (fill, stroke);
  • Three different sizes (large, medium, small);
  • And four states (default, hover, focus, inactive).

Even while listing those out, we can see a problem. The easy way to think this through is by asking yourself, “Is a designer likely to need all of these options when it comes to usage?”

With this example, it might look like the following question: “Will a designer ever need to switch between a primary button and a warning one?” Or are they actually two separate use cases and, therefore, two separate components?

To probably no one’s surprise, my preference is to split that component right down into its intended usage. That would then mean we have one variant for each component type:

  1. Primary,
  2. Secondary,
  3. Error (Destructive),
  4. Warning.

Four components for one button! Yes, that’s right, and there are two huge benefits if you decide to go this way:

  1. The Assets panel becomes easier to navigate, with each primary variant within each set being visually surfaced.
  2. The designer removes one decision from component usage: what type to use.

Let’s help set our (design) teams up for success by removing decisions! The word “design” was intentionally placed within brackets there because, as you’re probably rightly thinking, we lose parity with our coded components here. You know what? I think that’s totally fine. Documentation and component handover happen once with every component, and it doesn’t mean we need to sacrifice usability within the design to satisfy front-end framework composability. Documentation is still a vital part of a design system, and we can communicate component permutations in a method that meets design and development in the middle.

Auto Layout

Component usability is also heavily informed by the decision to use auto layout or not. It can be hard to grapple with, but my advice here is to go all in on using auto layout. Not only does it help to remove the need for eyeballing measurements within production designs, but it also helps remove the burden of spacing for non-design partners. If your copywriter needs to edit a line of text within a component, they can feel comfortable doing so with the knowledge that the surrounding content will flow and not “break” the design.

Note: Using padding and gap variables within main components can remove the “Is the spacing correct?” question from component composition.

Auto layout also provides us with some guardrails with regard to spacing and margins. We strive for consistency within systems, and using auto layout everywhere pushes us as far as possible in that direction.

Specific

We touched on this in the “usable” section, but naming conventions are so important for ensuring the discoverability and adoption of components within a system.

The more specific we can make components, the more likely they are to be used in the right place. Again, this may mean introducing inefficiencies within the system, but I strongly believe that efficiency is a long-term play and something we reach gradually over time. This means being incredibly inefficient in the short term and being okay with that!

Specific to me means calling a header a header, a filter a filter, and a search field a search field. Doesn’t it seem obvious? You’re right. It seems obvious, but if my Twitter “name that component” game has taught me anything, it’s that naming components is hard.

Let’s take our search field example.

  • Apple’s Human Interface Guidelines call it a “search field.”
  • Material Design calls it a “search bar.”
  • Microsoft Fluent 2 doesn’t have a search field. Instead, it has a “combobox” component with a typeahead search function.

Sure, the intentions may be different between a combobox and a search field or a search bar, but does your designer or developer know about these subtle nuances? Are they aware of the different use cases when searching for a component to use? Specificity here is the sharpest way for us to remove these questions and ensure efficiency within the system.

As I said before, this may mean that we end up performing inefficient activities within the system. For example, instead of bundling combobox and search into one component set with toggle-able settings, we should split them. This means searching for “search” in Figma would provide us with the only component we need, rather than having to think ahead if our combobox component can be customized to our needs (or not).

Conclusion

It was a long journey! I hope that throughout the past ten thousand words or so, you’ve managed to extract quite a few useful bits of information and advice, and you can now tackle your design systems within Figma in a way that increases the likelihood of adoption. As we know, this is right up there with the priorities of most design systems teams, and I firmly believe that following the principles laid out in this article will help you (as maintainers) sprint towards a path of more funding, more refined components, and happier team members.

And should you need some help or if you have questions, ask me in the comments below, or ping me on Twitter/Posts/Mastodon, and I’ll be more than happy to reply.

Further Reading

  • “Driving change with design systems and process,” by Matt Gottschalk and Aletheia Délivré (Config 2023)
    The conference talk explores in detail how small design teams can use design systems and design operations to help designers have the right environment for them.
  • Gestalt 2023 — Q2 newsletter
    In this article, you will learn about the design system roadmaps (from the Pinterest team).
  • Awesome Design Tokens
    A project that hosts a large collection of design token-related articles and links, such as GitHub repositories, articles, tools, Figma and Sketch plugins, and many other resources.
  • “The Ondark Virus” (D’Amato Design blog)
    An important article about naming conventions within design tokens.
  • “What is an API?” (Red Hat Help)
    This article will explain in detail how APIs (Application Programming Interfaces) work, what the SOAP vs. REST protocols are, and more.
  • “Responsive Web Design,” by Ethan Marcotte (A List Apart)
    This is an old (but gold) article that set the de-facto standards in responsive web design (RWD).
  • “Simple design system structure” (FigJam file by Luis Ouriach, CC-BY license)
    For when you need to get started!
  • “Fixed aspect ratio images with variants” (Figma file by Luis Ouriach, CC-BY license)
    Aspect ratios are hard with image fills, so the trick to making them work is to define your breakpoints and create variants for each image. As the image dimensions are fixed, you will have much more flexibility — you can drag the components into your designs and use auto layout.
  • Mitosis
    Write components once, run everywhere; compiles to React, Vue, Qwik, Solid, Angular, Svelte, and others.
  • “Create reusable components with Mitosis and Builder.io,” by Alex Merced
    A tutorial about Mitosis, a powerful tool that can compile code to standard JavaScript in addition to frameworks and libraries like Angular, React, and Vue, allowing you to create reusable components.
  • “VueJS — Component Slots” (Vue documentation)
    Components can accept properties (which can be JavaScript values of any type), but how about template content?
  • “Magic Numbers in CSS,” by Chris Coyier (CSS-Tricks)
    In CSS, magic numbers refer to values that work under some circumstances but are frail and prone to break when those circumstances change. The article will take a look at some examples so that you know what they are and how to avoid the issues related to their use.
  • “Figma component properties” (Figma, YouTube)
    In this quick video tip, you’ll learn what component properties are and how to create them.
  • “Create and manage component properties” (Figma Help)
    New to component properties? Learn how component properties work by exploring the different types, preferred values, and exposed nested instances.
  • “Using auto layout” (Figma Help)
    Master auto layout by exploring its properties, including resizing, direction, absolute position, and a few others.
  • “Add descriptions to styles, components, and variables” (Figma Help)
    There are a few ways to incorporate design system documentation in your Figma libraries. You can give styles, components, and variables meaningful names; you can add short descriptions to styles, components, and variables; you can add links to external documentation to components; and you can add descriptions to library updates.
  • “Design system components, recipes, and snowflakes,” by Brad Frost
    Creating things with a component-based mindset right from the start saves countless hours. Everything is/should be a component!
  • “What is digital asset management?” (IBM)
    A digital asset management solution provides a systematic approach to efficiently storing, organizing, managing, retrieving, and distributing an organization’s digital assets.
  • “Search fields (Components)” (Apple Developer)
    A search field lets people search a collection of content for specific terms they enter.
  • “Search — Components Overview” (Material Design 3)
    Search lets people enter a keyword or phrase to get relevant information.
  • “Combobox — Components” (Fluent 2)
    A combobox lets people choose one or more options from a list or enter text in a connected input; entering text will filter options or allow someone to submit a free-form answer.
  • “Pharos: JSTOR’s design system serving the intellectually curious” (JSTOR)
    Building a design system from the ground up — a detailed account written by the JSTOR team.
  • “Design systems are everybody’s business,” by Alex Nicholls (Director of Design at Workday)
    This is Part 1 in a three-part series that takes a deep dive into Workday’s experience of developing and releasing their design system out into the open. For the next parts, check Part II, “Productizing your design system,” and Part III, “The case for an open design system.”
  • “Design maturity results ’23” (UK Department for Education)
    The results of the design maturity survey carried out in the Department for Education (UK), September 2023.
  • “Design Guidance and Standards” (UK Department for Education)
    Design principles, guidance, and standards to support people who use the Department for Education services (UK).
  • Sparkbox’s Design Systems Survey, 2022 (5th edition)
    The top three challenges faced by design teams were overcoming technical/creative debt, parity between design & code, and adoption. This article reviews the survey results in detail; 183 respondents maintaining design systems took part.
  • “The hub and spoke design system model,” by Robin Cannon (IBM)
    No design system team can scale enough to support an enterprise-scale business by itself. This article sheds some light on IBM’s hub and spoke model.
  • “Building a design system around collaboration, not components” (Figma, YouTube)
    It’s easy to focus your design system on the perfect component, missing out on the aspect that’ll ensure your success — collaboration. Louise From and Julia Belling (from Zalando) explain how they created and then effectively scaled their internal design system.
  • “Friends of Figma, DesignOps” (YouTube interest group)
    This group is about practices and resources that will help your design organization to grow. The core topics are centered around the standardization of design, design growth, design culture, knowledge management, and processes.
  • “Linting meets Design,” by Konstantin Demblin (George Labs)
    The author is convinced that the concept of “design linting” (in Sketch) is groundbreaking for digital design and will remain state-of-the-art for a long time.
  • “How to set up custom design linting in Figma using the Design Lint plugin,” by Daniel Destefanis (Product Design Manager at Discord)
    This is an article about Design Lint — a Figma plugin that loops through layers you’ve selected to help you find missing styles. You can check for errors such as color styles being used in the wrong way, flag components that aren’t published to your library, mark components that don’t have a description, and so on.
  • “Design Systems and Speed,” by Brad Frost
    In this Twitter thread, Brad discusses the seemingly paradoxical relationship between design systems and speed. Design systems make the product work faster. At the same time, do design systems also need to go slower?
  • “Ship Faster by Building Design Systems Slower,” by Josh Clark (Principal, Big Medium)
    Design systems should prioritize quality over speed, but product teams often have “ship at all costs” policies, prioritizing speed over quality. Actually, successful design systems move more slowly than the products they support, and the slower pace doesn’t mean that they have to be the bottleneck in the process.
  • “Design Systems,” a book by Alla Kholmatova (Smashing Magazine)
    Often, our design systems get out-of-date too quickly or just don’t get enough traction in our companies. What makes a design system effective? What works and what doesn’t work in real-life products? The book is aimed mainly at small to medium-sized product teams trying to integrate modular thinking into their organization’s culture. Visual and interaction designers, UX practitioners, and front-end developers particularly will benefit from the knowledge in this book.
  • “Making Your Collaboration Problems Go Away By Sharing Components,” by Shane Hudson (Smashing Magazine)
    Recently UXPin has extended its powerful Merge technology by adding npm integration, allowing designers to sync React component libraries without requiring any developer input.
  • “Taking The Stress Out Of Design System Management,” by Masha Shaposhnikova (Smashing Magazine)
    In this article, the author goes over five tips that make it easier to manage a design system while increasing its effectiveness. This guide is aimed at smaller teams.
  • “Around The Artifacts Of Design Systems (Case Study),” by Dan Donald (Smashing Magazine)
    Like many things, a design system isn’t ever a finished thing but a journey. How we go about that journey can affect the things we produce along the way. Before diving in and starting to plan anything out, be clear about where the benefits and the risks might lie.
  • “Design Systems: Useful Examples and Resources,” by Cosima Mielke (Smashing Magazine)
    In complex projects, you’ll sooner or later get to the point where you start to think about setting up a design system. In this article, some interesting design systems and their features will be explored, as well as useful resources for building a successful design system.

Crafting A Killer Brand Identity For A Digital Product

It may seem obvious to state that the brand should be properly reflected and showcased in all applications, whether it is an app, a website, or social media, but it’s surprising how many well-established businesses fall short in this regard. Maintaining brand consistency can be a challenge for many teams due to issues like miscommunication, disconnect between graphic and web design teams, and management missteps.

Establishing a well-defined digital brand identity that includes elements like logos, typography, color palettes, imagery, and more ensures that your brand maintains a consistent presence. This consistency not only builds trust and loyalty among your customers but also allows them to instantly recognize any of your interfaces or digital communication.

It’s also about creating an identity for your digital experience that’s a natural and cohesive extension of the overall visual style. Think of Headspace, Figma, or Nike, for example. Their distinctive look and feel are instantly recognizable wherever you encounter them.

Maintaining visual consistency also yields measurable revenue results. According to a Lucidpress survey, 68% of company stakeholders credit brand consistency with 10% to more than 20% of their revenue growth. This underscores the significance of initiating and integrating design systems to promote brand consistency.

Brand Strategy

In an ideal world, every new product would kick off with a well-thought-out brand strategy. This means defining the vision, mission, purpose, positioning, and value proposition in the market before diving into design work. While brand strategy predominantly addresses intangible aspects, it’s one of the fundamental cornerstones of a brand, alongside visuals like the logo and website. Even with stunning design, a brand can stumble in the market if its positioning isn’t unique or if the company is unsure about what it truly represents.

However, let’s face it, we often don’t have this luxury. Tight timelines, limited budgets, and stakeholders who might not fully grasp the value of a brand strategy can pose challenges. We all live in the real world, after all. In such cases, the best approach is either to encourage the stakeholders to articulate their brand strategy elements or to work collaboratively to uncover them through a series of workshops.

Defining a brand’s core strategy is the crucial starting point, establishing the foundation for future design work. The brand strategy serves as a robust framework that shapes every aspect of a brand’s presence, whether it be in marketing, on the web, or within applications.

Brand Identity Research

The research phase is where you unearth the insights to distinguish yourself in the vast arena of competitors. By meticulously analyzing consumer trends, design tendencies, and industry landscapes, you gain a deeper understanding of the unique elements that can set your brand apart. This process not only provides a solid foundation for strategic decision-making but also unveils valuable opportunities for innovation and differentiation.

Typically, the research begins with an analysis of the brand’s existing visual style, especially if it’s already established. This initial exploration serves as a valuable starting point for team discussions, facilitating a comprehensive understanding of what aspects are effective and what needs refinement.

Moving on, the next crucial step involves conducting a comprehensive industry analysis. This process entails an examination of key brand elements, such as logos, colors, and other design components utilized by competitors. This step serves as the strategic guide for making precise design decisions.

When performing industry analysis as part of brand research, aim for specificity rather than generic observations. For instance, when crafting a brand identity for a product brand, a focused investigation into app icon designs becomes imperative. The differentiation of colors among various apps emerges as a potent tool in this endeavor. According to a study conducted by the Pantone Color Institute, color plays a pivotal role, boosting brand recognition by a substantial 80%.

Moreover, it’s essential to consider app icon designs that, while not directly competitors, are ubiquitous on most phones (examples include Google apps, Chrome/Safari, Facebook, Twitter, and so on). It’s crucial that your app icon stands out distinctly, even in comparison to these widely used icons.

To bring in more innovation and fire up creativity in future designs, it’s a good call to widen your scope beyond just your competition. For instance, if sustainability is a core value of the brand, conducting a thorough examination of how brands outside the industry express and visually communicate this commitment becomes pivotal. Similarly, exploring each value outlined in the brand strategy can provide valuable insights for future design considerations.

A highly effective method for presenting research findings is to consolidate all outcomes in one document and engage in comprehensive discussions, whether in-house or with the client. In my practice, research typically serves as a reference document, offering a reliable source for revisiting and reassessing the uniqueness of our design choices and verifying alignment with identified needs. It’s also a perfect argument point for design choices made in the following phase. Essentially, the research phase functions as the guide steering the brand toward a distinctive and unique look and feel.

Brand Identity Concepts

Now, to the fun part — crafting the actual visuals for the brand. A brand concept is a unifying idea or theme. It’s an abstract articulation of the brand’s essence, an overarching idea that engages and influences the audience. Brand concepts sometimes come up organically through the brand strategy; some of them need elaborate effort and a deep search for inspiration.

There are various methods to generate unique brand ideas that seamlessly connect brand strategy, meanings, and visuals.

The ultimate goal of any brand is to establish a strong emotional connection with users, which makes having a powerful brand idea that permeates the identity, website, and app crucial.

  • Mind mapping and association
    This is a widely used approach, though it looks quite different between various creatives. Start with a diagram listing the brand’s key attributes and features. Then, build on these words with your associations. A typical mind map for me is like a tangled web of brand values, mission, visual drafts, messaging, references, and sketches, all mixed in with associations. Try to blend ideas from totally different parts of the mind map. For example, in the design of an identity for a database company, contrary segments of the mind map may include infinity symbols and bytes. The combination of these elements results in a unique design scalable to both the brand symbol and identity.

  • Reversal
    Sometimes, a brand calls for an unexpected symbol or even a mascot that doesn’t have direct ties to the industry, for example, using a monkey as the symbol for a mailing platform or a bird for a wealth management app. Delving into the process of drawing parallels between unrelated objects, engaging all our senses, and embracing a creative and randomized approach helps to generate fresh and innovative concepts.
  • Random stimuli
    There are instances when tapping into randomness can significantly boost creativity. This approach may involve anything from AI-generated concepts to team brainstorming sessions that incorporate idea shuffling and combinations, often resulting in surprising and inventive ideas.

  • Real-world references
    Designers can sometimes find themselves too deeply immersed in their design bubble. Exploring historical references, natural patterns, or influences from the tangible world can yield valuable insights for a project. It’s essential not to confine yourself solely to your workspace and Pinterest mood boards. This is particularly relevant in identity design, where these tangible parallels can provide rich sources of inspiration and meaning.

Imagine I’m crafting an identity for an adventure tours app. The last place to seek inspiration is other tour companies. Why? Because if the references are derivative, the work will be, too. Begin at the roots. Adventure tours are all about tapping into nature and connecting with your origins. The exploration would kick off by delving into the elements of nature. What sights, smells, sounds, and sensory details do these adventures offer?

That’s the essence that both clients and non-designers appreciate the most — finding tangible connections in the real world. When people can connect not just aesthetically but also emotionally or intellectually to the visuals, they become much more loyal to the brand.

Condense your design concept and ideas to highlight brand identity across diverse yet fitting contexts. Go beyond conventional applications like banners and business cards.

If you’re conceptualizing brand identity for restaurant management software, explore ways to brand the virtual payment card or create merchandise for restaurant employees. When crafting a style for a new video conferencing app, consider integrating the brand seamlessly into commonly used features, such as the ‘call’ button, and think of a way to brand the interface so that users can easily recognize your app in screenshots. Consider these aspects as you move through this project phase. Plus, taking a closer look at the industry can spark some creative ideas and bring a more down-to-earth feel to the concept.

Once the core brand visual concept gains approval and the general visual direction becomes clear, it’s time to create the assets for the brand application. It’s essential to note that no UI work should commence until the core brand identity elements, such as the logo, colors, typography, and imagery style, are developed.

Brand Identity Design And Key Assets For A Digital Product

Now, let’s delve into how you actually apply the brand identity to your interfaces and what the UI team should anticipate from the brand identity design team. One key thing to keep in mind is that users come to a website or app to get things done, not just to admire the visuals. So, we need to let them accomplish their tasks smoothly and subtly weave our brand’s visual identity into their experience.

This section lists assets, along with specific details tailored for digital applications, to make sure your UI colleagues have all they need for a smooth integration of the brand identity into the digital product.

Logo

When crafting a logo for a digital product, it’s essential to ensure that the symbol remains crisp and scalable at any dimension. Even if your symbol boasts exceptional distinctiveness, you’ll frequently require a simplified, compact version suitable for mobile use and applications like app icons or social media profile pictures. In these compact logo versions, the details take on added prominence, with negative space coming to the forefront.

Additionally, it’s highly advisable to create a compact version not just for the symbol but also for the wordmark. In such instances, you’ll typically find a taller x-height, more open apertures, and wider spacing.

One logo approach is pairing a logotype with a standalone symbol. The alternative is to feature just the logotype, incorporating a distinctive detail that can serve as an app icon or avatar. The crucial point is to maintain a strong association with the main logo. To illustrate this point, consider the example of Bolt and how they ingeniously incorporated the negative space in their logo to create a lightning symbol.

Another factor to take into account is to maintain square-like proportions for your logomark. This ensures that the logomark can be seamlessly integrated into common digital applications such as app icons, favicons, and profile pictures without appearing awkward or unbalanced within these placeholders. Ensure your logomark isn’t overly horizontal or vertical to maximize its impact across all digital platforms.

The logo and symbol are core assets of any digital brand. Typically, the logotype’s letter shapes derive from the primary font.

Typography

Typography plays a pivotal role in shaping a brand’s identity. The selection of a typeface is particularly crucial in the brand identity design phase. Unfortunately, the needs of the UI/UX team are sometimes overlooked by the brand team, especially when dealing with complex products. Typography assets can be categorized into several key components:

Primary Font

Choosing the right typeface can be a challenging task, and finding a distinctive one can be even trickier. Beyond stylistic choices like serif versus sans-serif and extended or condensed styles, selecting a primary font for a digital product involves considering various requirements. Questions to ponder include the following:

  • How many languages will your product support?
  • Will the brand use special symbols such as arrows, currency symbols, or math symbols?
  • What level of readability will the headings need, and what will be the smallest point size the headings are used at?

Body Font

Selecting the body font for a digital product demands meticulous attention. This decision can significantly impact readability and, as a result, user loyalty, especially in data-rich environments like dashboards and apps that contain numerals, text, and spreadsheets. Designers must be attentive and responsible intermediaries between users and data. Factors to consider include the following:

  • Typeface’s x-height,
  • Simplified appearance,
  • Legibility at small sizes,
  • Low or no stroke contrast to prevent readability issues.

Fonts with large apertures and open shapes are preferable to keep similar letters, such as ‘c’ and ‘o’, distinct. Increased letter spacing can enhance legibility, and typefaces should include both proportional and tabular (monospaced) digits. Special symbols like currency signs or arrows should also be considered for brand use.

Fallback fonts

In the realm of digital branding, there will be countless situations where you need to substitute your fonts. This can include falling back to a system body font in iOS or Android apps to save on expensive licensing costs, creating customizations for various countries and scripts, or adapting fonts for other platforms. The flexibility of having fallback fonts is invaluable in ensuring consistent brand representation across diverse digital touchpoints.
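As a quick illustration, a CSS font stack covers those situations with progressively safer substitutes. The “BrandSans” name below is a hypothetical brand font; everything after it is a fallback:

body {
  /* Brand font first, then platform-safe fallbacks */
  font-family: "BrandSans", "Helvetica Neue", Arial, system-ui, sans-serif;
}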

Layout Principles

Typography isn’t just about choosing fonts; it’s also about arranging them to reflect the brand uniquely. Employing the same fonts but arranging them differently can distort the brand perception.

Getting the right layout is all about finding that sweet spot between your brand’s vibe and the ever-changing design scene.

When crafting a layout, one can choose from various types that translate different brand voices. Grid-based layouts, for instance, leverage a system of rows and columns to instill order, balance, and harmony by organizing and aligning elements. Asymmetrical layouts, on the other hand, rely on contrast, tension, and movement to yield dynamic and expressive designs that command attention. Modular layouts utilize blocks or modules, fostering flexibility and adaptability while maintaining variety, hierarchy, and structure. Choosing one of the types or creating a hybrid can effectively convey your brand identity and message.

Attention to technical details is crucial, including line spacing, consistent borders, text density, and contrast between text sizes. Text alignment should be clearly defined. Creating a layout that accurately represents your brand means applying design principles that designers understand intuitively and that everyone else senses, even without being able to articulate why.

Color

Color is undoubtedly one of the most significant elements for any identity, extending beyond products and digital realms. While a unique primary color palette is vital, it is important to recognize that color is not just an aesthetic aspect but a crucial tool for usability and functionality within the brand and the product. This section highlights key areas often overlooked during the brand design process.

  • Call To Action (CTA) Color
    Brand designers frequently present an extensive palette with impressive color combinations, but this can leave UI designers unsure of the appropriate choice for Call To Action (CTA) elements. It is imperative to establish a primary CTA color during the brand identity design phase. This color must have good contrast with both light and dark backgrounds and must not unintentionally trigger associations, such as red for errors or yellow for alerts.
  • Contrast
    Brand identity tends to have more flexibility compared to the strict industry standards for legibility and contrast in screens and interfaces. Nevertheless, brand designers should always evaluate contrast and adhere to WCAG accessibility standards, too. The degree of contrast should be determined based on factors like target audience demographics and potential usage scenarios, aiming for at least AA compliance. Making your product accessible to all is a noble mission that can enhance your brand’s meaning.

  • Extended Color Palette
    To effectively implement color in user interfaces, UI designers need not only the primary colors but also various tints of these colors to indicate different UI statuses. Semantic colors, like red for caution and green for positive connotations, are valuable tools for emphasizing critical elements or providing quick visual feedback. Tints of CTA colors and other hues are essential for indicating states such as hover, click, or visited elements. It’s preferable to define these nuanced details in the brand identity guideline. This ensures the uniform use of the same colors, whether in marketing materials or the product interface.

  • Color proportions and usage
    The proportion and use of colors have a substantial impact on how a brand is perceived. Usually, the brand’s primary color should serve as an accent rather than dominate most layouts, especially in the interface. Collaborating with UI colleagues to establish a color usage chart can help strike the right balance. Varying the proportions of colors creates visual interest and allows you to set the mood and energy level in a design or illustration by choosing dominant or accent colors wisely.

  • Color compensation
    Colors may appear differently on dark and light backgrounds and might lose contrast when transitioning between dark and light themes. In the modern context of platforms offering both dark and light UI versions, this factor should be considered not only for interface elements but also for logos and logomarks. Logos designed for light backgrounds typically have slightly higher brightness and saturation, while logos for dark backgrounds are less bright.
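As a rough sketch of that compensation in CSS, assuming the product follows the system color scheme and a hypothetical .logo class, the dark-theme logo can be toned down slightly:

.logo {
  /* Mark tuned for light backgrounds */
  filter: none;
}

@media (prefers-color-scheme: dark) {
  .logo {
    /* Slightly less bright and saturated so the mark sits comfortably on dark surfaces */
    filter: brightness(0.92) saturate(0.9);
  }
}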

Color leads to another component of brand identity — the illustration style — where its extensive application in various shades plays a significant role both emotionally and visually.

Scalable Illustration System

An illustration style includes pictograms, icons, and full-scale illustrations. It’s important to keep the whole system intact and maintain consistency throughout the project, ensuring a strong connection with the brand and its assets. This consistency in the illustration system also enhances the user interface’s intuitiveness.

In the context of an illustration system, “style” refers to a collection of construction techniques combined into a unified system. Pictograms and icon systems are made of consistent and reusable elements, and attention to detail is crucial to achieving uniformity. This includes sticking to the pixel grid, ensuring pixel precision, maintaining composition, using consistent rounding, making optical adjustments for intersecting lines, and keeping the thickness consistent.

Stylistically, illustrations can employ a broader arsenal compared to icons, utilizing a wider range of features such as incorporating depth effects, a broader color palette, gradients, blur, and other effects. While pictograms and icons serve more utilitarian purposes, illustrations have the unique ability to convey a deeper emotional connection to the brand. They play a crucial role in strengthening and confirming the brand message and positioning.

The elements discussed in the article play a vital role in enabling the web team to craft the UI kit and contribute to the brand’s success in the digital space. Supplying these assets is the critical minimum. Ensure that all components of your brand are clearly outlined and explained. It’s advisable to establish a guideline consolidating all the rules in one workspace with UI designers (commonly in Figma), facilitating seamless collaboration. Additionally, the brand designer should oversee the UI kit used in interfaces to ensure alignment with all identity components.

Uniform Mechanism

Your digital brand should effectively communicate your broader brand identity, leaving no room for doubt about your values and positioning. The brand acts as the cornerstone, ensuring consistency in the digital product. A well-designed digital product seamlessly integrates all its components, resulting in a cohesive user experience that enhances user loyalty.

Ensure you maintain effective communication with the UI team throughout the whole project. From my experience, despite things appearing straightforward in the brand guidelines and easy to implement, misunderstandings can still occur between the brand identity team and the UI team. Common challenges, such as letter spacing in brand typography, can arise.

The consistent and seamless integration of brand elements into the UI design ensures the brand’s effectiveness. Whether you have a small or large design team, whether it’s in-house or external, incorporating branding into your digital product development is crucial for achieving better results. Remember, while a brand can exist independently, a product cannot thrive without branding.

Creating Accessible UI Animations

Ever since I started practicing user interface design, I’ve always believed that animations are an added bonus for enhancing user experiences. After all, who hasn’t been captivated by interfaces created for state-of-the-art devices with their impressive effects, flips, parallax, glitter, and the like? It truly creates an enjoyable and immersive experience, don’t you think?

Mercado Libre is the leading e-commerce and fintech platform in Latin America, and we leverage animations to guide users through our products and provide real-time feedback. Plus, the animations add a touch of fun by creating an engaging interface that invites users to interact with our products.

Well-applied and controlled animations are capable of reducing cognitive load and delivering information progressively — even for complex flows that can sometimes become tedious — thereby improving the overall user experience. Yet, when we talk about caring for creating value for our users, are we truly considering all of them?

After delving deeper into the topic of animations and seeking guidance from our Digital Accessibility team, my team and I have come to realize that animations may not always be a pleasant experience for everyone. For many, animations can generate uncomfortable experiences, especially when used excessively. For certain other individuals, including those with attention disorders, animations can pose an additional challenge by hindering their ability to focus on the content. Furthermore, for those afflicted by more severe conditions, such as those related to balance, any form of motion can trigger physical discomfort manifested as nausea, dizziness, and headaches.

These reactions are symptoms of vestibular disorders, which result from damage, injury, or illness in the inner ear, the system responsible for processing sensory information related to balance control and eye movements.

In more extreme cases, individuals with photosensitive epilepsy may experience seizures in response to certain types of visual stimuli. If you’d like to learn more about motion sensitivity, there are plenty of resources on the topic that make a nice place to start.

How is it possible to strike a balance between motion sensitivities and our goal of using animation to enhance the user interface? That is what our team wanted to figure out, and I thought I’d share how we approached the challenge. So, in this article, we will explore how my team tackles UI animations that are inclusive and considerate of all users.

We Started With Research And Analysis

When we realized that some of our animations might cause annoyance or discomfort to users, we were faced with our first challenge: Should we keep the animations or remove them altogether? If we remove them, how will we provide feedback to our users? And how will not having animations impact how users understand our products?

We tackled this in several steps:

  1. We organized collaborative sessions with our Digital Accessibility team to gain insights.
  2. We conducted in-depth research on the topic to learn from the experiences and lessons of other teams that have faced similar challenges.

Note: If you’re unfamiliar with the Mercado Libre Accessibility Team’s work, I encourage you to read about some of the things they do over at the Mercado Libre blog.

We walked away with two specific lessons to keep in mind as we considered more accessible UI animations.

Lesson 1: Animation ≠ Motion

During our research, we discovered an important distinction: animation is not the same as motion. While all moving elements are animated, not every animated element involves motion in the sense of a change in position.

The Web Content Accessibility Guidelines (WCAG) include three criteria related to motion in interfaces:

  1. Pause, stop, and hide
    According to Success Criterion 2.2.2 (Level A), we ought to allow users to pause, stop, or hide any moving, flashing, or scrolling content that starts automatically, lasts longer than five seconds, and is presented in parallel with other content, as well as content that updates automatically.
  2. Moving or flashing elements
    Guideline 2.3 covers avoiding seizures and negative physical reactions, including Success Criteria 2.3.1 (Level A) and 2.3.2 (Level AAA), which restrict content that flashes more than three times per second, as such flashing could trigger seizures.
  3. Animation from interactions
    Success Criterion 2.3.3 (Level AAA) specifies that motion animation triggered by an interaction should be possible to disable unless the animation is essential to the functionality or the information being conveyed.

These are principles that we knew we could lean on while figuring out the best approach for using animations in our work.

Lesson 2: Rely On Reduced Motion Preferences

Our Digital Accessibility team made sure we were aware of the prefers-reduced-motion media query and how it can be used to prevent or limit motion. macOS, for example, provides a “Reduce motion” setting in the System Settings.

As long as that setting is enabled and the browser supports it, we can use prefers-reduced-motion to configure animations in a way that respects that preference.

:root {
  --animation-duration: 250ms;
}

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  /* Remove animations when a user requests a reduced animation experience */
  .animated {
    --animation-duration: 0ms !important;
  }
}

Eric Bailey is quick to remind us that reduced motion is not the same as no motion. There are cases where removing animation will prevent the user’s understanding of the content it supports. In these cases, it may be more effective to slow things down rather than remove them completely.

:root {
  --animation-duration: 250ms; 
}

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  /* Increase duration to slow animation when reduced animation is preferred */
  * {
    --animation-duration: 6000ms !important; 
  }
}
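The same preference can be read from JavaScript for script-driven animations. Here is a minimal sketch using matchMedia and the Web Animations API; the animateIn helper and its timings are hypothetical:

const reducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

// Hypothetical helper: fade an element in, slowing the animation
// down considerably when the user prefers reduced motion.
function animateIn(element) {
  element.animate([{ opacity: 0 }, { opacity: 1 }], {
    duration: reducedMotion.matches ? 6000 : 250,
    fill: "forwards"
  });
}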

Armed with a better understanding that animation doesn’t always mean changing positions and that we have a way to respect a user’s motion preferences, we felt empowered to move to the next phase of our work.

We Defined An Action Plan

When faced with the challenge of integrating reduced motion preferences without significantly impacting our product development and UX teams, we posed a crucial question to ourselves: How can we effectively achieve this without compromising the quality of our products?

We are well aware that implementing broad changes to a design system is not an easy task, as it subsequently affects all Mercado Libre products. It requires strategic and careful planning. That said, we also embrace a mindset of beta and continuous improvement. After all, how can you improve a product daily without facing new challenges and seeking innovative solutions?

With this perspective in mind, we devised an action plan with clear criteria and actionable steps. Our goal is to seamlessly integrate reduced motion preferences into our products and contribute to the well-being of all our users.

Taking into account the criteria established by the WCAG and the distinction between animation and motion, we classified animations into three distinct groups:

  1. Animations that are not subject to the criteria;
  2. Non-essential animations that can be removed;
  3. Essential animations that can be adapted.

Let me walk you through those in more detail.

1. Animations That Are Not Subject To The Criteria

We identified animations that do not involve any type of motion and, therefore, require no adjustments, as they pose no triggers for users with vestibular disorders or reduced motion preferences.

Animations in this first group include:

  • Objects that instantly appear and disappear without transitions;
  • Elements that transition color or opacity, such as changes in state.

A button that changes color on hover is an example of an animation included in this group.

Button changing color on mouse hover. (Large preview)

As long as we are not applying some sort of radical change on a hover effect like this — and the colors provide enough contrast for the button label to be legible — we can safely assume that it is not subject to accessibility guidelines.
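For instance, a hover effect along these lines transitions only the button’s color, so no motion is involved at all; the class name and colors are illustrative:

.button {
  background-color: #0a66ff;
  transition: background-color 150ms ease-in-out;
}

.button:hover {
  background-color: #0852cc;
}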

2. Non-Essential Animations That Can Be Removed

Next, we categorized animations with motion that is non-essential to the interface, contrasting them with those that add context or help the user navigate. We consider non-essential animations to be those that are not crucial for understanding the content or state of the interface and that could cause discomfort or distress to some individuals.

This is how we defined animations that are included in this second group:

  • Animated objects that take up more than one-third of the screen or move across a significant distance;
  • Elements with autoplay or automatic updates;
  • Parallax effects, multidirectional movements, or movements along the Z-axis, such as changes in perspective;
  • Content with flashes or looping animations;
  • Elements with vortex, scaling, zooming, or blurring effects;
  • Animated illustrations, such as morphing SVG shapes.

These are the animations we decided to completely remove when a user has enabled reduced motion preferences since they do not affect the delivery of the content, opting instead for a more accessible and comfortable experience.

Some of this is subjective and takes judgment. A few articles and resources helped us define the scope for this group of animations and are worth seeking out if you’re curious.

For objects that take up more than one-third of the screen or move position across a significant distance, we opted for instant transitions over smooth ones to minimize unnecessary movements. This way, we ensure that crucial information is conveyed to users without causing any discomfort yet still provide an engaging experience in either case.

Comparing a feedback screen with animations that take up more than one-third of the screen versus the same screen with instant animations. (Large preview)

Other examples of animations we completely remove include elements that autoplay, auto-update, or loop infinitely. This might be a video or, more likely, a carousel that transitions between panels. Whatever the case, the purpose of removing movement from animations that are “on” by default is that it helps us conform to WCAG Success Criterion 2.2.2 (Level A) because we give the user absolute control to decide when a transition occurs, such as navigating between carousel panels.

Additionally, we decided to eliminate the horizontal sliding effect from each transition, opting instead for instantaneous changes that do not contribute to additional movement, further preventing the possibility of triggering vestibular disorders.

Comparing an auto-playing carousel with another carousel that incorporates instant changes instead of smooth transitions. (Large preview)
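In CSS terms, achieving those instant changes can be as simple as dropping the sliding transition and smooth scrolling when reduced motion is requested; the carousel class names are hypothetical:

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  .carousel {
    /* Programmatic scrolling jumps instead of gliding */
    scroll-behavior: auto;
  }

  .carousel__track {
    /* Panels snap into place rather than sliding */
    transition: none;
  }
}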

Along these same lines, we decided that parallax effects and any multidirectional movements that involve scaling, zooming, blurring, and vortex effects are also included in this second group of animations that ought to be replaced with instant transitions.

Comparing a card flip animation that uses smooth transitions with one that transitions instantly. (Large preview)

The last type of animation that falls in this category is animated illustrations. Rather than allowing them to change shape as they normally would, we merely display a static version. This way, the image still provides context for users to understand the content without the need for additional movement.

Comparing an animated illustration with the same illustration without motion. (Large preview)
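Assuming the illustration is animated with CSS keyframes (the selector below is illustrative), the static version can be enforced with a single rule:

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  .illustration * {
    /* Freeze the artwork in its static, unanimated state */
    animation: none !important;
  }
}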

3. Essential Animations That Can Be Adapted

The third and final category of animations includes ones that are absolutely essential to use and understand the user interface. This could potentially be the trickiest of them all because there’s a delicate balance to strike between essential animation and maintaining an accessible experience.

That is why we opted to provide alternative animations when the user prefers reduced motion. In many of these cases, it’s merely a matter of adjusting or reducing the animation so that users are still able to understand what is happening on the screen at all times, but without the intensity of the default configuration.

The best way we’ve found to do this is by adjusting the animation in a way that makes it more subtle. For example, adjusting the animation’s duration so that it plays longer and slower is one way to meet the challenge.

The loading indicator in our design system is a perfect case study. Is this animation absolutely necessary? It is, without a doubt, as it gives the user feedback on the interface’s activity. If it were to stop without the interface rendering updates, then the user might interpret it as a page error.

Rather than completely removing the animation, we picked it apart to identify what aspects could pose issues:

  • It could rotate considerably fast.
  • It constantly changes scale.
  • It runs in an infinite loop until it vanishes.

The loading indicator. (Large preview)

Considering the animation’s importance in this context, we proposed an adaptation of it that meets these requirements:

  • Reduce the rotation speed.
  • Eliminate the scaling effect.
  • Set the maximum duration to five seconds.

Comparing the loading indicator with and without reduced motion preferences enabled. (Large preview)
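To make this concrete, here is a rough sketch of such an adaptation for a CSS keyframe spinner. The class names, keyframes, and timing values are illustrative rather than the exact ones from our design system:

.loading-indicator {
  animation: spin-and-pulse 600ms linear infinite;
}

@keyframes spin-and-pulse {
  50% { transform: rotate(180deg) scale(1.15); }
  100% { transform: rotate(360deg) scale(1); }
}

@keyframes spin-only {
  to { transform: rotate(360deg); }
}

@media screen and (prefers-reduced-motion: reduce), (update: slow) {
  .loading-indicator {
    /* Slower rotation, no scaling; two 2.5s turns keep the total duration under five seconds */
    animation: spin-only 2500ms linear 2;
  }
}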

The bottom line:

Animation can be necessary and still mindful of reduced motion preferences at the same time.

This is the third and final category we defined to help us guide our decisions when incorporating animation in the user interface, and with this guidance, we were able to tackle the third and final phase of our work.

We Expanded It Across All Our Products

After gaining a clear understanding of the necessary steps in our execution strategy, we decided to begin integrating the reduced motion preferences we defined in our design system across all our product interfaces. Anyone who manages or maintains a design system knows the challenges that come with it, particularly when it comes to implementing changes organically without placing additional burden on our product teams.

Our approach was rooted in education.

Initially, we focused on documenting the design system, creating a centralized and easily accessible resource that offered comprehensive information on accessibility for animations. Our focus was on educating and fostering empathy among all our teams regarding the significance of reduced motion preferences. We delved into the criteria related to motion, how to achieve it, and, most importantly, explaining how our users benefit from it.

We also addressed technical aspects, such as when the design system automatically adapts to these preferences and when the onus shifts to the product teams to tailor their experiences while proposing and implementing animations in their projects. Subsequently, we initiated a training and awareness campaign, commencing with a series of company-wide presentations and the creation of accessibility articles like the one you’re reading now!

Conclusion

Our design system is the ideal platform to apply global features and promote a culture of teamwork and consistency in experiences, especially when it comes to accessibility. Don’t you agree?

We are now actively working to ensure that whenever our products detect reduced motion settings on our users’ devices, they automatically adapt to those needs, thus providing enhanced value in their experiences.

How about you? Are you adding value to the user experience of your interfaces with accessible animation? If so, what principles or best practices are you using to guide your decisions, and how is it working for you? Please share in the comments so we can compare notes.

Exploring Enhanced Patterns In WordPress 6.3

Reusable blocks, introduced in WordPress 5.0, allow users to create and save custom blocks that can be used across different pages or posts. This increases efficiency and consistency by allowing users to create personalized blocks of content that can be easily reused. Subsequently, in WordPress 5.5, block patterns were introduced, allowing users to design layout patterns comprised of multiple blocks.

While reusable blocks have allowed users to create their own content blocks that can be reused across the site while maintaining consistency, block patterns have offered a convenient way to quickly apply common design patterns to pages and posts.

Reusable blocks and block patterns may seem similar at first glance, but there is one crucial distinction between them. Reusable blocks can be easily created directly in the Post Editor, allowing users to generate and reuse their own custom content blocks. In contrast, block patterns are established patterns installed or registered in block themes that cannot be created directly in the WordPress admin.

Starting with WordPress 6.3, reusable blocks and block patterns have been combined to form a feature called “Patterns” that provides users with the flexibility to choose whether they want to synchronize all instances of a pattern — similar to reusable blocks — or apply patterns without syncing content. The new functionality, available now in the Post Editor, empowers users to craft patterns that can function as both reusable blocks and patterns, catering to their specific requirements.

Creating Reusable Blocks In WordPress 6.2

Selecting the “Create Reusable block” option in a block’s contextual menu triggers a popup that prompts you to name the reusable block.

Once named, the reusable block is saved and can be accessed in the Block Inserter. It’s a little tough to spot because it is the only section of the Block Inserter that is labeled with an icon instead of a text label.

Perhaps a more convenient way to access the block is to type a forward slash (/) in the Post Editor, followed by the reusable block’s name.

Making changes to a reusable block isn’t difficult, but finding where to make changes is. You must click on the Post Editor settings while editing a page or post, then select the “Manage Reusable blocks” option.

This will take you to another new editing screen where you can directly edit reusable blocks as you like. I sometimes bookmark this screen as a shortcut. Once saved, changes to reusable blocks are applied throughout the site.

Creating Block Patterns In WordPress 6.2

Unlike reusable blocks, site creators are unable to create block patterns from the Post Editor. Instead, they are treated more like plugins, where block patterns are installed and activated before they are available in the Post Editor. Once they are available, they can be accessed with the Block Inserter or a forward slash command the same way reusable blocks are added to pages and posts.

The neat thing about this plugin-like treatment is that there is a Patterns Directory full of patterns created and submitted by the WordPress community, just like we have the Plugins Directory. But it also means that patterns have to be developed and then installed or registered in a theme before they can be used.

Registering Custom Block Patterns With PHP

The register_block_pattern() API function, first introduced in WordPress 5.5, allows theme authors to register custom block patterns:

register_block_pattern(
  'my-first-pattern/hello-world',
  array(
    'title' => __( 'Hello World', 'my-first-pattern' ),
    'description' => __( 'A simple paragraph block.', 'my-first-pattern' ),
    'content' => "<!-- wp:paragraph --><p>Hello world</p><!-- /wp:paragraph -->",
  )
);

The content argument may contain any raw HTML markup, which means it’s possible to configure a group of blocks that you want to make into a pattern directly in the Post Editor, then copy and paste that group into the content field. Pasting blocks as plain text reveals the underlying raw HTML.

We want to make that into a custom function and add an action that fires the function when the theme is initialized.

function mytheme_register_block_patterns() {
  register_block_pattern( ... );
}
add_action( 'init', 'mytheme_register_block_patterns' );

Just as a block pattern can be registered, it can be unregistered programmatically using the unregister_block_pattern() function. All it takes is the pattern’s name.

function mytheme_unregister_my_patterns() {
  unregister_block_pattern( 'my-first-pattern/hello-world' );
}
add_action( 'init', 'mytheme_unregister_my_patterns' );

Registering Custom Block Patterns Via The /patterns Directory

Not to be confused with the Patterns Directory I shared earlier, where you can find and install patterns made by community contributors, WordPress has, since version 6.0, also supported registering block patterns in a /patterns file directory that lives in the theme folder.

The process to register a block pattern from here is similar to the PHP approach. In fact, each pattern is contained in its own PHP file holding the same raw HTML that could be copied and pasted into the register_block_pattern() function’s content argument… only the function is not required.

Here is an example showing a pattern called “Footer with text” that is saved as footer.php in the /patterns folder:

<?php
/**
 * Title: Footer with text.
 * Slug: theme-slug/footer
 * Categories: site-footer
 * Block Types: core/template-parts/footer
 * Viewport Width: 1280
 */
?>
<!-- block markup here -->

This particular example demonstrates another feature of block patterns: contextual block types. Declaring the “Block Types” property as core/template-parts/footer attaches the pattern to the theme’s footer template part (an HTML file located in the template parts folder that sits alongside the /patterns folder). The benefit of attaching a block pattern to a block type is that it registers the pattern as an available transform of that block type, which is a fancy way of saying that the pattern is applied on top of another block. That way, there’s no need to modify the structure of the existing template part to apply the pattern, which is sort of similar to how we typically think of child theming, but with patterns instead.

Want to add your custom block pattern to a theme template? That’s possible with the wp:pattern context:

<!-- wp:pattern { "slug":"prefix/pattern-slug" } /-->

An entire template can be created with nothing but block patterns if you’d like. The following example is taken from Automattic’s Archeo theme. The theme’s home.html template file clearly demonstrates how a template can be constructed from previously registered patterns, pattern files in the /patterns theme folder, and the wp:pattern context:

<!-- wp:template-part { "slug":"header","tagName":"header" } /-->

<!-- wp:group { "layout":{ "inherit":true } } -->
  <div class="wp-block-group">
    <!-- wp:pattern { "slug":"archeo/image-with-headline-description" } /-->
    <!-- wp:pattern { "slug":"archeo/simple-list-of-posts-with-background" } /-->
    <!-- wp:pattern { "slug":"archeo/layered-images-with-headline" } /-->
  </div>
<!-- /wp:group -->

<!-- wp:template-part { "area":"footer","slug":"footer","tagName":"footer" } /-->

The theme’s footer.php pattern is added to the /parts/footer.html template file before it is used in the home.html template, like this:

<!-- wp:pattern { "slug":"archeo/footer" } /-->

Additional information about registering block patterns is available in the WordPress Theme Handbook. You can also discover many use cases for block patterns in the explainer of Automattic’s themes repository on GitHub.

Reusable Blocks And Patterns In WordPress 6.3

WordPress 6.3 is notable for many reasons, one being that the reusable blocks and block patterns features are combined into a single feature simply called Patterns. The idea is that reusable blocks and block patterns are similar enough in nature that we can decide whether or not a pattern is reusable at the editing level. Instead of determining up-front whether or not you need a reusable block or a block pattern, create a Pattern and then determine whether to sync the Pattern’s content across the site.

The result is a single powerful feature that gives us the best of both worlds. WordPress 6.3 not only combined the reusable blocks and block patterns but made UI changes to the WordPress admin as well. Let’s zero in on those changes and how Patterns work in the new system.

Creating Synced Patterns

Not only are Patterns offered in the Site Editor, but they can be inserted into a page or post with the Post Editor. In fact, it works just like reusable blocks did before combining with block patterns. The only difference is that the “Create Reusable block” option in the contextual menu is now called “Create pattern/reusable block” instead.

The process for creating a pattern is mostly the same, too. Select any block or group of blocks that have been inserted into the page, open the contextual menu, and select “Create pattern/reusable block.” I hope that label becomes simply “Create Pattern” in a future release. This longer label is probably there to help with the transition.

This is where things start to diverge from WordPress 6.2. Clicking “Create pattern/reusable block” still triggers a popup asking you to name the Pattern, but what’s new is a toggle to enable synced content support.

Once the pattern is saved, it is immediately available in the Block Inserter or with a slash (/) command.

Creating Standard, Unsynced Patterns

This feature, which has been a long time coming, allows us to create our own custom patterns, akin to the flexibility of reusable blocks in the Site Editor.

Let’s demonstrate how standard, unsynced Patterns work but do it a little differently than the synced example. This time, we’ll start by copying this two-column text pattern from the Patterns Directory and pasting it into a page. I’m going to change the colors around a bit and make a few other minor tweaks to the copied pattern just for fun. I’m also naming it “Two-columns Text Unsynced Pattern” in the popup. The only difference between this Pattern and the synced Pattern we created earlier is that I’m disabling the Synced setting.

That’s really it! I just created a new custom pattern based on another pattern pulled from the Patterns Directory and can use it anywhere on my site without syncing its content. No PHP or special file directories are needed!

Patterns Are Accessible From The Site Editor

You are probably very familiar with the Site Editor. As long as your WordPress site is configured as a block theme, navigating to Appearance → Site Editor opens up the site editing interface.

WordPress 6.3 introduces a newly redesigned sidebar panel that includes options to edit navigation, styles, pages, templates, and… patterns. This is a big deal! Patterns are now treated like modular components that can be used to craft templates at the Site Editor level. In other words, block patterns are no longer relegated solely to the Post Editor.

Clicking into Patterns in the Site Editor displays all of your saved Patterns. The patterns are conveniently split up between synced and unsynced patterns, and clicking on any of them opens up an editing interface where changes can be made and saved.

Another interesting Site Editor update in WordPress 6.3 is that patterns and template parts are now together. Previous versions of WordPress put Template Parts in the Site Editor’s top-level navigation. WordPress 6.3 replaces “Template Parts” in the Site Editor navigation with “Patterns” and displays “Template Parts” alongside patterns in the resulting screen.

I’ll reserve judgment for later, but it’s possible that this arrangement opens up some confusion over the differences between patterns and template parts. That’s what happened when patterns and reusable blocks were separate but equal features with overlapping functionality that needed to be combined. I wonder if template parts will get wrapped up in the same bundle down the road now that there’s less distinction between them and patterns in the Site Editor.

Another thing to notice about the patterns interface in the Site Editor is how patterns are organized in folders in the side panel. The folders are automatically created when a pattern is registered as a contextual block pattern, as we demonstrated earlier when revisiting how block patterns worked in previous versions of WordPress. A lock icon is displayed next to a folder when the patterns are bundled with the active theme, indicating that they are core to the theme’s appearance rather than a pattern that was created independently of the theme. Locked patterns are ones you want to build off of, the same way we registered a Pattern earlier as a contextual block type.

Finally, a new pattern (or template part, for that matter) can be created directly from the Site Editor without having to leave and create it in the Post Editor. This is an extremely nice touch that prevents us from having to jump between two UIs as we’ve had to do in previous versions of WordPress.

Remember that screen I showed earlier that displays when clicking “Manage Reusable blocks” in the Post Editor? Well, now it is called “Patterns,” and it, too, is a direct link in the Site Editor.

This screen displays all custom saved patterns but does not show patterns that are bundled with the theme. This may change in future releases. Matias Ventura, Gutenberg project architect, says in this GitHub discussion thread that patterns will eventually be served through the Pattern Directory instead of being bundled resources. Maybe then we’ll see all available patterns instead of only custom patterns.

Using Patterns As Starter Templates

A common use case of the earlier Patterns API that was introduced in WordPress 6.0 has been to display a few sets of starter content patterns as options that users may choose when creating a new page template in the Site Editor. The idea is to provide you with a template with a predefined layout rather than starting with a blank template and to show a preview of the template’s configuration.

The updated Patterns API in WordPress 6.2 allows us to do this more easily by creating custom patterns for specific template types. For example, we could create a set of patterns associated with the template for single posts. Or another set of patterns for the 404 template. The benefit of this, of course, is that we are able to use patterns as starter templates!

Let’s walk through the process of using patterns as starter page templates, beginning by registering our custom patterns with our friend, register_block_pattern(). We do have the option to register patterns in the theme’s /patterns folder, as we did earlier, but I found it did not work for this use case. Let’s go with the function instead for the tour.

Registering Custom Patterns With register_block_pattern()

We’ll start with a function that registers a Pattern that we are going to associate with the theme’s 404 page template. Notice the templateTypes argument that allows us to link the pattern to the template:

function mytheme_register_block_patterns() {
  register_block_pattern(
    'wp-my-theme/404-template-pattern',
    array(
      'title' => __( '404 Only template pattern', 'wp-my-theme' ),
      'templateTypes' => array( '404' ),
      'content' => '<!-- wp:paragraph { "align":"center","fontSize":"x-large" } --><p class="has-text-align-center has-x-large-font-size">404 pattern</p><!-- /wp:paragraph -->',
    )
  );
}
add_action( 'init', 'mytheme_register_block_patterns' );

I pulled the bulk of this function from a GitHub Gist. It’s a small example, but you can see how cluttered things could get if we were registering many patterns for a single template. Plus, the more patterns registered in one file, the bigger that file gets, making the patterns as a whole difficult to read, preview, and maintain.

The default Twenty Twenty-Two WordPress theme comes with 66 patterns. That could get messy in the theme folder, but the theme smartly adds an /inc folder containing individual PHP files for each registered pattern. It’s the same sort of strategy themes have long used to break up the functions registered in functions.php and keep that file from becoming too convoluted.

For the sake of example, let’s register a few starter patterns the same way. First, we’ll add a new /inc folder to the top level of the theme folder, followed by another folder contained in it called /patterns. And in that folder, let’s add a new file called block-patterns.php. In that file, let’s add a modified version of the Twenty Twenty-Two theme’s block registration function, mapped to the pattern files we want to register for the 404 page template:

  • 404-blue.php
  • page-not-found.php

Here’s how it all looks:
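As a rough sketch, block-patterns.php might look like the following; the mytheme_ prefix is a placeholder, and each pattern file is expected to return the arguments array for register_block_pattern(), as we’ll see next:

<?php
/**
 * Register the theme's starter block patterns.
 * Each file in /inc/patterns returns the arguments array
 * expected by register_block_pattern().
 */
function mytheme_register_starter_patterns() {
  foreach ( array( '404-blue', 'page-not-found' ) as $pattern ) {
    $args = require get_theme_file_path( "inc/patterns/{$pattern}.php" );
    register_block_pattern( 'mytheme/' . $pattern, $args );
  }
}
add_action( 'init', 'mytheme_register_starter_patterns' );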

Let’s turn our attention to the patterns themselves. Specifically, let’s open up the 404-blue.php file and add the code from this Pattern in the Patterns Directory and this one as well:

<?php
/**
  * Blue pattern
  * source: https://wordpress.org/patterns/pattern/seo-friendly-404-page/
  */

return array(
  'title' => __( '404 Blue', 'mytheme' ),
  'categories' => array( 'post' ),
  'templateTypes' => array( '404' ),
  'inserter' => true,
  'content' => '<!-- wp:columns { "align":"full" } -->
<div class="wp-block-columns alignfull"><!-- wp:column { "width":"100%" } -->
<div class="wp-block-column" style="flex-basis:100%"><!-- wp:columns { "style":{" color":{ "gradient":"linear-gradient(308deg,rgba(6,147,227,1) 0%,rgb(155,81,224) 100% )" },"spacing":{ "padding":{ "right":"20px","bottom":"100px","left":"20px","top":"100px"} } } } -->
<div class="wp-block-columns has-background" style="background:linear-gradient(308deg,rgba(6,147,227,1) 0%,rgb(155,81,224) 100%);padding-top:100px;padding-right:20px;padding-bottom:100px;padding-left:20px"><!-- wp:column { "width":"1920px" } -->
<div class="wp-block-column" style="flex-basis:1920px"><!-- wp:heading { "textAlign":"center","level":1,"style":{ "typography":{ "textTransform":"uppercase","fontSize":"120px" } },"textColor":"white" } -->
<h1 class="has-text-align-center has-white-color has-text-color" style="font-size:120px;text-transform:uppercase"><strong>404</strong></h1>
<!-- /wp:heading -->

<!-- wp:heading { "textAlign":"center","style":{ "typography":{ "textTransform":"uppercase" } },"textColor":"white" } -->
<h2 class="has-text-align-center has-white-color has-text-color" style="text-transform:uppercase">😭 <strong>Page Not Found</strong> 💔</h2>
<!-- /wp:heading -->

<!-- wp:paragraph { "align":"center","textColor":"white" } -->
<p class="has-text-align-center has-white-color has-text-color">The page you are looking for might have been removed had it's name changed or is temporary unavailable. </p>
<!-- /wp:paragraph -->

<!-- wp:search { "label":"","showLabel":false,"placeholder":"Try Searching for something else...","width":100,"widthUnit":"%","buttonText":"Search","buttonPosition":"no-button","align":"center","style":{ "border":{ "radius":"50px","width":"0px","style":"none" } },"backgroundColor":"black","textColor":"white" } /-->

<!-- wp:paragraph { "align":"center","textColor":"white" } -->
<p class="has-text-align-center has-white-color has-text-color">💡 Or you can return to our <a href="#">home page</a> or <a href="#">contact us</a> if you can't find what you are looking for</p>
<!-- /wp:paragraph -->

<!-- wp:buttons { "layout":{"type":"flex","justifyContent":"center" } } -->
<div class="wp-block-buttons"><!-- wp:button { "backgroundColor":"black","textColor":"white","style":{ "border":{ "radius":"50px" },"spacing":{ "padding":{ "top":"15px","right":"30px","bottom":"15px","left":"30px" } } } } -->
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-black-background-color has-text-color has-background" style="border-radius:50px;padding-top:15px;padding-right:30px;padding-bottom:15px;padding-left:30px">Go to Homepage</a></div>
<!-- /wp:button -->

<!-- wp:button { "backgroundColor":"black","textColor":"white","style":{ "border":{ "radius":"50px" },"spacing": { "padding":{ "top":"15px","bottom":"15px","left":"60px","right":"60px" } } } } -->
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-black-background-color has-text-color has-background" style="border-radius:50px;padding-top:15px;padding-right:60px;padding-bottom:15px;padding-left:60px">Contact Us</a></div>
<!-- /wp:button --></div>
<!-- /wp:buttons -->

<!-- wp:paragraph { "align":"center","textColor":"white","fontSize":"small" } -->
<p class="has-text-align-center has-white-color has-text-color has-small-font-size">Find the page at our <a href="#sitemap">sitemap</a></p>
<!-- /wp:paragraph --></div>
<!-- /wp:column --></div>
<!-- /wp:columns --></div>
<!-- /wp:column --></div>
<!-- /wp:columns -->',
);

Once again, I think it’s worth calling out the templateTypes argument, as we’re using it to link this “404 Blue” pattern to the 404 page template. This way, the pattern is only registered to that template and that template alone.

Now that we’ve finished adding the right folders and files and have registered the “404 Blue” pattern to the 404 page template, we can create the 404 page template and see our patterns at work:

  • Open up the WordPress admin and navigate to the Site Editor (Appearance → Editor).
  • Open the Templates screen by clicking “Templates” in the Site Editor side panel.
  • Click “Add New Template”.
  • Select the “Page: 404” option.

Selecting the 404 page template triggers a popup modal that prompts you to choose a pattern for the page using — you guessed it — the patterns we just registered! The default starter pattern established by the theme is displayed as well.

Custom Template With Starter Patterns

What we just did was create a set of patterns linked to the theme’s 404 page template. But what if we want to link a pattern set to a custom page template? When the Site Editor was first introduced, it only supported a few core page templates, like page, post, and front page. Now, however, we not only have more options but the choice to create a custom page template as well.

So, let’s look at that process by adding new files to the /inc/patterns folder we created in the last example:

  • about-me.php,
  • my-portfolio.php.

We won’t grab code examples for these since we spelled out the full process in the last example. But I will point out that the main difference is that we change the templateTypes argument in each pattern file so that it links the patterns to the custom templates we plan on creating in the Site Editor:

<?php
/**
  * About Me
  * source: https://wordpress.org/patterns/pattern/seo-friendly-404-page/
  */

return array(
  'title' => __( 'About Me', 'mytheme' ),
  'categories' => array( 'post' ),
  'templateTypes' => array( 'portfolio', 'author' ),
  // etc.
);

Now we can go back to the Site Editor, open the Templates screen, and select “Add new template” as we did before. But this time, instead of choosing one of the predefined template options, we will click the “Custom template” option at the bottom. From there, we get a new prompt to name the custom template. We’ll call this one “My Portfolio”:

Next, we could try to choose patterns for the template, but it leads to a blank page at the time of this writing. Instead, we can skip that step, open the template in the editor, and add the patterns to the template there as you would any other block or pattern. Click the + button in the top-left corner of the editor to open the block inserter side panel, then open the “Patterns” tab and select patterns to preview them in the custom template.

As a side note, do you see how the patterns are bundled in categories (e.g., Featured, Posts, Text, and so on)? That’s what the categories argument in the pattern file’s return array sets. If a pattern is not assigned a category, then it will automatically go into an “Unclassified” category.

The WordPress Developer Blog provides additional examples of custom starter templates.

Using Patterns In The Post Editor

We can insert custom patterns into pages and posts using the Post Editor in the same way we can insert them into templates using the Site Editor. In the Post Editor, any custom patterns that are registered but not linked to specific templates are listed in the “My patterns” category of the Block Inserter’s “Patterns” tab.

This discussion on GitHub suggests that displaying categories for custom patterns will be prioritized for a future release.

Using Patterns From The Patterns Directory

We’ve certainly danced around this topic throughout the rest of the examples we’ve covered. We’ve been copying and pasting items from the Patterns Directory to register our own custom patterns and link them to specific page templates. But let’s also see what it’s like to use a pattern directly from the Patterns Directory without modifying anything.

If you’ve installed a plugin from the Plugins Directory, then you are already familiar with installing patterns from the Patterns Directory. It’s the same concept: members from the community contribute open-source patterns, and anyone running a WordPress site can use them.

The library allows users to select patterns that are contributed by the “community” or “curated” by the WordPress.org team, all of which fall in a wide range of different categories, from Text and Gallery to Banners and Call to Action, among many others.

Adding a pattern to a site isn’t exactly the same as installing a plugin. A plugin can be installed directly from the Plugins Directory via the WordPress admin and activated from there. Patterns, however, are added to a block theme’s theme.json file, registered in the patterns array using the pattern’s slug. Multiple patterns can be registered as comma-separated values:

{
  "version": 2,
  "patterns": [ "short-text", "patterns-slug" ],
  // etc.
}

The following example uses a pattern called “Slanted Style Call To Action” from the Patterns Directory. It is used in the theme.json file of a theme I cloned from the default Twenty Twenty-Three theme:

{
  "version": 2,
  "patterns": [ "slanted-pattern", "slanted-style-call-to-action" ]
}

Now, we can view the newly added pattern in the Post Editor by opening the Block Inserter and selecting the Patterns tab, where the pattern is listed. Similarly, it’s possible to use the Block Inserter’s search function to pull up the pattern:

For those of you who would like to use patterns directly from the Pattern Directory without first registering them, the GutenbergHub team has created a page builder app that makes that possible. They have an introductory video that demonstrates it.

You can copy the code from the app and paste it into a site, which makes it much easier to build complex layout patterns in a low-code fashion. Jamie Marsland shows in this short video (at 1:27) how the app can be used to create an entire page layout, similar to a full-fledged page builder, by selecting desired page sections from the Patterns Directory.

Learn more about creating starter patterns in the “Utilizing patterns” section of the WordPress Developer Resources documentation.

Aspect Ratio For Large Images

You may have already noticed that the core/image block didn’t previously offer dimension or aspect-ratio controls for images added to the block. With WP 6.3, you can control the aspect ratio of an image, and that ratio is preserved when you replace the image with another one of a different size.

This feature will be helpful when replacing images in block patterns. This short video shows you how image aspect ratio can be used in block patterns.

For an additional in-depth discussion and rationale, please visit GitHub PRs #51078, #51144, #50028, and #48079.

Wrapping Up

In this article, we discussed the new evolving block patterns feature in WordPress 6.3 and showed a few use cases for creating custom patterns within the Site Editor. This new feature provides users with unlimited ways to arrange blocks and save them as patterns for widespread use. The integration of reusable blocks and traditional patterns within the Site and Post Editors aims to streamline workflows, enhance content creation, and prepare for upcoming enhancements in WordPress 6.4.

In addition, the WordPress 6.4 roadmap includes more advanced features for patterns to look forward to.

You can check out this WordPress TV video to learn more details about how the block patterns are evolving. Additionally, work-in-progress issues can be tracked on GitHub.

Note: Since this article was written, WordPress 6.4 Beta 1 has been released. The new release allows users to better organize synced and unsynced patterns with categories as part of the creation process. Please refer to the release note for more up-to-date information.

Further Reading

Using AI To Detect Sentiment In Audio Files

I don’t know if you’ve ever used Grammarly’s service for writing and editing content. But if you have, then you no doubt have seen the feature that detects the tone of your writing.

It’s an extremely helpful tool! It can be hard to know how something you write might be perceived by others, and this can help affirm or correct you. Sure, it’s some algorithm doing the work, and we know that not all AI-driven stuff is perfectly accurate. But as a gut check, it’s really useful.

Now imagine being able to do the same thing with audio files. How neat would it be to understand the underlying sentiments captured in audio recordings? Podcasters especially could stand to benefit from a tool like that, not to mention customer service teams and many other fields.

An audio sentiment analysis has the potential to transform the way we interact with data.

That’s what we are going to accomplish in this article.

The idea is fairly straightforward:

  • Upload an audio file.
  • Convert the content from speech to text.
  • Generate a score that indicates the type of sentiment it communicates.

But how do we actually build an interface that does all that? I’m going to introduce you to three tools and show how they work together to create an audio sentiment analyzer.

But First: Why Audio Sentiment Analysis?

By harnessing the capabilities of an audio sentiment analysis tool, developers and data professionals can uncover valuable insights from audio recordings, revolutionizing the way we interpret emotions and sentiments in the digital age. Customer service, for example, is crucial for businesses aiming to deliver personable experiences. We can surpass the limitations of text-based analysis to get a better idea of the feelings communicated by verbal exchanges in a variety of settings, including:

  • Call centers
    Call center agents can gain real-time insights into customer sentiment, enabling them to provide personalized and empathetic support.
  • Voice assistants
    Companies can improve their natural language processing algorithms to deliver more accurate responses to customer questions.
  • Surveys
    Organizations can gain valuable insights and understand customer satisfaction levels, identify areas of improvement, and make data-driven decisions to enhance overall customer experience.

And that is just the tip of the iceberg for one industry. Audio sentiment analysis offers valuable insights across various industries. Consider healthcare as another example. Audio analysis could enhance patient care and improve doctor-patient interactions. Healthcare providers can gain a deeper understanding of patient feedback, identify areas for improvement, and optimize the overall patient experience.

Market research is another area that could benefit from audio analysis. Researchers can leverage sentiment to gain valuable insights into a target audience’s reactions, using audio speech data from interviews, focus groups, or even social media interactions where audio is used; those insights could feed into everything from competitor analyses to brand refreshes.

I can also see audio analysis being used in the design process. Like, instead of asking stakeholders to write responses, how about asking them to record their verbal reactions and running those through an audio analysis tool? The possibilities are endless!

The Technical Foundations Of Audio Sentiment Analysis

Let’s explore the technical foundations that underpin audio sentiment analysis. We will delve into machine learning for natural language processing (NLP) tasks and look into Streamlit as a web application framework. These essential components lay the groundwork for the audio analyzer we’re making.

Natural Language Processing

In our project, we leverage the Hugging Face Transformers library, a crucial component of our development toolkit. Developed by Hugging Face, the Transformers library equips developers with a vast collection of pre-trained models and advanced techniques, enabling them to extract valuable insights from audio data.

With Transformers, we can supply our audio analyzer with the ability to classify text, recognize named entities, answer questions, summarize text, translate, and generate text. Most notably, it also provides speech recognition and audio classification capabilities. Basically, we get an API that taps into pre-trained models so that our AI tool has a starting point rather than us having to train it ourselves.
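
To get a feel for how little code that requires, here’s a minimal sketch of the pipeline API (with no model specified, pipeline() falls back to a default English sentiment model; the printed score below is illustrative):

from transformers import pipeline

# Build a ready-to-use classifier backed by a pre-trained model
classifier = pipeline("sentiment-analysis")
print(classifier("I love this podcast!"))
# Example output: [{'label': 'POSITIVE', 'score': 0.9997}]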

UI Framework And Deployments

Streamlit is a web framework that simplifies the process of building interactive data applications. What I like about it is that it provides a set of predefined components that work well in the command line with the rest of the tools we’re using for the audio analyzer, not to mention that we can deploy directly to their service to preview our work. It’s not required, though, as there may be other frameworks you are more familiar with.

Building The App

Now that we’ve established the two core components of our technical foundation, we will next walk through the implementation:

  1. Setting up the development environment,
  2. Performing sentiment analysis,
  3. Integrating speech recognition,
  4. Building the user interface, and
  5. Deploying the app.

Initial Setup

We begin by importing the libraries we need:

import os
import traceback
import streamlit as st
import speech_recognition as sr
from transformers import pipeline

We import os for system operations, traceback for error handling, streamlit (st) as our UI framework and for deployments, speech_recognition (sr) for audio transcription, and pipeline from Transformers to perform sentiment analysis using pre-trained models.

The project folder can be a pretty simple single directory with the following files:

  • app.py: The main script file for the Streamlit application.
  • requirements.txt: File specifying project dependencies.
  • README.md: Documentation file providing an overview of the project.
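
For reference, a minimal requirements.txt for this stack might look like the following (version pins are omitted, and torch is assumed here as the backend for the Transformers pipeline):

streamlit
SpeechRecognition
transformers
torch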

Creating The User Interface

Next, we set up the layout, courtesy of Streamlit’s framework. We can create a spacious UI by calling a wide layout:

st.set_page_config(layout="wide")

This ensures that the user interface provides ample space for displaying results and interacting with the tool.

Now let’s add some elements to the page using Streamlit’s functions. We can add a title and write some text:

# app.py
st.title("🎧 Audio Analysis 📝")
st.write("[Joas](https://huggingface.co/Pontonkid)")

I’d like to add a sidebar to the layout that can hold a description of the app as well as the form control for uploading an audio file. We’ll use the main area of the layout to display the audio transcription and sentiment score.

Here’s how we add a sidebar with Streamlit:

# app.py
st.sidebar.title("Audio Analysis")
st.sidebar.write("The Audio Analysis app is a powerful tool that allows you to analyze audio files and gain valuable insights from them. It combines speech recognition and sentiment analysis techniques to transcribe the audio and determine the sentiment expressed within it.")

And here’s how we add the form control for uploading an audio file:

# app.py
st.sidebar.header("Upload Audio")
audio_file = st.sidebar.file_uploader("Browse", type=["wav"])
upload_button = st.sidebar.button("Upload")

Notice that I’ve set up the file_uploader() so it only accepts WAV audio files. That’s just a preference, and you can specify the exact types of files you want to support. Also, notice how I added an Upload button to initiate the upload process.

Analyzing Audio Files

Here’s the fun part, where we get to extract text from an audio file, analyze it, and calculate a score that measures the sentiment level of what is said in the audio.

The plan is the following:

  1. Configure the tool to utilize a pre-trained NLP model fetched from the Hugging Face models hub.
  2. Integrate Transformers’ pipeline to perform sentiment analysis on the transcribed text.
  3. Print the transcribed text.
  4. Return a score based on the analysis of the text.

In the first step, we configure the tool to leverage a pre-trained model:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"

This points to a model in the hub called DistilBERT. I like it because it’s focused on text classification and is pretty lightweight compared to some other models, making it ideal for a tutorial like this. But there are plenty of other models available on the Hugging Face Hub to consider.

Now we integrate the pipeline() function that does the sentiment analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)

We’ve set that up to perform a sentiment analysis based on the DistilBERT model we’re using.

Next up, define a variable for the results that we get back from the analysis:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)

From there, we’ll assign variables for the score label and the score itself before returning it for use:

# app.py
def perform_sentiment_analysis(text):
  model_name = "distilbert-base-uncased-finetuned-sst-2-english"
  sentiment_analysis = pipeline("sentiment-analysis", model=model_name)
  results = sentiment_analysis(text)
  sentiment_label = results[0]['label']
  sentiment_score = results[0]['score']
  return sentiment_label, sentiment_score

That’s our complete perform_sentiment_analysis() function!

Transcribing Audio Files

Next, we’re going to transcribe the content in the audio file into plain text. We’ll do that by defining a transcribe_audio() function that uses the speech_recognition library to transcribe the uploaded audio file:

# app.py
def transcribe_audio(audio_file):
  r = sr.Recognizer()
  with sr.AudioFile(audio_file) as source:
    audio = r.record(source)
    transcribed_text = r.recognize_google(audio)
  return transcribed_text

We initialize a recognizer object (r) from the speech_recognition library and open the uploaded audio file using the AudioFile class. We then record the audio using r.record(source). Finally, we use the Google Speech Recognition API through r.recognize_google(audio) to transcribe the audio and obtain the transcribed text.

In a main() function, we first check if an audio file is uploaded and the upload button is clicked. If both conditions are met, we proceed with audio transcription and sentiment analysis.

# app.py
def main():
  if audio_file and upload_button:
    try:
      transcribed_text = transcribe_audio(audio_file)
      sentiment_label, sentiment_score = perform_sentiment_analysis(transcribed_text)

Integrating Data With The UI

We have everything we need to display a sentiment analysis for an audio file in our app’s interface. We have the file uploader, a pre-trained language model to analyze the text, a function for transcribing the audio into text, and a way to return a score. All we need to do now is hook it up to the app!

What I’m going to do is set up two headers and a text area from Streamlit, as well as variables for icons that represent the sentiment score results:

# app.py
st.header("Transcribed Text")
st.text_area("Transcribed Text", transcribed_text, height=200)
st.header("Sentiment Analysis")
negative_icon = "👎"
neutral_icon = "😐"
positive_icon = "👍"

Let’s use conditional statements to display the sentiment score based on which label corresponds to the returned result. If the returned label doesn’t match a given condition, we use st.empty() to leave that section blank.

# app.py
if sentiment_label == "NEGATIVE":
  st.write(f"{negative_icon} Negative (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "NEUTRAL":
  st.write(f"{neutral_icon} Neutral (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

if sentiment_label == "POSITIVE":
  st.write(f"{positive_icon} Positive (Score: {sentiment_score})", unsafe_allow_html=True)
else:
  st.empty()

Streamlit has a handy st.info() element for displaying informational messages and statuses. Let’s tap into that to display an explanation of the sentiment score results:

# app.py
st.info(
  "The sentiment score measures how strongly positive, negative, or neutral the feelings or opinions are. "
  "A higher score indicates a positive sentiment, while a lower score indicates a negative sentiment."
)

We should account for error handling, right? If any exceptions occur during the audio transcription and sentiment analysis processes, they are caught in an except block. We display an error message using Streamlit’s st.error() function to inform users about the issue, and we also print the exception traceback using traceback.print_exc():

# app.py
except Exception as ex:
  st.error("Error occurred during audio transcription and sentiment analysis.")
  st.error(str(ex))
  traceback.print_exc()

This code block ensures that the app’s main() function is executed when the script is run as the main program:

# app.py
if __name__ == "__main__":
  main()

It’s common practice to wrap the execution of the main logic within this condition to prevent it from being executed when the script is imported as a module.
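
Pulling the snippets together, the body of main() ends up looking roughly like this condensed sketch (the icon variables are inlined here for brevity):

# app.py (condensed sketch of main())
def main():
  if audio_file and upload_button:
    try:
      transcribed_text = transcribe_audio(audio_file)
      sentiment_label, sentiment_score = perform_sentiment_analysis(transcribed_text)

      st.header("Transcribed Text")
      st.text_area("Transcribed Text", transcribed_text, height=200)

      st.header("Sentiment Analysis")
      if sentiment_label == "POSITIVE":
        st.write(f"👍 Positive (Score: {sentiment_score})")
      elif sentiment_label == "NEGATIVE":
        st.write(f"👎 Negative (Score: {sentiment_score})")
      else:
        st.write(f"😐 Neutral (Score: {sentiment_score})")
    except Exception as ex:
      st.error("Error occurred during audio transcription and sentiment analysis.")
      st.error(str(ex))
      traceback.print_exc()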

Deployments And Hosting

Now that we have successfully built our audio sentiment analysis tool, it’s time to deploy it and publish it live. For convenience, I am using the Streamlit Community Cloud for deployments since I’m already using Streamlit as a UI framework. That said, I do think it is a fantastic platform because it’s free and allows you to share your apps pretty easily.

But before we proceed, there are a few prerequisites:

  • GitHub account
    If you don’t already have one, create a GitHub account. GitHub will serve as our code repository that connects to the Streamlit Community Cloud. This is where Streamlit gets the app files to serve.
  • Streamlit Community Cloud account
    Sign up for a Streamlit Community Cloud account so you can deploy to the cloud.

Once you have your accounts set up, it’s time to dive into the deployment process:

  1. Create a GitHub repository.
    Create a new repository on GitHub. This repository will serve as a central hub for managing and collaborating on the codebase.
  2. Create the Streamlit application.
    Log into Streamlit Community Cloud and create a new application project, providing details like the name and pointing the app to the GitHub repository with the app files.
  3. Configure deployment settings.
    Customize the deployment environment by specifying a Python version and defining environment variables.

That’s it! From here, Streamlit will automatically build and deploy our application when new changes are pushed to the main branch of the GitHub repository. You can see a working example of the audio analyzer I created: Live Demo.

Conclusion

There you have it! You have successfully built and deployed an app that recognizes speech in audio files, transcribes that speech into text, analyzes the text, and assigns a score that indicates whether the overall sentiment of the speech is positive or negative.

We used a tech stack that consists of just an NLP library (Transformers) and a UI framework (Streamlit) with integrated deployment and hosting capabilities. That’s really all we needed to pull everything together!

So, what’s next? Imagine capturing sentiments in real time. That could open up new avenues for instant insights and dynamic applications. It’s an exciting opportunity to push the boundaries and take this audio sentiment analysis experiment to the next level.

Further Reading on Smashing Magazine

How ChatGPT Writes Code for Automation Tool Cypress

In its first week after launch, ChatGPT shattered Internet records by becoming extremely popular. As someone who works in QA automation, my first thought when I started looking into it was how to use this platform to make the jobs of web and UI automation testers simpler.

ChatGPT can be used to write code in a variety of programming languages and technologies. After more investigation, I decided to create some scenarios with it, building use cases around UI, API, and Cucumber feature file generation.

JavaScript Snippets For Better UX and UI

JavaScript can be used to significantly improve the user experience (UX) and user interface (UI) of your website. In this article, we will discuss some JavaScript snippets that you can use to boost the UX and UI of your website.

Smooth Scrolling

Smooth scrolling is a popular UX feature that makes scrolling through web pages smoother and more fluid. With this feature, instead of abruptly jumping to the next section of the page, the user will be smoothly transitioned to the next section.

To add smooth scrolling to your website, you can use the following code (note that this snippet relies on jQuery):

$('a[href*="#"]').on('click', function(e) {
  e.preventDefault()

  $('html, body').animate(
    {
      scrollTop: $($(this).attr('href')).offset().top,
    },
    500,
    'linear'
  )
})

This code will create a smooth scrolling effect whenever the user clicks on a link that includes a # symbol in the href attribute. The code targets all such links and adds a click event listener to them. When the user clicks on a link, the code will prevent the default action of the link (i.e., navigating to a new page) and instead animate the page to scroll smoothly to the section of the page specified by the link’s href attribute.
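
If you’d rather avoid the jQuery dependency, a rough vanilla equivalent can be built on the native scrollIntoView API (this sketch assumes same-page anchor links):

// Dependency-free smooth scrolling for same-page anchor links
document.querySelectorAll('a[href^="#"]').forEach(function (link) {
  link.addEventListener('click', function (e) {
    var href = link.getAttribute('href')
    // Skip bare "#" links, which don't point at a section
    if (href.length < 2) return
    var target = document.querySelector(href)
    if (target) {
      e.preventDefault()
      target.scrollIntoView({ behavior: 'smooth' })
    }
  })
})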

Dropdown Menus

Dropdown menus are a common UI element that can help to organize content and improve the navigation of your website. With JavaScript, you can create dropdown menus that are easy to use and intuitive for your users.

To create a basic dropdown menu with JavaScript, you can use the following code:

var dropdown = document.querySelector('.dropdown')
var dropdownToggle = dropdown.querySelector('.dropdown-toggle')
var dropdownMenu = dropdown.querySelector('.dropdown-menu')

dropdownToggle.addEventListener('click', function() {
  if (dropdownMenu.classList.contains('show')) {
    dropdownMenu.classList.remove('show')
  } else {
    dropdownMenu.classList.add('show')
  }
})

This code will create a simple dropdown menu that can be toggled by clicking on a button with the class dropdown-toggle. When the button is clicked, the code will check if the dropdown menu has the class show. If it does, the code will remove the class, hiding the dropdown menu. If it doesn’t, the code will add the class, showing the dropdown menu.
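
As a side note, the if/else above could be collapsed into a single call with dropdownMenu.classList.toggle('show'); the longer form is used here because it makes the two states explicit.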

Modal Windows

Modal windows are another popular UI element that can be used to display important information or to prompt the user for input. With JavaScript, you can create modal windows that are responsive, accessible, and easy to use.

To create a basic modal window with JavaScript, you can use the following code:

var modal = document.querySelector('.modal')
var modalToggle = document.querySelector('.modal-toggle')
var modalClose = modal.querySelector('.modal-close')

modalToggle.addEventListener('click', function() {
  modal.classList.add('show')
})

modalClose.addEventListener('click', function() {
  modal.classList.remove('show')
})

This code will create a modal window that can be toggled by clicking on a button with the class modal-toggle. When the button is clicked, the code will add the class show to the modal window, displaying it on the page. When the close button with the class modal-close is clicked, the code will remove the show class, hiding the modal window.
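
As a small accessibility touch, you might also let users dismiss the modal with the Escape key. A minimal sketch building on the variables above:

// Close the modal when the Escape key is pressed
document.addEventListener('keydown', function (e) {
  if (e.key === 'Escape') {
    modal.classList.remove('show')
  }
})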

Sliders

Sliders are a popular UI element that can be used to display images or other types of content in a visually appealing and engaging way. With JavaScript, you can create sliders that are easy to use and customizable to fit your website’s design.

To create a basic slider with JavaScript, you can use the following code:

var slider = document.querySelector('.slider')
var slides = slider.querySelectorAll('.slide')
var prevButton = slider.querySelector('.prev')
var nextButton = slider.querySelector('.next')
var currentSlide = 0

function showSlide(n) {
  slides[currentSlide].classList.remove('active')
  slides[n].classList.add('active')
  currentSlide = n
}

prevButton.addEventListener('click', function() {
  var prevSlide = currentSlide - 1
  if (prevSlide < 0) {
    prevSlide = slides.length - 1
  }
  showSlide(prevSlide)
})

nextButton.addEventListener('click', function() {
  var nextSlide = currentSlide + 1
  if (nextSlide >= slides.length) {
    nextSlide = 0
  }
  showSlide(nextSlide)
})

This code will create a slider that can be navigated by clicking on buttons with the classes prev and next. The code uses the showSlide function to hide the current slide and show the newly selected one whenever the slider is navigated.

Form Validation

Form validation is an essential UX feature that can help to prevent errors and improve the usability of your website’s forms. With JavaScript, you can create form validation that is responsive and user-friendly.

To create form validation with JavaScript, you can use the following code:

var form = document.querySelector('form')

form.addEventListener('submit', function(e) {
  e.preventDefault()
  var email = form.querySelector('[type="email"]').value
  var password = form.querySelector('[type="password"]').value

  if (!email || !password) {
    alert('Please fill in all fields.')
  } else if (password.length < 8) {
    alert('Your password must be at least 8 characters long.')
  } else {
    alert('Form submitted successfully!')
  }
})

This code will validate a form’s email and password fields when the form is submitted. If either field is empty, the code will display an alert message prompting the user to fill in all fields. If the password field is less than 8 characters long, the code will display an alert message prompting the user to enter a password that is at least 8 characters long. If the form passes validation, the code will display an alert message indicating that the form was submitted successfully.

In conclusion, JavaScript is a powerful tool that can be used to enhance the UX and UI of your website. By using these JavaScript snippets, you can create a more engaging and user-friendly experience for your users. However, it is important to use these JavaScript snippets wisely and sparingly to ensure that they do not negatively impact the performance of your website.

How AI Technology Will Transform Design

AI-generated art is everywhere on the web. If you are an active Instagram, Twitter, or Pinterest user, you have likely seen interesting artworks created using text-based tools like DALL·E, Midjourney, or Stable Diffusion. The magic of these tools is that to generate images, all you need to do is provide a string of text describing what the image is all about. Many AI-generated works look stunning, but it’s only the beginning. In the foreseeable future, AI tools will be so intuitive that everyone will be able to express their ideas. The rise of tools that have AI at their core makes design practitioners wonder if AI will replace designers.

In this article, we will overview the current state of design, answer common questions designers have about AI tools, and share practical tips on how designers can make the most of using AI tools.

Design Tools Learning Curve And Creativity

Mastering any skill takes time, and design is no exception. Designers have a lot of great tools in their arsenal, but the process of honing design talent takes years. You need to invest years of your life to get to the point when you can create decent artwork.

Human-made design: glass reflection CGI. A few seconds of footage took 87 hours to render on five RTX GPUs. (Image by Gleb Kuznetsov)

No matter how creative you are, you must spend time creating things by hand. Most of the time, it’s impossible to go from idea to solution in a few minutes. As a result, sometimes it feels like design is 95% craft and only 5% art.

Much energy goes into the visualization of ideas, and it can be very frustrating to learn that your idea doesn’t resonate with the audience. Once you publish your work, you might learn that it’s not something your audience wants. An unsuccessful design pitch means your work goes straight to the garbage bin.

But in the near future, you will be able to use shortcuts and go from your idea to the final work in a minute rather than hours or days. You will be able to avoid the tedious process of physically making art and instead become a visionary who tells the computer what you want to build and lets the computer do the work for you. And you can experience the power of AI tools even today with DALL·E 2 by OpenAI, Midjourney, or Stable Diffusion.

Let’s answer a few popular questions that designers have regarding AI.

Can I Take Credit For Artworks Created By AI?

The answer is yes, you can, but you shouldn’t. Many AI artwork generation tools available on the market don’t give designers much freedom to control the process of artwork creation. As a designer, you explain your intention to the AI system through plain words and let the tool do its magic. You have limited or no information on how the tool works.

Because modern AI tools don’t give you much freedom to impact the design direction, the final result misses the human touch. Right now, you cannot convey a lot of personality in works generated by AI tools. At the same time, it doesn’t mean this will be true in the future. We will likely see the tools that give designers more control over the process of creating visual assets.

Will AI Take My Job?

Many professional artists panic because they see how good artificial intelligence has become at creating artwork. AI-generated art fills the market and takes potential clients. Instead of hiring a human digital artist, many companies ‘hire’ AI to do the job because it can do design work for a fraction of the cost. This trend not only takes jobs but also lowers the market value of the art — the artworks become less valuable because people see how easy it is to generate artwork using AI.

What happens right now is a predictable situation. It’s just how business works. If a business can save money by following a more effective approach, it will do it. During the industrial revolution of the 19th century, some English textile workers intentionally destroyed textile machines because they were afraid that machines would replace them. Of course, machines replaced some of the roles (typically, roles where heavy lifting or monotonous work was required), but they didn't replace humans. The same is true for AI tools. AI won’t completely replace human ingenuity; it will complement human potential.

The true power of AI is not about replacing humans but instead giving them a massive boost in productivity.

If you think about the primary reason why people invented new tools in the first place, it becomes evident that work efficiency was the number one motivation, and the same goes for AI. AI will help us work more efficiently.

The quality of your ideas and your ability to understand user problems and create solutions that help people are critically important in any era of product design, including the era of AI-driven design.

Will AI Tools Lead Us To Generic Design?

When designers use the same tools and data inputs, they can easily end up producing homogenized designs that look generic.

But the problem of homogenized design is not new. The Dribbblisation of design has been a massive topic in the design field for a few years. Many people in the industry worry that the vast majority of product design work on Dribbble looks the same, with the same styles applied over and over.

Will the problem become worse as AI tools are popularized? The answer is no. If you look closer at the artists who publish their work on Dribbble, you will notice that there aren’t many artists who set trends. Once a new trend emerges and resonates with the audience, many designers start to follow it and produce designs that look trendy.

“Out in the sun, some painters are lined up. The first is copying nature; the second is copying the first; the third is copying the second.”
— Paul Gauguin

AI tools won’t replace all designers anytime soon because imagination and creativity remain powerful properties of the artist’s mind. Until AI technology becomes sophisticated enough to do creative thinking, we don’t have to worry about AI setting the trends. The point is, soon it will be possible to curate the data you provide as input to the system, and the AI system will learn from you, so the results will carry a lot of your personality.

Can We Face Legal Troubles Using AI Tools?

Early in 2023, a group of artists filed a class-action lawsuit against Midjourney and Stability AI, claiming copyright infringement. Both Midjourney and Stability AI were trained using billions of internet images, and the suit alleges that the companies behind those tools “violated the rights of millions of artists” who created the original images. Whether or not AI art tools violate copyright law can be challenging to determine because the database used for training is massive (billions of images). But one thing is for sure: the AI tools create new images based on the knowledge they gained during training.

Can designers face legal troubles for using AI tools in the future? So far, there is no single correct answer to this question, but the world is quickly embracing AI art (e.g., stock photo banks are starting to sell AI-generated stock imagery), and we will likely have clearer rules on how to use AI-generated images in the future.

The New Chapter In Design: Co-creation With AI

When Steve Jobs explained the power of computers, he said,

“What a computer is to me is it's the most remarkable tool that we've ever come up with, and it's the equivalent of a bicycle for our minds.”
— Steve Jobs

It’s possible to rephrase this quote in the context of AI, saying that AI is a bicycle for our creativity — our ability to create something new. Creativity is based on life experiences and ideas that creators have. AI cannot replace humans because it uses the work that humans create as an input to produce new designs. But AI can boost creativity greatly because it becomes a sort of ‘second brain’ that works with a creator and provides new inputs.

Of course, modern AI tools don’t give us much freedom to tweak the AI engine, but they still give us a lot of power. They can provide us with ideas we didn't think of. It makes AI an excellent tool for discovery and exploration.

Here are just a few directions of how humans and machines can work together in the future:

Conduct Visual Exploration

AI tools capture the collective experience of millions of images from photo banks and give creators a unique opportunity to quickly explore the desired direction without spending too much energy. AI becomes your creative assistant during the process of visual exploration. You prompt the system with various directions you want to pursue and let the system generate various outcomes for you. You evaluate each direction and choose the best to pursue. The process of co-creation can be iterative. For example, once you see a particular design direction, you can tell the system to dive into it to explore it.

There are two ways you can approach visual exploration, either by following text-to-image or image-to-image scenarios.

In a text-to-image scenario, you provide a prompt and tweak some settings to produce an image. Let’s discuss the most important properties of this scenario:

  • Prompt
    A prompt is a text string that we submit to the system so that it can create an image for you. Generally, the more specific details you provide, the better results the system will generate for you. You can use resources like Lexica to find a relevant prompt.
  • Steps
    Think of steps as iterations of the image creation process. During the first steps, the image looks very noisy, and many elements in the image are blurry. The system refines it with every iteration by altering the visual details of the image. If you use Stable Diffusion, set steps to 60 or more.

  • Seed
    You can use the Seed number to create a close copy of a specific picture. For example, if you want to generate a copy of the image you saw on Lexica, you need to specify the prompt and seed number of this image.

In the image-to-image (img2img) scenario, AI will use your image as a source and produce variations of the image based on it. For example, here is how we can use a famous painting, Under the Wave off Kanagawa, as a source for Stable Diffusion.

We can play with Image Strength by setting it close to 0 so that AI can have more freedom in the way it can interpret the image. As you can see below, the image that the system generated for us has only a few visual attributes of the original image.

Or set Image Strength up to 95% so that AI can only create a slightly different version of the original image.

Our experiment clearly shows that AI tools have an opportunity to replace mood boards. You no longer need to create mood boards manually (using tools like Pinterest, for instance) but can instead tell the system to find the ideas you want to explore.

Create A Complete Design For Your Product

AI can be an excellent tool to implement ideas quickly. Today we have a long and painful product design process. Going from idea to implementation takes weeks. But with AI, it can take minutes. You can create a storyboard with your product, specify the context of use for your future product, and let AI design a product.

Providing these details is important because AI should understand the nature of the problem you’re trying to solve with this design. For example, below are the visuals that you can create right now using a tool called Midjourney. All you need to do is to specify the text prompt “mobile app UI design, hotel booking, Dribbble, Behance --v 4 --q 2”.

I think the part “mobile app UI design, hotel booking, Dribbble, Behance” is self-explanatory. But you might wonder what --v and --q mean.

  • --v specifies the version of Midjourney.
    On November 10, 2022, the alpha iteration of version 4 was released to users.
  • --q specifies quality.
    This setting specifies how much rendering quality time you want to spend. The default value is 1. Creating the image with higher values takes more time and costs more.

It’s important to mention a couple of common issues that images generated by Midjourney have:

  • Gibberish texts
    You likely noticed that the text on the mobile app screens in the above example is gibberish rather than readable English.
  • Extra fingers
    If you generate an image of a person, you will likely see extra fingers on their hands.

In the foreseeable future, a design created by AI will automatically inherit all industry best practices, freeing designers from time-consuming activities like UI design audits. AI tools will significantly speed up the user research and design exploration phase because the tools analyze massive amounts of data and can easily provide relevant details for a particular product (i.e., create a user persona, draft a user journey, and so on). As a result, it will be possible to develop new products right during brainstorming sessions, so designers are no longer limited to low-fidelity wireframes or paper sketches. The product team members will be able to see how the product will look and work right during the session.

Create Virtual Worlds And Virtual People In It

No doubt the metaverse will be the next big thing. It will be the most sophisticated digital platform humans have ever created, and content production will be an integral part of the platform design. Designers will have to find ways to speed up the creation of virtual environments and activities in them. At first, designers will likely try to recreate real-world places in the virtual world, but after that, they will rely on AI to do the rest. The role of designers in the metaverse will be more like that of a director (a person who tailors the results) rather than a craftsman who makes everything by hand. Imagine being able to create large virtual areas such as cities and getting a sense of their scale by experiencing them.

It’s Time To Open A New Chapter In Design

AI-powered design solutions have an opportunity to become much more than just tools designers use to create assets. They have the chance to become a natural extension of the team. I believe that the true power of AI tools will shine when the tools learn from a creator and are able to reflect the creator’s personality in the final design. Next-gen AI will learn both about you and from you and create works whose functionality and aesthetics meet your needs and taste. As a result, the output the tools produce will have a more authentic human fingerprint.

The future of design is bright because technology will allow more people to express their creativity and make our world more interesting and richer.

Further Reading On SmashingMag

Building Future-Proof High-Performance Websites With Astro Islands And Headless CMS

This article is sponsored by Storyblok

Nowadays, web performance is one of the crucial factors for the success of any online project. Most of us have probably experienced leaving a website due to its unbearable slowness. This is certainly frustrating for a website’s user, but even more so for its owner: in fact, there is a direct correlation between web performance and business revenue, which has been corroborated time and again in a plethora of case studies.

As developers, optimizing web performance must therefore be an integral part of our value proposition. However, before moving on, let’s actually define the term. According to the MDN Web Docs, “web performance is the objective measurement and perceived user experience of a website or application”. Primarily, it involves optimizing a site’s initial overall load time, making the site interactive as soon as possible, and ensuring it is enjoyable to use throughout.

Achieving an excellent measurable performance as well as an outstanding perceived performance certainly constitutes a potentially very strenuous challenge for developers, especially when dealing with increasingly complex, large-scale websites. Fortunately, while it is easy to get lost in the subtle details of performance optimization measures, there are a few factors that should be the focus points of our efforts due to their extraordinarily high impact. One of these is image optimization, a topic that has been thoroughly covered in ‘A Guide To Image Optimization On Jamstack Sites’ by my colleague Alba Silvente.

Another key factor? Shipping less JavaScript (JS). A large JS bundle takes longer to be transmitted, parsed, and executed. As a consequence, the initial page load and Time to Interactive can be delayed quite significantly. In recent years, we have witnessed the rise of extremely powerful JS frontend frameworks that offer client-side rendering and strive for an app-like experience. While their versatility, their features, and their developer experience are impressive by all means, they all share one major disadvantage in regard to performance optimization: their JS bundle size is comparably heavy, negatively impacting both the initial page load and Time to Interactive quite substantially.

Depending on the type of your project, the question arises whether a less JS-centric approach might be feasible. In fact, if you think of your average content-driven marketing website, you would probably conclude that only a fraction of the functionality actually relies on JavaScript, whereas the majority of the site could probably be rendered as static HTML.

And that is precisely where Astro enters the game, shipping zero JS by default and letting you partially hydrate only those components that de facto rely on interactivity. Importantly, Astro accomplishes all of that without sacrificing the wonderful developer experience (DX) that we have been getting spoiled by, but actually even improving it. Let’s take a closer look.

Introducing Astro

Astro defines itself as an “all-in-one web framework for building fast, content-focused websites”. One of its key features is that it replaces unused JS with HTML on the server, effectively resulting in zero JavaScript runtime out-of-the-box. This, in turn, leads to very fast load times and quicker interactivity. Notably, Astro explicitly states that it is specifically designed for content-driven websites, such as marketing, documentation, or eCommerce sites. The Astro team transparently acknowledges that other frameworks may be a much better fit if your project classifies as a web application rather than a mostly content-driven site.

Moreover, Astro provides a powerful Islands architecture that utilizes the technical concept of partial hydration. In a nutshell, this allows you to hydrate only those components that you actually need to be interactive. Importantly, this happens in isolation, leaving the rest of the site as static HTML. All in all, the impact on web performance is huge, making developers’ lives a lot easier along the way. And it gets even better: it is possible to bring your own framework. Thus, you could effortlessly use, for example, Vue, Svelte or React components in your Astro project.

Speaking of isolated islands, it is worth pointing out that developers actually rarely work alone: most larger-scale web projects typically rely on close collaboration between teams of developers and content creators. Therefore, let’s explore how going Headless with Storyblok can improve the experience and productivity of everyone involved.

Introducing Storyblok

Storyblok is a powerful headless CMS that meets the requirements of developers and content creators alike. Completely framework-agnostic, you can connect Storyblok and your favorite technology within minutes. Storyblok’s Visual Editor allows you to create and manage your content with ease, even when dealing with complex layouts. Furthermore, localizing and personalizing your content becomes a breeze. Beyond that, Storyblok’s API-first design allows you to create outstanding cross-platform experiences.

Let’s explore in a case study how we can effectively combine the power of Astro and Storyblok.

Case Study: Interactive Components In Storyblok And Astro

In this example, we will create a simple landing page consisting of a hero component and a tabbed content component. Whereas the former will be a basic Astro component, the latter will be rendered as a dynamic island. In order to demonstrate the flexibility of this technology stack, we will examine how to render the tabbed content component using both Vue and Svelte.

This is what we will build:


Step 1: Create The Astro Project And The Storyblok Space

Once we’ve created an account on Storyblok (the Community plan is free forever), we can create a new space.

Now, we can copy and run Storyblok’s CLI command to quickly create a project that is connected to your fresh new space:

npx @storyblok/create-demo@latest --key <your-access-token>

You can copy the complete command, including your personal access token, from the Get Started section of your space:

In the scaffolding steps, choose Astro, the package manager of your choice, the region of your space, and a local folder for your project. Now, in your chosen folder, you can run npm install && npm run dev to install all dependencies and launch the development server.

For the Storyblok Visual Editor to work, we need to go to Settings > Visual Editor and specify https://127.0.0.1:3000/ as the default environment.

Next, let’s go to the Content section and open our Home story. Here, we need to open the Entry configuration and set the Real path to / in order for src/pages/index.astro to be able to load this story correctly.

After having saved, you should now see the page being rendered correctly in the Visual Editor.

Perfect, we’re ready to move on.

Step 2: Create The Hero Component In Storyblok

In the Block Library, which you can easily access from within your Home story, you will find four default components. Let’s delete all of the nestable blocks (Grid, Teaser, and Feature). For our case study, we just need the Page content type block.

Note: In order to learn more about nestable and content type blocks, you can read the Structures of Content tutorial.

Now, we can create the first component that we will need for our case study: a nestable block called Hero (hero) with the following fields:

  • caption (Text)
  • image (Asset > Images)

Next, close the Block Library, delete the instances of the Teaser and Grid blocks, and create a Hero, providing any caption and image of your choice.

Step 3: Create The Hero Component In Astro

The next step is to create a matching counterpart for our Hero component in our Astro project. Let’s open up the project.

First of all, let’s modify astro.config.mjs in order to register our Hero component properly:

storyblok({
  accessToken: '<your-access-token>', // ideally, you would want to use an environment variable for the token
  components: {
    page: 'storyblok/Page',
    hero: 'storyblok/Hero',
  },
})
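
As the comment above hints, a cleaner approach is to keep the token out of the config file entirely. Here’s a quick sketch, assuming the token is stored in a .env file under a hypothetical STORYBLOK_TOKEN key:

// astro.config.mjs (sketch): load the token from the environment
import { loadEnv } from 'vite';

const env = loadEnv('', process.cwd(), '');

storyblok({
  accessToken: env.STORYBLOK_TOKEN,
  components: {
    page: 'storyblok/Page',
    hero: 'storyblok/Hero',
  },
})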

Next, let’s delete the Grid, Feature and Teaser components in src/storyblok and create a new src/storyblok/Hero.astro component with the following content:

---
import { storyblokEditable } from '@storyblok/astro'

const { blok } = Astro.props
---

<section
  {...storyblokEditable(blok)}
  class='relative w-full h-[50vh] min-h-[400px] max-h-[800px] flex items-center justify-center'
>
  <h2 class='relative z-10 text-white text-7xl'>{blok.caption}</h2>
  <img
    src={blok.image?.filename}
    alt={blok.image?.alt}
    class='absolute top-0 left-0 object-cover w-full h-full z-0'
  />
</section>

Having taken care of that, the Hero block should now be displayed correctly in your Home story. In this particular case, we are using a native Astro component, which means that this component will be rendered as static HTML, requiring zero JS!

Amazing, but what happens if you actually need interactivity on your frontend? This is precisely where dynamic islands come into play, which we will explore next.

Step 4: Create The Tabbed Content Component In Storyblok

Let’s proceed by creating the blocks that we need for our tabbed content component, which will have a slightly more complex setup.

First of all, we want to create a new nestable block Tabbed Content Entry (tabbed_content_entry) with the following fields:

  • headline (Text)
  • description (Textarea)
  • image (Asset > Images)

This nestable block will be used in a superordinate nestable block called Tabbed Content (tabbed_content) consisting of these fields:

  • entries (Blocks > Allow only tabbed_content_entry components to be inserted)
  • directive (Single-Option > Source: Self) with the key-value pairs load → load, idle → idle, and visible → visible (Default: idle)

The entries field is used to allow nesting of the previously created Tabbed Content Entry nestable blocks. To prevent arbitrary blocks from being inserted, we can limit it to blocks of the type tabbed_content_entry.

Additionally, the directive field is used to take advantage of Astro’s client directives, which determine if and when a framework component should be hydrated. Utilizing the single-option field type in Storyblok enables content creators to choose whether this particular instance of the component should be hydrated with the highest priority (load), after the initial page load has been completed (idle), or as soon as the component instance actually enters the viewport (visible).

Utilizing Astro’s visible directive would result in the biggest performance gain as long as the component is below the fold. As the default option, we will use Astro’s idle directive, which hydrates the component as soon as the browser is idle after the initial page load has completed. However, in all cases, the rest of our landing page will remain static HTML. As a result, the out-of-the-box performance should theoretically always be superior compared to alternative frameworks.

Before moving on, we can use our newly created Tabbed Content component and insert three example entries in the entries field.

Step 5: Create The Tabbed Content Component In Astro

First of all, let’s register our Tabbed Content component in our astro.config.mjs:

storyblok({
  accessToken: '<your-access-token>',
  components: {
    page: 'storyblok/Page',
    hero: 'storyblok/Hero',
    tabbed_content: 'storyblok/TabbedContent',
  },
}),

Next, let’s create storyblok/TabbedContent.astro with the following preliminary content:

---
import { storyblokEditable } from '@storyblok/astro'

const { blok } = Astro.props
---

<section {...storyblokEditable(blok)}></section>

This will serve as our wrapper component, wherein we can subsequently import the actual component using the UI framework of our choice and dynamically assign a client directive derived from the value we receive from Storyblok.

Step 6: Render The Tabbed Content Component Using Vue

With everything in place, we can now start building our tabbed content component using Vue. First, we need to install Vue in our project. Fortunately, Astro makes that very simple for us. All we have to do is to run the following command:

npx astro add vue

Next, let’s create a new Vue component (storyblok/TabbedContent.vue) with the following content:

<script setup lang="ts">
import { ref } from 'vue'
const props = defineProps({ blok: Object })

const activeTab = ref(0)

const setActiveTab = (index) => {
  activeTab.value = index
}

const tabWidth = ref(100 / props.blok.entries.length)
</script>

<template>
  <ul class="relative border-b border-gray-900 mb-8 flex">
    <li
      v-for="(entry, index) in blok.entries"
      :key="entry._uid"
      :style="'width:' + tabWidth + '%'"
    >
      <button
        @click.prevent="setActiveTab(index)"
        class="cursor-pointer p-3 text-center"
        :class="index === activeTab ? 'font-bold' : ''"
      >
        {{ entry.headline }}
      </button>
    </li>
  </ul>
  <section
    v-for="(entry, index) in blok.entries"
    :key="entry._uid"
    :id="'entry-' + entry._uid"
  >
    <div v-if="index === activeTab" class="grid grid-cols-2 gap-12">
      <div>
        <p>{{ entry.description }}</p>
        <a
          :href="entry.link?.cached_url"
          class="inline-flex bg-gray-900 text-white py-3 px-6 mt-6"
          >Explore {{ entry.headline }}</a
        >
      </div>
      <img :src="entry.image?.filename" :alt="entry.image?.alt" />
    </div>
  </section>
</template>

<style scoped>
ul:after {
  content: '';
  @apply absolute bottom-0 left-0 h-0.5 bg-gray-900 transition-all duration-500;
  width: v-bind(tabWidth + '%');
  margin-left: v-bind(activeTab * tabWidth + '%');
}
</style>

Finally, we can import this component in TabbedContent.astro, pass the whole blok object as a property and assign the client directive based on the value we receive from Storyblok.

---
import { storyblokEditable } from '@storyblok/astro'
import TabbedContent from './TabbedContent.vue'

const { blok } = Astro.props
---

<section {...storyblokEditable(blok)} class='container py-12'>
  {blok.directive === 'load' && <TabbedContent blok={blok} client:load />}
  {blok.directive === 'idle' && <TabbedContent blok={blok} client:idle />}
  {blok.directive === 'visible' && <TabbedContent blok={blok} client:visible />}
</section>

Since we would like to give content creators the ability to choose between different directives, our Astro wrapper component is the right place to map the value we retrieve from Storyblok to the matching client directive.

Our tabbed content component will now be rendered correctly. Using Astro’s dynamic islands and hydration directives can tremendously boost your site’s performance, and combined with Storyblok, you provide content creators with straightforward and easy-to-use possibilities to tap into the power of this next-gen approach.

Let’s conclude our case study by examining how to render the very same component with Svelte (or any other popular framework supported by Astro).

Step 7: Render The Tabbed Content Component Using Svelte

First of all, as before, we need to install Svelte in our Astro project. Again, we can easily accomplish that by running the following command:

npx astro add svelte

Now, we can create the Svelte component (storyblok/TabbedContent.svelte) with the following content:

<script>
  export let blok

  let tabWidth = 100 / blok.entries.length
  let activeTab = 0
  let marginLeft = 0

  const setActiveTab = (index) => {
    activeTab = index
    marginLeft = activeTab * tabWidth
  }
</script>

<ul
  class="relative border-b border-gray-900 mb-8 flex"
  style="--tab-width: {tabWidth}%; --margin-left: {marginLeft}%;"
>
  {#each blok.entries as entry, index (entry._uid)}
    <li style="width: var(--tab-width)">
      <button
        class="{index === activeTab
          ? 'font-bold'
          : ''} w-full cursor-pointer p-3 text-center"
        on:click={() => setActiveTab(index)}>{entry.headline}</button
      >
    </li>
  {/each}
</ul>
{#each blok.entries as entry, index (entry._uid)}
  {#if index === activeTab}
    <section id={entry._uid}>
      <div class="grid grid-cols-2 gap-12">
        <div>
          <p>{entry.description}</p>
          <a
            href={entry.link?.cached_url}
            class="inline-flex bg-gray-900 text-white py-3 px-6 mt-6"
            >Explore {entry.headline}</a
          >
        </div>
        <img src={entry.image?.filename} alt={entry.image?.alt} />
      </div>
    </section>
  {/if}
{/each}

<style>
  ul:after {
    content: '';
    @apply absolute bottom-0 left-0 h-0.5 bg-gray-900 transition-all duration-500;
    width: var(--tab-width);
    margin-left: var(--margin-left);
  }
</style>

The only change we have to make in order to load the Svelte component instead of the Vue component is the import in TabbedContent.astro:

//import TabbedContent from './TabbedContent.vue'
import TabbedContent from './TabbedContent.svelte'

And that’s it! Everything else can remain the same. Amazingly, our tabbed content component still works but is now using Svelte instead of Vue. Since Astro makes it possible to pass down the blok object, containing all of the data coming from Storyblok, as a property to the different framework components, we can simply reuse all of the information in various environments.

Wrapping Up

With Astro, you as a developer benefit from phenomenal DX, mind-blowing performance out of the box, and a high degree of flexibility thanks to the ability to bring your own component framework (or even combine multiple component frameworks) and the availability of integrations. Moreover, Astro is highly future-proof: Considering moving from Vue to Svelte? From React to Vue? Astro makes the transition seamless, keeping the foundation of your project the same.

With Storyblok, your clients or colleagues from the content marketing team get to enjoy a high degree of autonomy and flexibility, effectively utilizing the full potential of your Astro code base. Landing pages can be created in a matter of minutes, and dynamic, interactive components will have no negative impact on performance.

Taking everything into account, Astro and Storyblok may very well be the last technology stack you will ever need for your content-driven website projects.

The Key To Good Component Design Is Selfishness

When developing a new feature, what determines whether an existing component will work or not? And when a component doesn’t work, what exactly does that mean?

Does the component functionally not do what it’s expected to do, like a tab system that doesn’t switch to the correct panel? Or is it too rigid to support the designed content, such as a button with an icon after the content instead of before it? Or perhaps it’s too pre-defined and structured to support a slight variant, like a modal that always had a header section, now requiring a variant without one?

Such is the life of a component. All too often, they’re built for a narrow objective, then hastily extended for minor one-off variations again and again until they no longer work. At that point, a new component is created, the technical debt grows, the onboarding learning curve becomes steeper, and maintaining the codebase becomes more challenging.

Is this simply the inevitable lifecycle of a component? Or can this situation be averted? And, most importantly, if it can be averted, how?

Selfishness. Or perhaps, self-interest. Better yet, maybe a little bit of both.

Far too often, components are far too considerate. Too considerate of one another and, especially, too considerate of their own content. In order to create components that scale with a product, the name of the game is self-interest bordering on selfishness — cold-hearted, narcissistic, the-world-revolves-around-me selfishness.

This article isn’t going to settle the centuries-old debate about the line between self-interest and selfishness. Frankly, I’m not qualified to take part in any philosophical debate. However, what this article is going to do is demonstrate that building selfish components is in the best interest of every other component, designer, developer, and person consuming your content. In fact, selfish components create so much good around them that you could almost say they’re selfless.

I don’t know 🤷‍♀️ Let’s look at some components and decide for ourselves.

Note: All code examples and demos in this article will be based on React and TypeScript. However, the concepts and patterns are framework agnostic.

The Consideration Iterations

Perhaps, the best way to demonstrate a considerate component is by walking through the lifecycle of one. We’ll be able to see how they start small and functional but become unwieldy once the design evolves. Each iteration backs the component further into a corner until the design and needs of the product outgrow the capabilities of the component itself.

Let’s consider the modest Button component. It’s deceptively complex and quite often trapped in the consideration pattern, which makes it a great example to work through.

Iteration 1

While these sample designs are quite barebones, like not showing various :hover, :focus, and disabled states, they do showcase a simple button with two color themes.

At first glance, it’s possible the resulting Button component could be as barebones as the design.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  text: string;
  theme: 'primary' | 'secondary';
}
<Button
  onClick={someFunction}
  text="Add to cart"
  theme="primary"
/>

It’s possible, and perhaps even likely, that we’ve all seen a Button component like this. Maybe we’ve even made one like it ourselves. Some of the namings may be different, but the props, or the API of the Button, are roughly the same.

In order to meet the requirements of the design, the Button defines props for the theme and text. This first iteration works and meets the current needs of both the design and the product.

However, the current needs of the design and product are rarely the final needs. When the next design iterations are created, the Add to cart button now requires an icon.

Iteration 2

After validating the UI of the product, it was decided that adding an icon to the Add to cart button would be beneficial. The designs explain, though, that not every button will include an icon.

Returning to our Button component, its props can be extended with an optional icon prop which maps to the name of an icon to conditionally render.

type ButtonProps = {
  theme: 'primary' | 'secondary';
  text: string;
  icon?: 'cart' | '...all-other-potential-icon-names';
}
<Button
  theme="primary"
  onClick={someFunction}
  text="Add to cart"
  icon="cart"
/>
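
To see what that conditional costs, here is a hypothetical look inside this iteration’s Button; the markup, class names, and the Icon component are illustrative assumptions, not the article’s actual code:

// Reusing the ButtonProps type above; native attributes are assumed to be
// spread through, as in the first iteration.
export function Button({ theme, text, icon, ...rest }: ButtonProps &
  React.ComponentPropsWithoutRef<"button">) {
  return (
    <button className={`button button--${theme}`} {...rest}>
      {text}
      {/* The icon is baked in as a conditional, always rendered after the text. */}
      {icon && <Icon name={icon} />}
    </button>
  );
}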

Whew! Crisis averted.

With the new icon prop, the Button can now support variants with or without an icon. Of course, this assumes the icon will always be shown at the end of the text, which, to the surprise of nobody, is not the case when the next iteration is designed.

Iteration 3

The previous Button component implementation included the icon at the text’s end, but the new designs require an icon to optionally be placed at the start of the text. The single icon prop will no longer fit the needs of the latest design requirements.

There are a few different directions that can be taken to meet this new product requirement. Maybe an iconPosition prop can be added to the Button. But what if there comes a need to have an icon on both sides? Maybe our Button component can get ahead of this assumed requirement and make a few changes to the props.

The single icon prop will no longer fit the needs of the product, so it’s removed. In its place, two new props are introduced, iconAtStart and iconAtEnd.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
}

After refactoring the existing uses of Button in the codebase to use the new props, another crisis is averted. Now, the Button has some flexibility. It’s all hardcoded and wrapped in conditionals within the component itself, but surely, what the UI doesn’t know can’t hurt it.

Up until this point, the Button icons have always been the same color as the text. It seems reasonable and like a reliable default, but let’s throw a wrench into this well-oiled component by defining a variation with a contrasting color icon.

Iteration 4

In order to provide a sense of feedback, this confirmation UI stage was designed to be shown temporarily when an item has been added to the cart successfully.

Maybe this is a time when the development team chooses to push back against the product requirements. But despite the push, the decision is made to move forward with providing color flexibility to Button icons.

Again, multiple approaches can be taken for this. Maybe an iconClassName prop is passed into the Button to have greater control over the icon’s appearance. But there are other product development priorities, and instead, a quick fix is done.

As a result, an iconColor prop is added to the Button.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
}

With the quick fix in place, the Button icons can now be styled differently than the text. The UI can provide the designed confirmation, and the product can, once again, move forward.

Of course, as product requirements continue to grow and expand, so do their designs.

Iteration 5

With the latest designs, the Button must now be used with only an icon. This can be done in a few different approaches, yet again, but all of them require some amount of refactoring.

Perhaps a new IconButton component is created, duplicating all other button logic and styles into two places. Or maybe that logic and styles are centralized and shared across both components. However, in this example, the development team decides to keep all the variants in the same Button component.

Instead, the text prop is marked as optional. This could be as quick as marking it as optional in the props but could require additional refactoring if there’s any logic expecting the text to exist.

But then comes the question, if the Button is to have only an icon, which icon prop should be used? Neither iconAtStart nor iconAtEnd appropriately describes the Button. Ultimately, it’s decided to bring the original icon prop back and use it for the icon-only variant.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
  icon?: 'cart' | '...all-other-potential-icon-names';
  text?: string;
}

Now, the Button API is getting confusing. Maybe a few comments are left in the component to explain when and when not to use specific props, but the learning curve is growing steeper, and the potential for error is increasing.

For example, without adding great complexity to the ButtonProps type, there is nothing stopping a person from using the icon and text props at the same time. This could either break the UI or be resolved with greater conditional complexity within the Button component itself. Additionally, the icon prop can be used with either or both of the iconAtStart and iconAtEnd props as well. Again, this could either break the UI or be resolved with even more layers of conditionals within the component.
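
To make that “great complexity” concrete, here is a sketch, purely for illustration and not the article’s implementation, of the kind of discriminated union it would take for TypeScript to reject icon and text together:

type IconName = 'cart' | '...all-other-potential-icon-names';

// Sketch: either a text Button (with optional start/end icons) or an
// icon-only Button, but never `icon` and `text` at the same time.
type ButtonProps =
  | {
      theme: 'primary' | 'secondary' | 'tertiary';
      text: string;
      iconAtStart?: IconName;
      iconAtEnd?: IconName;
      iconColor?: 'green' | '...other-theme-color-names';
      icon?: never;
    }
  | {
      theme: 'primary' | 'secondary' | 'tertiary';
      icon: IconName;
      iconColor?: 'green' | '...other-theme-color-names';
      text?: never;
      iconAtStart?: never;
      iconAtEnd?: never;
    };

Notice how much type machinery it takes just to police the content: a symptom of a component that owns what it displays.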

Our beloved Button has become quite unmanageable at this point. Hopefully, the product has reached a point of stability where no new changes or requirements will ever happen again. Ever.

Iteration 6

So much for never having any more changes. 🤦

This next and final iteration of the Button is the proverbial straw that breaks the camel’s back. In the Add to cart button, if the current item is already in the cart, we want to show its quantity on the button. On the surface, this is a straightforward change of dynamically building the text prop string. But the component breaks down because the current item count requires a different font weight and an underline. Because the Button accepts only a plain text string and no other child elements, the component no longer works.

Would this design have broken the Button if this was the second iteration? Maybe not. The component and codebase were both much younger then. But the codebase has grown so much by this point that refactoring for this requirement is a mountain to climb.

This is when one of the following things will likely happen:

  1. The Button undergoes a much larger refactor to move away from a text prop and accept children, or a component or markup, as the text value.
  2. The Button is split into a separate AddToCart button with an even more rigid API specific to this one use case. This also either duplicates any button logic and styles into multiple places or extracts them into a centralized file to share everywhere.
  3. The Button is deprecated, and a ButtonNew component is created, splitting the codebase, introducing technical debt, and increasing the onboarding learning curve.

None of these outcomes is ideal.

So, where did the Button component go wrong?

Sharing Is Impairing

What is the responsibility of an HTML button element exactly? Narrowing down this answer will shine light onto the issues facing the previous Button component.

The responsibilities of the native HTML button element go no further than:

  1. Display, without opinion, whatever content is passed into it.
  2. Handle native functionality and attributes such as onClick and disabled.

Yes, each browser has its own version of how a button element may look and display content, but CSS resets are often used to strip those opinions away. As a result, the button element boils down to little more than a functional container for triggering events.

Formatting the content within the button isn’t the responsibility of the button but of the content itself. The button shouldn’t care. The button should not share the responsibility for its content.

The core issue with considerate component design is that the component props define the content and not the component itself.

In the previous Button component, the first major limitation was the text prop. From the first iteration, a limitation was placed on the content of the Button. While the text prop fit with the designs at that stage, it immediately deviated from the two core responsibilities of the native HTML button. It immediately forced the Button to be aware of and responsible for its content.

In the following iterations, the icon was introduced. While it seemed reasonable to bake a conditional icon into the Button, doing so also deviated from the core button responsibilities and limited the use cases of the component. In later iterations, the icon needed to be available in different positions, and the Button props were forced to expand to style the icon.

When the component is responsible for the content it displays, it needs an API that can accommodate all content variations. Eventually, that API will break down because the content will forever and always change.

Introducing The Me In Team

There’s an adage used in all team sports, “There’s no ‘I’ in a team.” While this mindset is noble, some of the greatest individual athletes have embodied other ideas.

Michael Jordan famously responded with his own perspective, “There’s an ‘I’ in win.” The late Kobe Bryant had a similar idea, “There’s an ‘M-E’ in [team].”

Our original Button component was a team player. It shared the responsibility of its content until it reached the point of deprecation. How could the Button have avoided such constraints by embodying a “M-E in team” attitude?

Me, Myself, And UI

How would a selfish component design approach have changed our original Button?

Keeping the two core responsibilities of the HTML button element in mind, the structure of our Button component would have immediately been different.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  theme: 'primary' | 'secondary' | 'tertiary';
}
<Button
  onClick={someFunction}
  theme="primary"
>
  <span>Add to cart</span>
</Button>

By removing the original text prop in favor of limitless children, the Button is able to align with its core responsibilities. The Button can now act as little more than a container for triggering events.

By moving the Button to its native approach of supporting child content, the various icon-related props are no longer required. An icon can now be rendered anywhere within the Button regardless of size and color. Perhaps the various icon-related props could be extracted into their own selfish Icon component.

<Button
  onClick={someFunction}
  theme="primary"
>
  <Icon name="cart" />
  <span>Add to cart</span>
</Button>
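
As a thought experiment, such an Icon could be just as selfish. Here is a minimal sketch with hypothetical props and a sprite-based approach; none of it is prescribed by the article:

type IconProps = {
  name: 'cart' | 'close' | '...all-other-potential-icon-names';
  size?: 'sm' | 'md' | 'lg';
  color?: 'green' | '...other-theme-color-names';
};

export function Icon({ name, size = 'md', color }: IconProps) {
  // The Icon answers only for itself: which glyph, what size, what color.
  return (
    <svg className={`icon icon--${size}`} fill={color ?? 'currentColor'} aria-hidden="true">
      <use href={`#icon-${name}`} />
    </svg>
  );
}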

With the content-specific props removed from the Button, it can now do what all selfish characters do best, think about itself.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  size: 'sm' | 'md' | 'lg';
  theme: 'primary' | 'secondary' | 'tertiary';
  variant: 'ghost' | 'solid' | 'outline' | 'link';
}

With an API specific to itself and independent content, the Button is now a maintainable component. The self-interested props keep the learning curve minimal and intuitive while retaining great flexibility for various Button use cases.

Button icons can now be placed at either end of the content.

<Button
  onClick={someFunction}
  size="md"
  theme="primary"
  variant="solid"
>
  <Box display="flex" gap="2" alignItems="center">
    <span>Add to cart</span>
    <Icon name="cart" />
  </Box>
</Button>

Or, a Button could have only an icon.

<Button
  onClick={someFunction}
  size="sm"
  theme="secondary"
  variant="solid"
>
  <Icon name="cart" />
</Button>

A product may evolve over time, though, and selfish component design improves a component’s ability to evolve along with it. Let’s go beyond the Button and into the cornerstones of selfish component design.

The Keys to Selfish Design

Much like when creating a fictional character, it’s best to show, not tell, the reader that they’re selfish. By reading about the character’s thoughts and actions, their personality and traits can be understood. Component design can take the same approach.

But how exactly do we show in a component’s design and use that it is selfish?

HTML Drives The Component Design

Many times, components are built as direct abstractions of native HTML elements like a button or img. When this is the case, let the native HTML element drive the design of the component.

Specifically, if the native HTML element accepts children, the abstracted component should as well. Every aspect of a component that deviates from its native element is something that must be learned anew.

When our original Button component deviated from the native behavior of the button element by not supporting child content, it not only became rigid but also required a mental model shift just to use the component.

There has been a lot of time and thought put into the structure and definitions of HTML elements. The wheel doesn’t need to be reinvented every time.

Children Fend For Themselves

If you’ve ever read Lord of the Flies, you know just how dangerous it can be when a group of children is forced to fend for themselves. However, in the case of selfish component design, we’ll be doing exactly that.

As shown in our original Button component, the more it tried to style its content, the more rigid and complicated it became. When we removed that responsibility, the component was able to do a lot more but with a lot less.

Many elements are little more than semantic containers. It’s not often we expect a section element to style its content. A button element is just a very specific type of semantic container. The same approach can apply when abstracting it to a component.

Components Are Singularly Focused

Think of selfish component design as arranging a bunch of terrible first dates. A component’s props are like a conversation entirely focused on the component and its immediate responsibilities:

  • How do I look?
    Props need to feed the ego of the component. In our refactored Button example, we did this with props like size, theme, and variant.
  • What am I doing?
    A component should only be interested in what it, and it alone, is doing. Again, in our refactored Button component, we do this with the onClick prop. As far as the Button is concerned, if there’s another click event somewhere within its content, that’s the content’s problem. The Button does. not. care.
  • When and where am I going next?
    Any jet-setting traveler is quick to talk about their next destination. For components like modals, drawers, and tooltips, when and where they’re going is just as important. Components like these are not always rendered in the DOM. This means that in addition to knowing how they look and what they do, they need to know when and where to be. In other words, this can be described with props like isShown and position, as sketched right after this list.
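
Here is that conversation expressed as a hypothetical Tooltip props type; everything beyond isShown and position is an illustrative assumption:

type TooltipProps = {
  // How do I look?
  theme: 'light' | 'dark';
  // What am I doing?
  onDismiss?: () => void;
  // When and where am I going next?
  isShown: boolean;
  position: 'top' | 'right' | 'bottom' | 'left';
  // The content, as always, fends for itself.
  children: React.ReactNode;
};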

Composition Is King

Some components, such as modals and drawers, can often contain different layout variations. For example, some modals will show a header bar while others do not. Some drawers may have a footer with a call to action. Others may have no footer at all.

Instead of defining each layout in a single Modal or Drawer component with conditional props like hasHeader or showFooter, break the single component into multiple composable child components.

<Modal>
  <Modal.CloseButton />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
</Modal>
<Drawer>
  <Drawer.Main> ... </Drawer.Main>
  <Drawer.Footer> ... </Drawer.Footer>
</Drawer>

By using component composition, each individual component can be as selfish as it wants to be and used only when and where it’s needed. This keeps the root component’s API clean and moves many props to the specific child components they belong to.

Let’s explore this and the other keys to selfish component design a bit more.

You’re So Vain, You Probably Think This Code Is About You

Perhaps the keys of selfish design make sense when looking back at the evolution of our Button component. Nevertheless, let’s apply them again to another commonly problematic component — the modal.

For this example, we have the benefit of foresight in the three different modal layouts. This will help steer the direction of our Modal while applying each key of selfish design along the way.

First, let’s go over our mental model and break down the layouts of each design.

In the Edit Profile modal, there are defined header, main, and footer sections. There’s also a close button. In the Upload Successful modal, there’s a modified header with no close button and a hero-like image. The buttons in the footer are also stretched. Lastly, in the Friends modal, the close button returns, but now the content area is scrollable, and there’s no footer.

So, what did we learn?

We learned that the header, main, and footer sections are interchangeable. They may or may not exist in any given view. We also learned that the close button functions independently and is not tied to any specific layout or section.

Because our Modal can be composed of interchangeable layouts and arrangements, that’s our sign to take a composable child component approach. This will allow us to plug and play pieces into the Modal as needed.

This approach allows us to very narrowly define the responsibilities of our root Modal component.

Conditionally render with any combination of content layouts.

That’s it. So long as our Modal is just a conditionally-rendered container, it will never need to care about or be responsible for its content.

With the core responsibility of our Modal defined, and the composable child component approach decided, let’s break down each composable piece and its role.

  • <Modal>
    This is the entry point of the entire Modal component. This container is responsible for when and where to render, how the modal looks, and what it does, like handling accessibility considerations.
  • <Modal.CloseButton />
    An interchangeable Modal child component that can be included only when needed. This component will work similarly to our refactored Button component. It will be responsible for how it looks, where it’s shown, and what it does.
  • <Modal.Header>
    The header section will be an abstraction of the native HTML header element. It will be little more than a semantic container for any content, like headings or images, to be shown.
  • <Modal.Main>
    The main section will be an abstraction of the native HTML main element. It will be little more than a semantic container for any content.
  • <Modal.Footer>
    The footer section will be an abstraction of the native HTML footer element. It will be little more than a semantic container for any content.

With each component and its role defined, we can start creating props to support those roles and responsibilities.

Modal

Earlier, we defined the barebones responsibility of the Modal: knowing when to conditionally render. This can be achieved using a prop like isShown. Whenever it’s true, the Modal and its content will render.

type ModalProps = {
  isShown: boolean;
}
<Modal isShown={showModal}>
  ...
</Modal>

Any styling and positioning can be done with CSS in the Modal component directly. There’s no need to create specific props at this time.
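
For a concrete picture, a minimal sketch of the root Modal might look like the following; the portal target, class names, and accessibility attributes are assumptions for illustration:

import type { ReactNode } from 'react';
import { createPortal } from 'react-dom';

type ModalProps = {
  isShown: boolean;
  children: ReactNode;
};

export function Modal({ isShown, children }: ModalProps) {
  // The Modal decides only when and where it renders, never what it contains.
  if (!isShown) {
    return null;
  }

  return createPortal(
    <div className="modal-backdrop">
      <div className="modal" role="dialog" aria-modal="true">
        {children}
      </div>
    </div>,
    document.body
  );
}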

Modal.CloseButton

Given our previously refactored Button component, we know how the CloseButton should work. Heck, we can even use our Button to build our CloseButton component.

import { Button, ButtonProps } from 'components/Button';
// Assuming the selfish Icon component discussed earlier provides the visible glyph.
import { Icon } from 'components/Icon';

export function CloseButton({ onClick, ...props }: ButtonProps) {
  return (
    <Button {...props} onClick={onClick} variant="ghost" theme="primary">
      <Icon name="close" />
    </Button>
  )
}
<Modal>
  <Modal.CloseButton onClick={closeModal} />
</Modal>

Modal.Header, Modal.Main, Modal.Footer

Each of the individual layout sections, Modal.Header, Modal.Main, and Modal.Footer, can take direction from their HTML equivalents, header, main, and footer. Each of these elements supports any variation of child content, and therefore, our components will do the same.

There are no special props needed. They serve only as semantic containers.
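
A sketch of one of these sections shows just how thin they can be; the class name and the way it would be attached to Modal are assumptions:

import type { ReactNode } from 'react';

// Little more than the semantic header element it abstracts; it would be
// attached to the root component (e.g., as Modal.Header) so usage reads
// as a compound component.
export function ModalHeader({ children }: { children: ReactNode }) {
  return <header className="modal-header">{children}</header>;
}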

<Modal>
  <Modal.CloseButton onClick={closeModal} />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
  <Modal.Footer> ... </Modal.Footer>
</Modal>

With our Modal component and its child components defined, let’s see how they can be used interchangeably to create each of the three designs.

Note: The full markup and styles are not shown so as not to take away from the core takeaways.

Edit Profile Modal

In the Edit Profile modal, we use each of the Modal components. However, each is used only as a container that styles and positions itself. This is why we haven’t included a className prop for them. Any content styling should be handled by the content itself, not our container components.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>Edit Profile</h1>
  </Modal.Header>

  <Modal.Main>
    <div className="modal-avatar-selection-wrapper"> ... </div>
    <form className="modal-profile-form"> ... </form>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Cancel</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>

Upload Successful Modal

Like in the previous example, the Upload Successful modal uses its components as opinionless containers. The styling for the content is handled by the content itself. Perhaps this means the buttons could be stretched by the modal-button-wrapper class, or we could add a “how do I look?” prop, like isFullWidth, to the Button component for a wider or full-width size.

<Modal>
  <Modal.Header>
    <img src="..." alt="..." />
    <h1>Upload Successful</h1>
  </Modal.Header>

  <Modal.Main>
    <p> ... </p>
    <div className="modal-copy-upload-link-wrapper"> ... </div>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Skip</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>

Friends Modal

Lastly, our Friends modal does away with the Modal.Footer section. Here, it may be enticing to define the overflow styles on Modal.Main, but that would extend the container’s responsibilities to its content. Instead, those styles are better handled in the modal-friends-wrapper class.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>AngusMcSix's Friends</h1>
  </Modal.Header>

  <Modal.Main>
    <div className="modal-friends-wrapper">
      <div className="modal-friends-friend-wrapper"> ... </div>
      <div className="modal-friends-friend-wrapper"> ... </div>
      <div className="modal-friends-friend-wrapper"> ... </div>
    </div>
  </Modal.Main>
</Modal>

With a selfishly designed Modal component, we can accommodate evolving and changing designs with flexible and tightly scoped components.

Next Modal Evolutions

Given all that we’ve covered, let’s throw around some hypotheticals regarding our Modal and how it may evolve. How would you approach these design variations?

A design requires a fullscreen modal. How would you adjust the Modal to accommodate a fullscreen variation?

Another design is for a 2-step registration process. How could the Modal accommodate this type of design and functionality?

Recap

Components are the workhorses of modern web development. Greater importance continues to be placed on component libraries, either standalone or as part of a design system. With how fast the web moves, having components that are accessible, stable, and resilient is absolutely critical.

Unfortunately, components are often built to do too much. They are built to inherit the responsibilities and concerns of their content and surroundings. So many patterns that apply this level of consideration break down further each iteration until a component no longer works. At this point, the codebase splits, more technical debt is introduced, and inconsistencies creep into the UI.

If we break a component down to its core responsibilities and build an API of props that only define those responsibilities, without consideration of content inside or around the component, we build components that can be resilient to change. This selfish approach to component design ensures a component is only responsible for itself and not its content. Treating components as little more than semantic containers means content can change or even move between containers without effect. The less considerate a component is about its content and its surroundings, the better for everybody — better for the content that will forever change, better for the consistency of the design and UI, which in turn is better for the people consuming that changing content, and lastly, better for the developers using the components.

The key to good component design is selfishness. Being a considerate team player is the responsibility of the developer.

Figma Auto Layout Masterclass: An Online Workshop To Support Ukraine 🇺🇦

Auto layout is driving you bananas? You’re scared about what will happen with your design in the browser? Then our deep-dive Figma Auto Layout Masterclass with Christine Vallaure is for you. Monday, March 27, 09:00AM – 12:00PM PT / 18:00 – 21:00 CET. (Check your time zone ⏰) All proceeds from the workshop will be donated to humanitarian aid in Ukraine. Donate and join!

In the online workshop, you will learn everything about how to set up responsive designs with Figma. We’ll dive deep into constraints, auto layout, and, most importantly but rarely discussed, breakpoints for your UI design. Combining those tools will allow you to really test and document your designs and components in line with the actual code settings.

Our Figma Auto Layout Masterclass equips you with everything you need to know to master responsive designs in Figma.

Want to join us? Donate 35 USD or more to get your ticket — all proceeds from this workshop will be donated to humanitarian aid in Ukraine.

In This Workshop, We’ll Explore:

Constraints

  • What they are;
  • How to apply them correctly;
  • How they help you when working with grids;
  • How to combine them with auto layout.

Auto Layout

  • What it is;
  • How and where to apply it;
  • Understanding the auto layout menu;
  • Spacing and stacking;
  • Building a responsive card and learning about the power of resizing;
  • Playing with the mighty power of nested auto layout frames;
  • Absolute positioning;
  • Creating more complex card setups;
  • Setting up an entire page in auto layout;
  • Learning about different stacking options;
  • Fixed aspect ratio with images.

How To Deal With Breakpoints In Figma

  • What they are;
  • How components and pages adapt;
  • How breakpoints and media queries work in CSS;
  • Which breakpoint values to use in your design;
  • How to set up breakpoints in Figma;
  • How to test pages and components with breakpoints;
  • Documenting the findings;
  • Responsive typography.

Who Is This Workshop For?

The masterclass is suitable for you if you have basic knowledge of Figma or are an advanced Figma user and want to brush up on your skills.

You might also like this workshop if you’re switching to Figma from other software like Sketch or XD. And, of course, a special welcome to developers who want to improve the collaboration between design and code and better understand the responsive setup in Figma.

About Christine Vallaure

Christine is a UX/UI Designer and founder of moonlearning.io. She has worked internationally, in-house, and remotely on projects for leading brands, agencies, and startups. Christine cares deeply about creating well-thought-through and aesthetic products and firmly believes that designers should understand code and that UX/UI is a match made in heaven.

Schedule: Monday, March 27, 2023

8:45 AM ET (Check your time zone ⏰)

Virtual doors open, registration, chat, and introductions.

9:00 AM – 11:30 AM

Deep dive into Figma responsive design, deep dive into constraints, auto layout, and, most importantly but rarely discussed, breakpoints for your UI design.

11:30 AM – 12:30 PM

Q&A with Christine on the day’s material. Networking!

You can always re-watch the session at a more convenient time and follow the webinar at your own pace.

What Hardware/Software Do You Need?

To participate, please install the Zoom client for Meetings, which is available for all the main OSs. It may take a little time to download and install, so please grab it ahead of time if you can.

See You There?

Donate 35 USD or more to join — we’ll donate 100% of the proceeds to humanitarian aid in Ukraine. We are already looking forward to diving deeper into the little secrets of auto layout together with you. Thank you for your kind support. 💙💛

A Guide To Command-Line Data Manipulation

Allow me to preface this article by saying that I’m not a terminal person. I don’t use Vim. I find sed, grep, and awk convoluted and counter-intuitive. I prefer seeing my files in a nice UI. Despite all that, I got into the habit of reaching for command-line interfaces (CLIs) when I had small, dedicated tasks to complete. Why? I’ll explain all of that below. In this article, you’ll also learn how to use a CLI tool named Miller to manipulate data from CSV, TSV, and/or JSON files.

Why Use The Command Line?

Everything that I’m showing here can be done with regular code. You can load the file, parse the CSV data, and then transform it using regular JavaScript, Python, or any other language. But there are a few reasons why I reach for command-line interfaces (CLIs) whenever I need to transform data:

  • Easier to read.
    It is faster (for me) to write a script in JavaScript or Python for my usual data processing. But a script can be confusing to come back to. In my experience, command-line manipulations are harder to write initially but easier to read afterward.
  • Easier to reproduce.
    Thanks to package managers like Homebrew, CLIs are much easier to install than they used to be. No need to figure out the correct version of Node.js or Python; the package manager takes care of that for you.
  • Ages well.
    Compared to modern programming languages, CLIs are old. They change a lot more slowly than languages and frameworks.

What Is Miller?

The main reason I love Miller is that it’s a standalone tool. There are many great tools for data manipulation, but every other tool I found was part of a specific ecosystem. The tools written in Python required knowing how to use pip and virtual environments; for those written in Rust, it was cargo, and so on.

On top of that, it’s fast. The data files are streamed, not held in memory, which means that you can perform operations on large files without freezing your computer.

As a bonus, Miller is actively maintained; John Kerl really keeps on top of PRs and issues. As a developer, I always get a satisfying feeling when I see a neat and maintained open-source project with great documentation.

Installation
  • Linux: apt-get install miller or Homebrew.
  • macOS: brew install miller using Homebrew.
  • Windows: choco install miller using Chocolatey.

That’s it, and you should now have the mlr command available in your terminal.

Run mlr help topics to see if it worked. This will give you instructions to navigate the built-in documentation. You shouldn’t need it, though; that’s what this tutorial is for!

How mlr Works

Miller commands work the following way:

mlr [input/output file formats] [verbs] [file]

Example: mlr --csv filter '$color != "red"' example.csv

Let’s deconstruct:

  • --csv specifies the input file format. It’s a CSV file.
  • filter is what we’re doing on the file, called a “verb” in the documentation. In this case, we’re keeping every row whose color field is not set to "red". There are many other verbs, like sort and cut, that we’ll explore later.
  • example.csv is the file that we’re manipulating.

Operations Overview

We can use these verbs to run specific operations on our data. There’s a lot we can do. Let’s explore.

Data

I’ll be using a dataset of IMDb ratings for American TV dramas created by The Economist. You can download it here or find it in the repo for this article.

Note: For the sake of brevity, I’ve renamed the file from IMDb_Economist_tv_ratings.csv to tv_ratings.csv.

Above, I mentioned that every command contains a specific operation or verb. Let’s learn our first one, called head. What it does is show you the beginning of the file (the “head”) rather than print the entire file in the console.

You can run the following command:

mlr --csv head ./tv_ratings.csv

And this is the output you’ll see:

titleId,seasonNumber,title,date,av_rating,share,genres
tt2879552,1,11.22.63,2016-03-10,8.489,0.51,"Drama,Mystery,Sci-Fi"
tt3148266,1,12 Monkeys,2015-02-27,8.3407,0.46,"Adventure,Drama,Mystery"
tt3148266,2,12 Monkeys,2016-05-30,8.8196,0.25,"Adventure,Drama,Mystery"
tt3148266,3,12 Monkeys,2017-05-19,9.0369,0.19,"Adventure,Drama,Mystery"
tt3148266,4,12 Monkeys,2018-06-26,9.1363,0.38,"Adventure,Drama,Mystery"
tt1837492,1,13 Reasons Why,2017-03-31,8.437,2.38,"Drama,Mystery"
tt1837492,2,13 Reasons Why,2018-05-18,7.5089,2.19,"Drama,Mystery"
tt0285331,1,24,2002-02-16,8.5641,6.67,"Action,Crime,Drama"
tt0285331,2,24,2003-02-09,8.7028,7.13,"Action,Crime,Drama"
tt0285331,3,24,2004-02-09,8.7173,5.88,"Action,Crime,Drama"

This is a bit hard to read, so let’s make it easier on the eyes by adding --opprint.

mlr --csv --opprint head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title            date          av_rating   share   genres
tt2879552      1       11.22.63         2016-03-10    8.489       0.51    Drama,Mystery,Sci-Fi
tt3148266      1       12 Monkeys       2015-02-27    8.3407      0.46    Adventure,Drama,Mystery
tt3148266      2       12 Monkeys       2016-05-30    8.8196      0.25    Adventure,Drama,Mystery
tt3148266      3       12 Monkeys       2017-05-19    9.0369      0.19    Adventure,Drama,Mystery
tt3148266      4       12 Monkeys       2018-06-26    9.1363      0.38    Adventure,Drama,Mystery
tt1837492      1       13 Reasons Why   2017-03-31    8.437       2.38    Drama,Mystery
tt1837492      2       13 Reasons Why   2018-05-18    7.5089      2.19    Drama,Mystery
tt0285331      1       24               2002-02-16    8.5641      6.67    Action,Crime,Drama
tt0285331      2       24               2003-02-09    8.7028      7.13    Action,Crime,Drama
tt0285331      3       24               2004-02-09    8.7173      5.88    Action,Crime,Drama

Much better, isn’t it?

Note: Rather than typing --csv --opprint every time, we can use the --c2p option, which is a shortcut.

Chaining

That’s where the fun begins. Rather than run multiple commands, we can chain the verbs together by using the then keyword.

Remove Columns

You can see that there’s a titleId column that isn’t very useful. Let’s get rid of it using the cut verb.

mlr --c2p cut -x -f titleId then head ./tv_ratings.csv

It gives you the following output:

seasonNumber  title            date         av_rating   share    genres
     1      11.22.63          2016-03-10    8.489       0.51     Drama,Mystery,Sci-Fi
     1      12 Monkeys        2015-02-27    8.3407      0.46     Adventure,Drama,Mystery
     2      12 Monkeys        2016-05-30    8.8196      0.25     Adventure,Drama,Mystery
     3      12 Monkeys        2017-05-19    9.0369      0.19     Adventure,Drama,Mystery
     4      12 Monkeys        2018-06-26    9.1363      0.38     Adventure,Drama,Mystery
     1      13 Reasons Why    2017-03-31    8.437       2.38     Drama,Mystery
     2      13 Reasons Why    2018-05-18    7.5089      2.19     Drama,Mystery
     1      24                2002-02-16    8.5641      6.67     Action,Crime,Drama
     2      24                2003-02-09    8.7028      7.13     Action,Crime,Drama
     3      24                2004-02-09    8.7173      5.88     Action,Crime,Drama

Fun Fact

This is how I first learned about Miller! I was playing with a CSV dataset for https://details.town/ that had a useless column, and I looked up “how to remove a column from CSV command line.” I discovered Miller, loved it, and then pitched an article to Smashing Magazine. Now here we are!

Filter

This is the verb I showed earlier. We can remove all the rows that don’t match a specific expression, letting us clean our data with only a few characters.

If we want only the ratings of the first season of every series in the dataset, this is how to do it:

mlr --c2p filter '$seasonNumber == 1' then head ./tv_ratings.csv

Sorting

We can sort our data based on a specific column, just as we would in a UI like Excel or macOS Numbers. Here’s how to sort the data by the highest-rated series:

mlr --c2p sort -nr av_rating then head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title                         date         av_rating  share   genres
tt0098887      1       Parenthood                    1990-11-13   9.6824     1.68    Comedy,Drama
tt0106028      6       Homicide: Life on the Street  1997-12-05   9.6        0.13    Crime,Drama,Mystery
tt0108968      5       Touched by an Angel           1998-11-15   9.6        0.08    Drama,Family,Fantasy
tt0903747      5       Breaking Bad                  2013-02-20   9.554      18.95   Crime,Drama,Thriller
tt0944947      6       Game of Thrones               2016-05-25   9.4943     15.18   Action,Adventure,Drama
tt3398228      5       BoJack Horseman               2018-09-14   9.4738     0.45    Animation,Comedy,Drama
tt0103352      3       Are You Afraid of the Dark?   1994-02-23   9.4349     2.6     Drama,Family,Fantasy
tt0944947      4       Game of Thrones               2014-05-09   9.4282     11.07   Action,Adventure,Drama
tt0976014      4       Greek                         2011-03-07   9.4        0.01    Comedy,Drama
tt0090466      4       L.A. Law                      1990-04-05   9.4        0.1     Drama

We can see that Parenthood, from 1990, has the highest rating on IMDb — who knew!

Saving Our Operations

By default, Miller only prints your processed data to the console. If we want to save it to another CSV file, we can use the shell’s > redirection operator.

If we wanted to save our sorted data to a new CSV file, this is what the command would look like:

mlr --csv sort -nr av_rating ./tv_ratings.csv > sorted.csv

Convert CSV To JSON

Most of the time, you don’t use CSV data directly in your application. You convert it to a format that is easier to read or doesn’t require additional dependencies, like JSON.

Miller gives you the --c2j option to convert your data from CSV to JSON. Here’s how to do this for our sorted data:

mlr --c2j sort -nr av_rating ./tv_ratings.csv > sorted.json
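
Like --c2p earlier, --c2j is a shortcut; assuming Miller’s long-form flags, the equivalent command spells out the input and output formats separately:

mlr --icsv --ojson sort -nr av_rating ./tv_ratings.csv > sorted.json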

Case Study: Top 5 Athletes With The Highest Number Of Medals In Rio 2016

Let’s apply everything we learned above to a real-world use case. Let’s say that you have a detailed dataset of every athlete who participated in the 2016 Olympic Games in Rio, and you want to know which five athletes won the highest number of medals.

First, download the athlete data as a CSV, then save it in a file named athletes.csv.

Let’s take a look at the beginning of the file:

mlr --c2p head ./athletes.csv

The resulting output will be something like the following:

id        name                nationality sex    date_of_birth height weight sport      gold silver bronze info
736041664 A Jesus Garcia      ESP         male   1969-10-17    1.72    64     athletics    0    0      0      -
532037425 A Lam Shin          KOR         female 1986-09-23    1.68    56     fencing      0    0      0      -
435962603 Aaron Brown         CAN         male   1992-05-27    1.98    79     athletics    0    0      1      -
521041435 Aaron Cook          MDA         male   1991-01-02    1.83    80     taekwondo    0    0      0      -
33922579  Aaron Gate          NZL         male   1990-11-26    1.81    71     cycling      0    0      0      -
173071782 Aaron Royle         AUS         male   1990-01-26    1.80    67     triathlon    0    0      0      -
266237702 Aaron Russell       USA         male   1993-06-04    2.05    98     volleyball   0    0      1      -
382571888 Aaron Younger       AUS         male   1991-09-25    1.93    100    aquatics     0    0      0      -
87689776  Aauri Lorena Bokesa ESP         female 1988-12-14    1.80    62     athletics    0    0      0      -

Optional: Clean Up The File

The CSV file has a few fields we don’t need. Let’s clean it up by removing the info, id, weight, and date_of_birth columns.

mlr --csv -I cut -x -f id,info,weight,date_of_birth athletes.csv

Now we can move to our original problem: we want to find who won the highest number of medals. We have how many of each medal (bronze, silver, and gold) the athletes won, but not the total number of medals per athlete.

Let’s compute a new value called medals which corresponds to this total number (bronze, silver, and gold added together).

mlr --c2p put '$medals=$bronze+$silver+$gold' then head ./athletes.csv

It gives you the following output:

name                 nationality   sex      height  sport        gold silver bronze medals
A Jesus Garcia       ESP           male     1.72    athletics      0    0      0      0
A Lam Shin           KOR           female   1.68    fencing        0    0      0      0
Aaron Brown          CAN           male     1.98    athletics      0    0      1      1
Aaron Cook           MDA           male     1.83    taekwondo      0    0      0      0
Aaron Gate           NZL           male     1.81    cycling        0    0      0      0
Aaron Royle          AUS           male     1.80    triathlon      0    0      0      0
Aaron Russell        USA           male     2.05    volleyball     0    0      1      1
Aaron Younger        AUS           male     1.93    aquatics       0    0      0      0
Aauri Lorena Bokesa  ESP           female   1.80    athletics      0    0      0      0
Ababel Yeshaneh      ETH           female   1.65    athletics      0    0      0      0

Next, sort by the highest number of medals by adding a sort verb.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head ./athletes.csv

The resulting output will be the following:

name              nationality  sex     height  sport       gold silver bronze medals
Michael Phelps    USA          male    1.94    aquatics      5    1      0      6
Katie Ledecky     USA          female  1.83    aquatics      4    1      0      5
Simone Biles      USA          female  1.45    gymnastics    4    0      1      5
Emma McKeon       AUS          female  1.80    aquatics      1    2      1      4
Katinka Hosszu    HUN          female  1.75    aquatics      3    1      0      4
Madeline Dirado   USA          female  1.76    aquatics      2    1      1      4
Nathan Adrian     USA          male    1.99    aquatics      2    0      2      4
Penny Oleksiak    CAN          female  1.86    aquatics      1    1      2      4
Simone Manuel     USA          female  1.78    aquatics      2    2      0      4
Alexandra Raisman USA          female  1.58    gymnastics    1    2      0      3

Restrict the list to the top five by adding -n 5 to the head verb.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv

You will end up with the following file:

name             nationality  sex      height  sport        gold silver bronze medals
Michael Phelps   USA          male     1.94    aquatics       5     1      0      6
Katie Ledecky    USA          female   1.83    aquatics       4     1      0      5
Simone Biles     USA          female   1.45    gymnastics     4     0      1      5
Emma McKeon      AUS          female   1.80    aquatics       1     2      1      4
Katinka Hosszu   HUN          female   1.75    aquatics       3     1      0      4

As a final step, let’s convert this into a JSON file with the --c2j option.

Here is our final command:

mlr --c2j put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv > top5.json

With a single command, we've computed new data, sorted the result, truncated it, and converted it to JSON.

[
  {
    "name": "Michael Phelps",
    "nationality": "USA",
    "sex": "male",
    "height": 1.94,
    "weight": 90,
    "sport": "aquatics",
    "gold": 5,
    "silver": 1,
    "bronze": 0,
    "medals": 6
  }
  // Other entries omitted for brevity.
]

Bonus: If you wanted to show the top 5 women, you could add a filter.

mlr --c2p put '$medals=$bronze+$silver+$gold' then sort -nr medals then filter '$sex == "female"' then head -n 5 ./athletes.csv

You would end up with the following output:

name              nationality   sex       height   sport        gold silver bronze medals
Katie Ledecky     USA           female    1.83     aquatics       4    1      0      5
Simone Biles      USA           female    1.45     gymnastics     4    0      1      5
Emma McKeon       AUS           female    1.80     aquatics       1    2      1      4
Katinka Hosszu    HUN           female    1.75     aquatics       3    1      0      4
Madeline Dirado   USA           female    1.76     aquatics       2    1      1      4

Conclusion

I hope this article showed you how versatile Miller is and gave you a taste of the power of command-line tools. Feel free to scour the internet for the best CLI next time you find yourself writing yet another random script.

The Future Of Design: Human-Powered Or AI-Driven?

This article is sponsored by STUDIO

For years, reports have been warning of technology taking away jobs, particularly in fields like food preparation, truck driving, and warehouse operations. These jobs are often considered “blue-collar” and involve repetitive manual labor. However, many in the creative community believed their careers were immune to automation. After all, a designer’s craft is anything but monotonous. While computers can crunch numbers quickly, how could they possibly design?

Then something surprising happened: Artificial intelligence (AI) made inroads into design. In product design, Mattel is using AI technology in its design process. In interior design, designers are creating mockups with AI that can detect floors, walls, and furniture and change them. In graphic design, Nestlé used an AI-retouched Vermeer painting in marketing to sell one of its yogurt brands. And the advertising agency BBDO is experimenting with producing materials with Stable Diffusion.

But what about fields with a distinctly defined medium, like web design? This article will provide a brief history of AI in web design, explore its current implications for creativity, and offer suggestions for how web designers can stay ahead of the curve.

The Road Leading Here

The AI capabilities outlined above are the result of development dating back fifty years, which has rapidly accelerated in recent years thanks to more advanced computational models, the additional training data that goes into improving those models, and the improved computing power to run them.

In 1950, Alan Turing, known as the Father of modern computer science, asked the famous question: Can machines think? Research began by attempting to teach machines human knowledge with declarative rules, which eventually proved to be difficult given the many implicit rules in our daily lives.

In the 90s, this knowledge-feeding approach transitioned to a data-driven approach. Scientists began creating programs for computers to learn from large amounts of data with neural network architectures, much as a human brain functions. This shift accelerated progress, producing breakthroughs including IBM’s Deep Blue beating the world chess champion in 1997 and Google Brain’s deep neural network learning to discover and categorize objects.

Recently, advancements in neural network sophistication, data availability, and computing power have further accelerated machines’ capabilities. In 2014, Ian Goodfellow created the first generative adversarial network (GAN), which allowed machines to generate new data with the same statistics as the original data set. This discovery set the stage for AI models like DALL·E 2, Stable Diffusion, and Midjourney in 2022, which demonstrate the original-creation abilities outlined at the beginning of the article.

Next, we will explore the implications of these technologies for web designers.

Today’s Implications

Today, designers and clients typically go through six stages together before arriving at a new website. The term “client” is used loosely and can refer to inter-departmental teams working on in-house websites or the individual responsible for building a website on their own.

  • Forming
    The designer works with the client to assess the context for a website design.
  • Defining
    The designer extracts the complete set of requirements and drafts a project plan to meet expectations.
  • Ideating
    The designer generates tailored ideas meeting the requirements.
  • Socializing
    The designer presents the ideas to the client and supports in choosing one to proceed.
  • Implementing
    The designer creates high-fidelity designs, which are then turned into code for deployment.

In order to better understand the impact of AI, we will break down the six stages of the web design process and examine the specific activities involved. Using the latest academic research and deployment examples, we will assess AI’s theoretical capabilities to perform activities in each stage. Our team will also create a webpage with AI technologies that everyone has access to today and compare it with the manual process for a practical perspective.

Forming

Forming calls for the designer to inquire about the unique instance, explore ambiguous perspectives, and ignite stakeholder enthusiasm.

  • Inquires about the unique instance: Undemonstrated capacity
    When taking on a new client, it’s crucial to evaluate their unique context and determine whether web design is the right solution to meet their business goals. However, current AI models often struggle to analyze subjects that aren’t included in their training data sets. Since it is impossible to pre-collect comprehensive data on every business, current AI models lack the ability to be inquisitive about each unique instance.
  • Explores ambiguous perspectives: Undemonstrated capacity
    At the beginning of the engagement, it is essential to consider multiple perspectives and use that information to guide exploration. For example, a designer might learn about the emotional roots of a client’s brand and use that knowledge to inform the website redesign. While AI models from institutions like MIT and Microsoft have shown early promise in recognizing abstract concepts and understanding emotions, they still lack the ability to fully adopt human perspectives. As a recent article from Harvard Business Review pointed out, empathy is still a key missing ingredient in today’s AI models.
  • Ignites stakeholder enthusiasm: Undemonstrated capacity
    In order to set up a project for success, both the client and designer must be enthusiastic and committed to seeing it through to completion. While AI has shown potential in creating copy that resonates with consumers and motivates them to make a purchase, it remains unproven when it comes to sparking motivation for long-term business engagements that require sustained effort and input.

The AI Experiment

Designers
In preparation for a product launch, our designers evaluated the different launch approaches and decided to build a landing page. They intuitively decided to focus on nostalgic emotions because of the emotional connection many designers have with their tools. The team worked closely with product managers to get them excited.

AI
For the purpose of this article, the design team also attempted to use AI for the same tasks. General conversational models like ChatGPT were unable to assess whether a website was the right solution for us and only offered generic advice. When it came to generating early directions, the models mostly produced results that skewed towards functional differentiation, failing to consider the empathy and emotions that could make designers and stakeholders enthusiastic.

Defining

Defining calls for the designer to collect detailed requirements, set expectations, and draft a project plan.

  • Collects requirements: Theoretical capacity
    To ensure that all detailed requirements are collected, clients should be encouraged to verbalize their needs in terms of technical specifications, page count, and launch dates. AI models are now capable of performing these requirement-collection tasks. Thanks to the examples of human exchanges fed into them, Natural Language Processing (NLP) and Natural Language Understanding (NLU) have enabled AI models to parse, understand, and respond to inputs. One of the latest models, OpenAI’s ChatGPT, can ask for additional context, answer follow-up questions, and reject inappropriate requests. AI models are already being deployed for customer service and have shown positive results in terms of trust and satisfaction. (A sketch of what delegating this task to a chat model could look like follows this list.)
  • Aligns expectations: Theoretical capacity
    The client and designer should align on criteria such as acceptance standards and future communication schedules. To help facilitate this alignment, AI models are now capable of handling negotiations autonomously. In academia, research from Meta (formerly Facebook) shows how AI models can use simulation and prediction to complete negotiations on their own. In the business world, companies like Pactum are helping global retailers secure the best possible terms in B2B purchases with their proprietary AI models.
  • Drafts project plan: Theoretical capacity
    To ensure that a project stays on track, it’s important for the designer to establish milestones and deadlines. AI models are now capable of estimating task durations and sequencing activities in a project. In 2017, researchers demonstrated the use of a machine learning algorithm called Support Vector Machine for accurate forecasting of project timelines. Further research has also established the use of Artificial Neural Networks for defining task relationships and creating work breakdown structure (WBS) charts.
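
To make the requirement-collection capacity more concrete, below is a minimal sketch of delegating the interview to a chat model, as referenced in the first item above. It assumes access to OpenAI’s chat completions API with an API key in an OPENAI_API_KEY environment variable; the model name, system prompt, and askInterviewer helper are illustrative assumptions, not a prescribed setup.

// Minimal sketch: have a chat model role-play the design team and
// interview the client for requirements. Assumes Node 18+ run as an
// ES module, with an API key in the OPENAI_API_KEY environment variable.
const API_URL = "https://api.openai.com/v1/chat/completions";

async function askInterviewer(messages) {
  let response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // illustrative; any chat-capable model works
      messages
    })
  });
  let data = await response.json();
  return data.choices[0].message.content;
}

// Seed the conversation with the interviewer's role, then alternate
// between the model's questions and the client's answers.
let messages = [{
  role: "system",
  content: "You are a web design team collecting requirements for a " +
    "product launch landing page. Ask one clarifying question at a time " +
    "about scope, page count, deadlines, and technical constraints."
}];
let question = await askInterviewer(messages);
console.log(question);
// To continue the interview, append the exchange and call again:
// messages.push({ role: "assistant", content: question },
//   { role: "user", content: "We need to launch on March 1st." });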

The AI Experiment

Designers
Designers collected requirements from the product team using a tried-and-true questionnaire. The landing page needed to match the product launch date, so the teams chatted about the scope. After some frustrating back-and-forth in which each team accused the other of not having a clue, they finally came to a mutual agreement on a project plan.

AI
The designers tried the same with ChatGPT, having the AI role-play as the design team to collect requirements from the product team. It performed admirably, even inspiring the team to add new items to their questionnaire. The designers then asked it to create a project plan while feeding it the same concerns raised by the product team. Though they did not expect to use the resulting schedule as-is, since factors like the team’s current workload were not considered, they still thought it performed reasonably well.

Ideating

Ideating calls for the designer to develop ideas relevant to the previously defined criteria, ensure they are novel enough to contribute to the client’s differentiation, and ensure they are valuable enough to support the client’s business outcomes.

  • Develops relevant ideas: Theoretical capacity
    Ideas generated should align with the consensus from earlier stages. Today’s AI models, like OpenAI’s DALL·E 2, can generate output that aligns with prompt criteria by learning the relationship between prompts and outputs through training data. This allows the AI to produce design ideas, including for UI design, that reflect the prompt criteria (a sketch of such a prompt-driven request follows this list).
  • Ensures novelty: Theoretical capacity
    Ideas generated should offer fresh impressions, not mere copies of existing executions. Today’s AI models can generate novel output using diffusion techniques: by scrambling and reassembling learned data in new ways, AI can create new data that resembles what it learned. This allows the AI to combine aspects of what it has learned to generate new ideas, similar to how humans combine known concepts to create new ones. Imagen Video by Google, Make-a-Video by Meta, Midjourney, and Stable Diffusion are all examples of AI models that can generate completely new output.
  • Ensures value-add: Theoretical capacity
    Ideas generated should offer value-add to the client. AI models can compete with or surpass humans in this area thanks to their ability to learn from large amounts of data and their unmatched computational power for identifying patterns. This makes AI a strong candidate for inspiring, deriving, and supercharging ideas, providing value that may be difficult for humans to achieve on their own.
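
As a concrete illustration of the prompt-driven generation referenced in the first item above, here is a minimal sketch requesting visual candidates from OpenAI’s image generation endpoint, the API behind DALL·E 2. The prompt text and parameters are illustrative assumptions.

// Minimal sketch: request a handful of visual idea candidates from the
// image generation API. Same runtime and API key assumptions as before.
async function generateIdeas(prompt, count = 4) {
  let response = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      prompt,
      n: count,         // number of variations to explore
      size: "1024x1024" // one of the supported square sizes
    })
  });
  let data = await response.json();
  return data.data.map(image => image.url); // hosted URLs expire eventually
}

let urls = await generateIdeas(
  "Nostalgic illustration of a designer's desk as a time machine, " +
  "warm retro color palette, hero section for a product launch page"
);
console.log(urls);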

The AI Experiment

Designers
Designers brainstormed a couple of ideas for the hero section based on the “nostalgia” direction discussed in earlier stages.

AI
Next, the designers put ChatGPT to the test to generate design ideas. They were positively surprised by the “wizard” and “time machine” approaches it suggested. They then turned to DALL·E 2 to generate visuals. Obviously, some additional work in UI design tools is still necessary before the ideas can be presented. See the samples generated below.

Socializing

Socializing calls for the designer to form a recommendation, convey the recommendation, and respond to feedback.

  • Forms a recommendation: Theoretical capacity
    A designer should develop a point of view on the ideas presented. AI models have attained the ability to sort options based on scoring. By using datasets that track design and attention, AI models can be trained to evaluate and rank design options according to their potential for improving conversions. However, the ability of AI models to evaluate more subjective, emotionally charged objectives has yet to be proven.
  • Conveys recommendation: Theoretical capacity
    A designer should provide a persuasive narrative to aid the client’s decision. AI models have proven capable of creating persuasive narratives that aid decision-making, much like a human. For example, IBM Research’s Project Debater is able to generate relevant arguments in support of positions it holds. However, the ability of AI models to strike a balance between being assertive and being overbearing in practical use cases remains an area of study.
  • Updates based on received feedback: Theoretical capacity
    A designer should take the client’s feedback as a source of input for course correction. AI models like DALL·E 2 and ChatGPT are able to adapt and improve their output based on feedback. By updating the input prompts with feedback, these models are able to generate more accurate, better-aligned outputs; a sketch of such a feedback-driven prompt update follows this list. In cases where the feedback includes new or unrecognized concepts, textual inversion techniques can be used to help the model learn and incorporate these concepts into its output.
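
One way to picture the feedback loop mentioned above in code: keep the original prompt, fold each round of client feedback into it, and regenerate. The sketch below reuses the hypothetical generateIdeas() helper from the Ideating section; simple prompt concatenation is an assumed strategy here, and real workflows might apply more structured prompt edits or textual inversion instead.

// Minimal sketch: iteratively refine generated ideas based on client
// feedback, reusing the generateIdeas() helper sketched earlier.
async function refineWithFeedback(basePrompt, feedbackRounds) {
  let prompt = basePrompt;
  let iterations = [];
  for(let feedback of feedbackRounds) {
    // Fold the latest feedback into the prompt and regenerate.
    prompt = `${prompt}. Revision note: ${feedback}`;
    iterations.push({ prompt, urls: await generateIdeas(prompt, 2) });
  }
  return iterations;
}

let iterations = await refineWithFeedback(
  "Nostalgic 'time machine' hero illustration for a design tool launch",
  [
    "use warmer colors and reduce clutter on the desk",
    "make the time machine read more clearly as a computer"
  ]
);
console.log(iterations.at(-1).urls); // latest round of candidates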

The AI Experiment

Designers
Designers gathered the latest design ideas, prioritized them by aesthetic intuition and conversion best practices, and prepared a review deck.

AI
With AI, designers ran the earlier ideas from DALL·E 2 through models trained on design and attention data. The model provided the designers with a simulated gaze path, giving them confidence in a particular idea. However, they would still like to put it through an actual usability test if it is selected. They then enlisted ChatGPT to generate a script to sell the idea. With the feedback received, they updated the prompt for DALL·E 2. The designers agreed that the ability to iterate quickly and almost effortlessly felt productive.

Implementing

Implementing calls for the designer to complete designs, author code, and compile both into a functioning website.

  • Completes designs: Theoretical capacity
    Creative directions should be fleshed out based on the aligned decision. Today’s AI models are capable of completing designs based on textual or pictorial input. These models use machine learning techniques to identify connections between input prompts and outputs and, based on a given input instance, interpolate a completed design output. In research, there are already models that can return medium-fidelity mockups by detecting and brushing up UI elements in low-fidelity sketches. In deployment, OpenAI’s Outpainting feature allows extensions of original designs, producing stunning results such as extensions of the scene in Johannes Vermeer’s Girl with a Pearl Earring. The ability to automatically generate web page designs based on the style of a specific section from a design proposal isn’t too far-fetched, given the demonstrated capabilities of current models.
  • Authors code: Theoretical capacity
    HTML, CSS, and JavaScript should be produced to realize the design. Today’s AI models have shown early capabilities to produce code from functionality descriptions (a sketch of such a request follows this list). This capability is made possible because these models have been trained on large amounts of data demonstrating the relationship between descriptions of functionality and the code that implements them. By learning from this data, AI models are able to generate code that implements the desired functionality. Models in use today include the assistive feature in Microsoft’s Power Apps software, which turns natural language into ready-to-use code for querying data. At GitHub Next, which researches emerging technologies in software development, the VP predicts that in the next couple of years, coders “will just be sketching the architectural design, (where) you’ll describe the functionality, and the AI will fill in the details.” Although output from today’s models still requires human review, the implementation of feedback loops is expected to lead to continual improvements in quality.
  • Compiles design and code: Theoretical capacity
    For compilation, design and code need to be aligned to complete the chosen idea. As AI models possess the above-mentioned design and coding capabilities, automatic generation and alignment may not be too far-fetched. In a recent interview, engineers at OpenAI demonstrated technologies that let anyone produce simple apps just by describing what they want, such as “Make me a personal website with PayPal embedded for payments.” This “Gutenbergian” future, in which anyone with an idea can bring it to fruition, may be closer than we think.
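
To ground the code-authoring claim referenced in the list above, here is a minimal sketch that asks a chat model to turn a plain-language functionality description into markup. The system prompt and model name are illustrative assumptions, and, as noted, the output still needs human review before deployment.

// Minimal sketch: turn a functionality description into HTML/CSS via a
// chat model. Same runtime and API key assumptions as earlier sketches.
async function generateCode(description) {
  let response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // illustrative model choice
      messages: [
        { role: "system",
          content: "You produce minimal, valid HTML with inline CSS. " +
            "Reply with code only." },
        { role: "user", content: description }
      ]
    })
  });
  let data = await response.json();
  return data.choices[0].message.content;
}

let markup = await generateCode(
  "A landing page hero with a headline, a short tagline, and a " +
  "prominent sign-up button, centered on the page"
);
console.log(markup); // review carefully; never deploy as-is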

The AI Experiment

Designers
Designers fine-tuned the designs, handed them off to developers, and went through two rounds of reviews.

AI
With AI, designers called in developers and worked together to try the code-generation services available today. Both designers and developers were surprised that complete, syntactically valid code was generated, and they agreed the experience felt futuristic. However, they were not comfortable deploying the code as-is and would like to further explore its compatibility with their existing codebase.

A Glimpse Of The Future

The advent of technology in the realm of design is a well-known phenomenon, and designers have long been at the forefront of leveraging its potential to innovate and push the boundaries of their craft. Just as the rise of the printing press in the late 15th century encouraged scribes, textile machines in the 19th century encouraged artisans, and photo-editing software more recently encouraged darkroom artists to shift their creative focus, it is not far-fetched to expect a similar shift triggered by AI in the 21st century.

As we consider AI’s potential to take on various tasks throughout the web design process, it is evident that the later stages of the process are particularly susceptible to automation. Accordingly, productive designers will shift their creative focus to the earlier stages in order to differentiate themselves from automatable tasks.

Day-to-day activities will move from pixel-pushing and software operation to strategizing and forming intents with clients.

The future of creativity is heading upstream.

We don’t expect this creative shift to happen overnight but gradually, in three waves. While the AI models outlined above have demonstrated capabilities related to web design, they will need to be trained with additional industry data before they can be trusted in deployment. A greater quantity of training data will help the models address the field’s most abstract and generalized problems with higher accuracy. Considering abstraction and scope, we will expand on the automation of the web design process by forecasting its effects on a time scale, which we hope will help practitioners manage the approaching future.

Wave 1: Design Copilot

This wave refers to the ability of AI models to assist designers with tasks that were once manual and time-consuming. These tasks will mainly be of low abstraction within a narrow scope. This specificity requires less training data, and the controlled output domain will allow AI models to meet expectations consistently. We are currently at the onset, with technology previews from Adobe and upstarts like us. Plausible examples in the future may include tools helping designers automatically adapt one design for different screen sizes, implement suggested animations to make designs interactive, or complete technically complex format adjustments with descriptive prompts.

Wave 2: Generation and Management

The next wave refers to the ability of AI models to generate semi-completed web designs based on prompts, as well as assist in client relationship management. Generation tasks cover the output of the Ideating and Implementing stages, which involve higher abstraction within a narrow scope. While existing models like ChatGPT and DALL·E 2 are already capable of generating design suggestions and outputs as images, additional pattern-learning with web design-specific datasets will be required to improve variation and quality. Furthermore, there are still concerns that must be addressed, such as issues related to copyright and ethics.

On the other hand, Management tasks cover the Defining and Socializing stages, which involve lower abstraction despite a wider scope. While use cases in adjacent industries have proven successful, implementation in everyday account management will require further oversight. For example, the ability to strike a balance between persuasion and tactful communication during the process will need additional monitoring.

Wave 3: Automation

The third wave refers to end-to-end automation of the web design process, including support for strategy and intent development in the Forming stage. There have been attempts at a leapfrog, including AI modules in website builders. However, it will take additional time and effort to overcome the hurdles mentioned earlier, particularly the ability to incorporate uniquely human perspectives, such as empathy, before AI can fully replace a designer’s contribution.

Your Next Step

As AI enters the world of design, it opens up a whole new realm of possibilities. Applications such as generative models are already demonstrating some theoretical capabilities and even practical applications across multiple stages of web design.

While AI still lacks uniquely human capabilities, such as inquisitiveness and empathy, opportunities abound for designers to collaborate with technology to unlock new levels of creativity. Like a brush stroke on an empty canvas, designers and AI have the potential to create something truly extraordinary. Together, they will paint a brighter future for the world of design.

Interested in leveraging AI in your web designs? Sign up today!