Caching Data in SvelteKit

My previous post was a broad overview of SvelteKit where we saw what a great tool it is for web development. This post will fork off what we did there and dive into every developer’s favorite topic: caching. So, be sure to give my last post a read if you haven’t already. The code for this post is available on GitHub, as well as a live demo.

This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.

This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query; but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.

Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, it adds a fair bit of complexity to your application code. Hopefully your data store is fast and your UI is fine letting SvelteKit always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.

Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.

Setting up

Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.

First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.

import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  const todos = await getTodos(search);

  return json(todos);
}

Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.

export async function load({ fetch, url, setHeaders }) {
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}`);

  const todos = await resp.json();

  return {
    todos,
  };
}

Now let’s add a simple form to our /list page:

<div class="search-form">
  <form action="/list">
    <label>Search</label>
    <input autofocus name="search" />
  </form>
</div>

Yep, forms can target directly to our normal page loaders. Now we can add a search term in the search box, hit Enter, and a “search” term will be appended to the URL’s query string, which will re-run our loader and search our to-do items.

Search form

Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.

export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 500));

Remember, the full code for this post is all on GitHub, if you need to reference it.

Basic caching

Let’s get started by adding some caching to our /api/todos endpoint. We’ll go back to our +server.js file and add our first cache-control header.

setHeaders({
  "cache-control": "max-age=60",
});

…which will leave the whole function looking like this:

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  setHeaders({
    "cache-control": "max-age=60",
  });

  const todos = await getTodos(search);

  return json(todos);
}

We’ll look at manual invalidation shortly, but all this function says is to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
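If you do want cached responses to keep being served while a fresh copy is fetched in the background, the header could instead look something like this (the numbers are purely illustrative):

setHeaders({
  // cache for a minute, then serve stale responses for up to five more
  // minutes while a fresh copy is fetched in the background
  "cache-control": "max-age=60, stale-while-revalidate=300",
});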

And just like that, our queries are caching.

Cache in DevTools.

Note: make sure you uncheck the “Disable cache” checkbox in DevTools.

Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.

What is cached, and where

Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What’s more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).

After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos endpoint. As you search for things you’ve already searched for (within the last 60 seconds), the responses should load immediately since they’re cached.

What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).

Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.

Cache invalidation

Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a cache-busting value to the URL we send to our new /todos endpoint.

Let’s store this cache busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.

We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.

export function load({ cookies, isDataRequest }) {
  const initialRequest = !isDataRequest;

  const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");

  if (initialRequest) {
    cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
  }

  return {
    todosCacheBust: cacheValue,
  };
}

You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.

The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.

Reading cache values

Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:

export function getCookieLookup() {
  if (typeof document !== "object") {
    return {};
  }

  return document.cookie.split("; ").reduce((lookup, v) => {
    const parts = v.split("=");
    lookup[parts[0]] = parts[1];

    return lookup;
  }, {});
}

const getCurrentCookieValue = name => {
  const cookies = getCookieLookup();
  return cookies[name] ?? "";
};

Fortunately, we only need it once.

Sending out the cache value

But now we need to send this value to our /todos endpoint.

import { getCurrentCookieValue } from "$lib/util/cookieUtils";

export async function load({ fetch, parent, url, setHeaders }) {
  const parentData = await parent();

  const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
  const todos = await resp.json();

  return {
    todos,
  };
}

getCurrentCookieValue('todos-cache') checks whether we’re in the browser (by checking the type of document) and returns nothing if we’re not. In that case we know we’re on the server, so we fall back to the value from our layout data via parent().

Busting the cache

But how do we actually update that cache busting value when we need to? Since it’s stored in a cookie, we can call it like this from any server action:

cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });

The implementation

It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.

For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each to-do:

import { enhance } from "$app/forms";
<tr>
  <td colspan="4">
    <form use:enhance method="post" action="?/editTodo">
      <input name="id" value="{t.id}" type="hidden" />
      <input name="title" value="{t.title}" />
      <button>Save</button>
    </form>
  </td>
</tr>

And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)

import { getTodo, updateTodo, wait } from "$lib/data/todoData";

export const actions = {
  async editTodo({ request, cookies }) {
    const formData = await request.formData();

    const id = formData.get("id");
    const newTitle = formData.get("title");

    await wait(250);
    updateTodo(id, newTitle);

    cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
  },
};

We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.

Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look in the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and works by default.

Saving data

Immediate updates

What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?

This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!

First, let’s make one little change to our loader. Instead of returning our to-do items, let’s return a writable store containing our to-dos.

return {
  todos: writable(todos),
};
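If it helps to see it all in one place, here’s roughly what the whole universal loader looks like with that change. This is a sketch; the cache-busting wiring is unchanged from before, and the only real difference is wrapping the result in a writable from svelte/store.

import { writable } from "svelte/store";

import { getCurrentCookieValue } from "$lib/util/cookieUtils";

export async function load({ fetch, parent, url }) {
  const parentData = await parent();

  const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
  const todos = await resp.json();

  return {
    // a writable store instead of a plain array, so we can update it later
    todos: writable(todos),
  };
}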

Before, we were accessing our to-dos on the data prop, which we do not own and cannot update. But Svelte lets us return our data in their own store (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list page.

Instead of this:

{#each todos as t}

…we need to do this, since todos is itself now a store:

{#each $todos as t}

Now our data loads as before. But since todos is a writable store, we can update it.

First, let’s provide a function to our use:enhance attribute:

<form
  use:enhance={executeSave}
  method="post"
  action="?/editTodo"
>

This will run before a submit. Let’s write that next:

function executeSave({ data }) {
  const id = data.get("id");
  const title = data.get("title");

  return async () => {
    todos.update(list =>
      list.map(todo => {
        if (todo.id == id) {
          return Object.assign({}, todo, { title });
        } else {
          return todo;
        }
      })
    );
  };
}

This function provides a data object with our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)
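As an aside, if you ever do want that default behavior back from inside a callback like this, SvelteKit passes the post-submit function an update argument (not to be confused with our store’s update method) that runs its normal post-submit logic. A rough sketch, with a made-up function name:

function saveWithDefaults({ data }) {
  // any pre-submit work goes here

  return async ({ update }) => {
    // re-runs SvelteKit's default behavior: invalidating data and re-running loaders
    await update();
  };
}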

We now call update on our todos array since it’s a store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any to-do items that no longer match the search after being edited.

The code for the immediate updates is available at GitHub.

Digging deeper

We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed for manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function without any check against isDataRequest. It’ll be set initially, and then anytime you run a server action, that page layout will automatically invalidate and re-call your loader, re-setting the cache-bust string before your universal loader is called.
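For instance, a hypothetical page-level +page.server.js load that always refreshes the cache-busting cookie might look like this sketch:

export function load({ cookies }) {
  // no isDataRequest check: every run of this server load, including the
  // automatic re-run after a form action, resets the cache-busting value
  const cacheValue = +new Date();

  cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });

  return {
    todosCacheBust: cacheValue,
  };
}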

Writing a reload function

Let’s wrap up by building one last feature: a reload button. Let’s give users a button that will clear the cache and then reload the current query.

We’ll add a dirt-simple form action:

async reloadTodos({ cookies }) {
  cookies.set('todos-cache', +new Date(), { path: '/', httpOnly: false });
},

In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.
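If you did want to cut the repetition, a tiny shared helper would do the trick; something like this hypothetical module in $lib/util:

export function bustTodosCache(cookies) {
  // the one place that knows the cookie name and its options
  cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
}

Each form action could then just call bustTodosCache(cookies).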

Now let’s create a form to post to it:

<form method="POST" action="?/reloadTodos" use:enhance>
  <button>Reload todos</button>
</form>

That works!

UI after reload.

We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.

So, let’s focus things a bit, and only reload our to-dos when we call this function.

First, let’s pass a function to enhance:

<form method="POST" action="?/reloadTodos" use:enhance={reloadTodos}>
import { enhance } from "$app/forms";
import { invalidate } from "$app/navigation";

let reloading = false;
const reloadTodos = () => {
  reloading = true;

  return async () => {
    invalidate("reload:todos").then(() => {
      reloading = false;
    });
  };
};

We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie).

Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.

Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:

export async function load({ fetch, url, setHeaders, depends }) {
  depends("reload:todos");

  // the rest is the same as before
}

And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t only accept arbitrary strings like the one we provided. We can also provide a URL, which SvelteKit will track, invalidating any loaders that depend on that URL. So if you’re wondering whether we could skip the call to depends and invalidate our /api/todos endpoint directly, you can, but you have to provide the exact URL, including the search term (and our cache value). You could either put together the URL for the current search, or match on the path name, like this:

invalidate(url => url.pathname == "/api/todos");

Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.

If you’d like to see the reload button in action, the code for it is in this branch of the repo.

Parting thoughts

This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to set the correct cache and cookie values, knowledge that will serve you in web development in general, beyond just SvelteKit.

Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for these sorts of advanced features when you actually need them. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with the needless complexity of the things we talked about here.

As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!



Getting Started With SvelteKit

SvelteKit is the latest of what I’d call next-gen application frameworks. It, of course, scaffolds an application for you, with the file-based routing, deployment, and server-side rendering that Next has done forever. But SvelteKit also supports nested layouts, server mutations that sync up the data on your page, and some other niceties we’ll get into.

This post is meant to be a high-level introduction to hopefully build some excitement for anyone who’s never used SvelteKit. It’ll be a relaxed tour. If you like what you see, the full docs are here.

In some ways this is a challenging post to write. SvelteKit is an application framework. It exists to help you build… well, applications. That makes it hard to demo. It’s not feasible to build an entire application in a blog post. So instead, we’ll use our imaginations a bit. We’ll build the skeleton of an application, have some empty UI placeholders, and hard-coded static data. The goal isn’t to build an actual application, but instead to show you how SvelteKit’s moving pieces work so you can build an application of your own.

To that end, we’ll build the tried and true To-Do application as an example. But don’t worry, this will be much, much more about seeing how SvelteKit works than creating yet another To-Do app.

The code for everything in this post is available at GitHub. This project is also deployed on Vercel for a live demo.

Creating your project

Spinning up a new SvelteKit project is simple enough. Run npm create svelte@latest your-app-name in the terminal and answer the question prompts. Be sure to pick “Skeleton Project” but otherwise make whatever selections you want for TypeScript, ESLint, etc.

Once the project is created, run npm i and npm run dev and a dev server should start running. Fire up localhost:5173 in the browser and you’ll get the placeholder page for the skeleton app.

Basic routing

Notice the routes folder under src. That holds code for all of our routes. There’s already a +page.svelte file in there with content for the root / route. No matter where in the file hierarchy you are, the actual page for that path always has the name +page.svelte. With that in mind, let’s create pages for /list, /details, /admin/user-settings and /admin/paid-status, and also add some text placeholders for each page.

Your file layout should look something like this:

Initial files.
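If the screenshot isn’t handy, the routes folder described above looks roughly like this:

src/routes/
  +page.svelte
  list/
    +page.svelte
  details/
    +page.svelte
  admin/
    user-settings/
      +page.svelte
    paid-status/
      +page.svelte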

You should be able to navigate around by changing URL paths in the browser address bar.

Browser address bar with localhost URL.

Layouts

We’ll want navigation links in our app, but we certainly don’t want to copy the markup for them on each page we create. So, let’s create a +layout.svelte file in the root of our routes folder, which SvelteKit will treat as a global template for all pages, and add some content to it:

<nav>
  <ul>
    <li>
      <a href="/">Home</a>
    </li>
    <li>
      <a href="/list">To-Do list</a>
    </li>
    <li>
      <a href="/admin/paid-status">Account status</a>
    </li>
    <li>
      <a href="/admin/user-settings">User settings</a>
    </li>
  </ul>
</nav>

<slot />

<style>
  nav {
    background-color: beige;
  }
  nav ul {
    display: flex;
  }
  li {
    list-style: none;
    margin: 15px;
  }
  a {
    text-decoration: none;
    color: black;
  }
</style>

Some rudimentary navigation with some basic styles. Of particular importance is the <slot /> tag. This is not the slot you use with web components and shadow DOM, but rather a Svelte feature indicating where to put our content. When a page renders, the page content will slide in where the slot is.

And now we have some navigation! We won’t win any design competitions, but we’re not trying to.

Horizontal navigation with light yellow background.

Nested layouts

What if we wanted all our admin pages to inherit the normal layout we just built but also share some things common to all admin pages (but only admin pages)? No problem, we add another +layout.svelte file in our root admin directory, which will be inherited by everything underneath it. Let’s do that and add this content:

<div>This is an admin page</div>

<slot />

<style>
  div {
    padding: 15px;
    margin: 10px 0;
    background-color: red;
    color: white;
  }
</style>

We add a red banner indicating this is an admin page and then, like before, a <slot /> denoting where we want our page content to go.

Our root layout from before renders. Inside of the root layout is a <slot /> tag. The nested layout’s content goes into the root layout’s <slot />. And finally, the nested layout defines its own <slot />, into which the page content renders.

If you navigate to the admin pages, you should see the new red banner:

Red box beneath navigation that says this is an admin page.

Defining our data

OK, let’s render some actual data — or at least, see how we can render some actual data. There are a hundred ways to create and connect to a database. This post is about SvelteKit though, not managing DynamoDB, so we’ll “load” some static data instead. But, we’ll use all the same machinery to read and update it that you’d use for real data. For a real web app, swap out the functions returning static data with functions connecting to and querying whatever database you happen to use.

Let’s create a dirt-simple module in lib/data/todoData.ts that returns some static data along with artificial delays to simulate real queries. You’ll see this lib folder imported elsewhere via $lib. This is a SvelteKit feature for that particular folder, and you can even add your own aliases.
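(As a quick aside, those custom aliases live in svelte.config.js under kit.alias. Here’s a minimal sketch, where $data is a made-up alias and the rest of the config is omitted:)

const config = {
  kit: {
    alias: {
      // hypothetical alias pointing at a folder of our choosing
      $data: "src/data",
    },
  },
};

export default config;

With that noted, here’s the data module: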

let todos = [
  { id: 1, title: "Write SvelteKit intro blog post", assigned: "Adam", tags: [1] },
  { id: 2, title: "Write SvelteKit advanced data loading blog post", assigned: "Adam", tags: [1] },
  { id: 3, title: "Prepare RenderATL talk", assigned: "Adam", tags: [2] },
  { id: 4, title: "Fix all SvelteKit bugs", assigned: "Rich", tags: [3] },
  { id: 5, title: "Edit Adam's blog posts", assigned: "Geoff", tags: [4] },
];

let tags = [
  { id: 1, name: "SvelteKit Content", color: "ded" },
  { id: 2, name: "Conferences", color: "purple" },
  { id: 3, name: "SvelteKit Development", color: "pink" },
  { id: 4, name: "CSS-Tricks Admin", color: "blue" },
];

export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 100));

export async function getTodos() {
  await wait();

  return todos;
}

export async function getTags() {
  await wait();

  return tags.reduce((lookup, tag) => {
    lookup[tag.id] = tag;
    return lookup;
  }, {});
}

export async function getTodo(id) {
  return todos.find(t => t.id == id);
}

This gives us a function that returns a flat array of our to-do items, a function that returns a lookup of our tags, and a function that fetches a single to-do (we’ll use that last one in our Details page).

Loading our data

How do we get that data into our Svelte pages? There’s a number of ways, but for now, let’s create a +page.server.js file in our list folder, and put this content in it:

import { getTodos, getTags } from "$lib/data/todoData";

export function load() {
  const todos = getTodos();
  const tags = getTags();

  return {
    todos,
    tags,
  };
}

We’ve defined a load() function that pulls in the data needed for the page. Notice that we are not await-ing calls to our getTodos and getTags async functions. Doing so would create a data loading waterfall as we wait for our to-do items to come in before loading our tags. Instead, we return the raw promises from load, and SvelteKit does the necessary work to await them.

So, how do we access this data from our page component? SvelteKit provides a data prop for our component with data on it. We’ll access our to-do items and tags from it using a reactive assignment.

Our List page component now looks like this.

<script>
  export let data;
  $: ({ todos, tags } = data);
</script>

<table cellspacing="10" cellpadding="10">
  <thead>
    <tr>
      <th>Task</th>
      <th>Tags</th>
      <th>Assigned</th>
    </tr>
  </thead>
  <tbody>
    {#each todos as t}
    <tr>
      <td>{t.title}</td>
      <td>{t.tags.map((id) => tags[id].name).join(', ')}</td>
      <td>{t.assigned}</td>
    </tr>
    {/each}
  </tbody>
</table>

<style>
  th {
    text-align: left;
  }
</style>

And this should render our to-do items!

Five to-do items in a table format.

Layout groups

Before we move on to the Details page and mutate data, let’s take a peek at a really neat SvelteKit feature: layout groups. We’ve already seen nested layouts for all admin pages, but what if we wanted to share a layout between arbitrary pages at the same level of our file system? In particular, what if we wanted to share a layout between only our List page and our Details page? We already have a global layout at that level. Instead, we can create a new directory, but with a name that’s in parentheses, like this:

File directory.

We now have a layout group that covers our List and Details pages. I named it (todo-management) but you can name it anything you like. To be clear, this name will not affect the URLs of the pages inside of the layout group. The URLs will remain the same; layout groups let you share a layout between pages without forcing those pages into a common URL segment under routes.

We could add a +layout.svelte file and some silly <div> banner saying, “Hey we’re managing to-dos”. But let’s do something more interesting. Layouts can define load() functions in order to provide data for all routes underneath them. Let’s use this functionality to load our tags — since we’ll be using our tags in our details page — in addition to the list page we already have.

In reality, forcing a layout group just to provide a single piece of data is almost certainly not worth it; it’s better to duplicate that data in the load() function for each page. But for this post, it’ll provide the excuse we need to see a new SvelteKit feature!

First, let’s go into our list page’s +page.server.js file and remove the tags from it.

import { getTodos, getTags } from "$lib/data/todoData";

export function load() {
  const todos = getTodos();

  return {
    todos,
  };
}

Our List page should now produce an error since there is no tags object. Let’s fix this by adding a +layout.server.js file in our layout group, then define a load() function that loads our tags.

import { getTags } from "$lib/data/todoData";

export function load() {
  const tags = getTags();

  return {
    tags,
  };
}

And, just like that, our List page is rendering again!

We’re loading data from multiple locations

Let’s put a fine point on what’s happening here:

  • We defined a load() function for our layout group, which we put in +layout.server.js.
  • This provides data for all of the pages the layout serves — which in this case means our List and Details pages.
  • Our List page also defines a load() function that goes in its +page.server.js file.
  • SvelteKit does the grunt work of taking the results of these data sources, merging them together, and making both available in data.

Our Details page

We’ll use our Details page to edit a to-do item. First, let’s add a column to the table in our List page that links to the Details page with the to-do item’s ID in the query string.

<td><a href="/details?id={t.id}">Edit</a></td>

Now let’s build out our Details page. First, we’ll add a loader to grab the to-do item we’re editing. Create a +page.server.js in /details, with this content:

import { getTodo, updateTodo, wait } from "$lib/data/todoData";

export function load({ url }) {
  const id = url.searchParams.get("id");

  console.log(id);
  const todo = getTodo(id);

  return {
    todo,
  };
}

Our loader comes with a url property from which we can pull query string values. This makes it easy to look up the to-do item we’re editing. Let’s render that to-do, along with functionality to edit it.

SvelteKit has wonderful built-in mutation capabilities, so long as you use forms. Remember forms? Here’s our Details page. I’ve elided the styles for brevity.

<script>
  import { enhance } from "$app/forms";

  export let data;

  $: ({ todo, tags } = data);
  $: currentTags = todo.tags.map(id => tags[id]);
</script>

<form use:enhance method="post" action="?/editTodo">
  <input name="id" type="hidden" value="{todo.id}" />
  <input name="title" value="{todo.title}" />

  <div>
    {#each currentTags as tag}
    <span style={`color: ${tag.color};`}>{tag.name}</span>
    {/each}
  </div>

  <button>Save</button>
</form>

We’re grabbing the tags as before from our layout group’s loader and the to-do item from our page’s loader. We’re grabbing the actual tag objects from the to-do’s list of tag IDs and then rendering everything. We create a form with a hidden input for the ID and a real input for the title. We display the tags and then provide a button to submit the form.

If you noticed the use:enhance, that simply tells SvelteKit to use progressive enhancement and Ajax to submit our form. You’ll likely always use that.

How do we save our edits?

Notice the action="?/editTodo" attribute on the form itself? This tells us where we want to submit our edited data. For our case, we want to submit to an editTodo “action.”

Let’s create it by adding the following to the +page.server.js file we already have for Details (which currently has a load() function, to grab our to-do):

import { redirect } from "@sveltejs/kit";

// ...

export const actions = {
  async editTodo({ request }) {
    const formData = await request.formData();

    const id = formData.get("id");
    const newTitle = formData.get("title");

    await wait(250);
    updateTodo(id, newTitle);

    throw redirect(303, "/list");
  },
};

Form actions give us a request object, which provides access to our formData, which has a get method for our various form fields. We added that hidden input for the ID value so we could grab it here in order to look up the to-do item we’re editing. We simulate a delay, call a new updateTodo() method, then redirect the user back to the /list page. The updateTodo() method merely updates our static data; in real life you’d run some sort of update in whatever datastore you’re using.

export async function updateTodo(id, newTitle) {
  const todo = todos.find(t => t.id == id);
  Object.assign(todo, { title: newTitle });
}

Let’s try it out. We’ll go to the List page first:

List page with to-do-items.

Now let’s click the Edit button for one of the to-do items to bring up the editing page in /details.

Details page for a to-do item.

We’re going to add a new title:

Changing the to-do title in an editable text input.

Now, click Save. That should get us back to our /list page, with the new to-do title applied.

The edited to-do item in the full list view.

How did the new title show up like that? It was automatic. Once we redirected to the /list page, SvelteKit automatically re-ran all of our loaders just like it would have done regardless. This is the key advancement that next-gen application frameworks, like SvelteKit, Remix, and Next 13 provide. Rather than giving you a convenient way to render pages then wishing you the best of luck fetching whatever endpoints you might have to update data, they integrate data mutation alongside data loading, allowing the two to work in tandem.

A few things you might be wondering…

This mutation update doesn’t seem too impressive. The loaders will re-run whenever you navigate. What if we hadn’t added a redirect in our form action, but stayed on the current page? SvelteKit would perform the update in the form action, like before, but would still re-run all of the loaders for the current page, including the loaders in the page layout(s).

Can we have more targeted means of invalidating our data? For example, our tags were not edited, so in real life we wouldn’t want to re-query them. Yes, what I showed you is just the default forms behavior in SvelteKit. You can turn the default behavior off by providing a callback to use:enhance. Then SvelteKit provides manual invalidation functions.

Loading data on every navigation is potentially expensive, and unnecessary. Can I cache this data like I do with tools like react-query? Yes, just differently. SvelteKit lets you set (and then respect) the cache-control headers the web already provides. And I’ll be covering cache invalidation mechanisms in a follow-on post.

Everything we’ve done throughout this article uses static data and modifies values in memory. If you need to revert everything and start over, stop and restart the npm run dev Node process.

Wrapping up

We’ve barely scratched the surface of SvelteKit, but hopefully you’ve seen enough to get excited about it. I can’t remember the last time I’ve found web development this much fun. With things like bundling, routing, SSR, and deployment all handled out of the box, I get to spend more time coding than configuring.

Here are a few more resources you can use as next steps learning SvelteKit:



Using Web Components With Next (or Any SSR Framework)

In my previous post we looked at Shoelace, which is a component library with a full suite of UX components that are beautiful, accessible, and — perhaps unexpectedly — built with Web Components. This means they can be used with any JavaScript framework. While React’s Web Component interoperability is, at present, less than ideal, there are workarounds.

But one serious shortcoming of Web Components is their current lack of support for server-side rendering (SSR). There is something called the Declarative Shadow DOM (DSD) in the works, but current support for it is pretty minimal, and it actually requires buy-in from your web server to emit special markup for the DSD. There’s currently work being done for Next.js that I look forward to seeing. But for this post, we’ll look at how to manage Web Components from any SSR framework, like Next.js, today.

We’ll wind up doing a non-trivial amount of manual work, and slightly hurting our page’s startup performance in the process. We’ll then look at how to minimize these performance costs. But make no mistake: this solution is not without tradeoffs, so don’t expect otherwise. Always measure and profile.

The problem

Before we dive in, let’s take a moment and actually explain the problem. Why don’t Web Components work well with server-side rendering?

Application frameworks like Next.js take React code and run it through an API to essentially “stringify” it, meaning it turns your components into plain HTML. So the React component tree will render on the server hosting the web app, and that HTML will be sent down with the rest of the web app’s HTML document to your user’s browser. Along with this HTML are some <script> tags that load React, along with the code for all your React components. When a browser processes these <script> tags, React will re-render the component tree, and match things up with the SSR’d HTML that was sent down. At this point, all of the effects will start running, the event handlers will wire up, and the state will actually… contain state. It’s at this point that the web app becomes interactive. The process of re-processing your component tree on the client, and wiring everything up is called hydration.

So, what does this have to do with Web Components? Well, when you render something, say the same Shoelace <sl-tab-group> component we visited last time:

<sl-tab-group ref={tabsRef}>
  <sl-tab slot="nav" panel="general"> General </sl-tab>
  <sl-tab slot="nav" panel="custom"> Custom </sl-tab>
  <sl-tab slot="nav" panel="advanced"> Advanced </sl-tab>
  <sl-tab slot="nav" panel="disabled" disabled> Disabled </sl-tab>

  <sl-tab-panel name="general">This is the general tab panel.</sl-tab-panel>
  <sl-tab-panel name="custom">This is the custom tab panel.</sl-tab-panel>
  <sl-tab-panel name="advanced">This is the advanced tab panel.</sl-tab-panel>
  <sl-tab-panel name="disabled">This is a disabled tab panel.</sl-tab-panel>
</sl-tab-group>

…React (or honestly any JavaScript framework) will see those tags and simply pass them along. React (or Svelte, or Solid) is not responsible for turning those tags into nicely-formatted tabs. The code for that is tucked away inside of whatever code you have that defines those Web Components. In our case, that code is in the Shoelace library, but the code can be anywhere. What’s important is when the code runs.

Normally, the code registering these Web Components will be pulled into your application’s normal code via a JavaScript import. That means this code will wind up in your JavaScript bundle and execute during hydration which means that, between your user first seeing the SSR’d HTML and hydration happening, these tabs (or any Web Component for that matter) will not render the correct content. Then, when hydration happens, the proper content will display, likely causing the content around these Web Components to move around and fit the properly formatted content. This is known as a flash of unstyled content, or FOUC. In theory, you could stick markup in between all of those <sl-tab-xyz> tags to match the finished output, but this is all but impossible in practice, especially for a third-party component library like Shoelace.

Moving our Web Component registration code

So the problem is that the code to make Web Components do what they need to do won’t actually run until hydration occurs. For this post, we’ll look at running that code sooner; immediately, in fact. We’ll look at custom bundling our Web Component code, and manually adding a script directly to our document’s <head> so it runs immediately, blocking the rest of the document until it’s done. This is normally a terrible thing to do. The whole point of server-side rendering is to not block our page from processing until our JavaScript has loaded. But once done, it means that, as the document initially renders our HTML from the server, the Web Components will already be registered and will immediately and synchronously emit the right content.

In our case, we’re just looking to run our Web Component registration code in a blocking script. This code isn’t huge, and we’ll look to significantly lessen the performance hit by adding some cache headers to help with subsequent visits. This isn’t a perfect solution. The first time a user browses to your page, rendering will always block while that script file is loaded. Subsequent visits will cache nicely, but this tradeoff might not be feasible for you — e-commerce, anyone? Anyway, profile, measure, and make the right decision for your app. Besides, in the future it’s entirely possible Next.js will fully support DSD and Web Components.

Getting started

All of the code we’ll be looking at is in this GitHub repo and deployed here with Vercel. The web app renders some Shoelace components along with text that changes color and content upon hydration. You should be able to see the text change to “Hydrated,” with the Shoelace components already rendering properly.

Custom bundling Web Component code

Our first step is to create a single JavaScript module that imports all of our Web Component definitions. For the Shoelace components I’m using, my code looks like this:

import { setDefaultAnimation } from "@shoelace-style/shoelace/dist/utilities/animation-registry";

import "@shoelace-style/shoelace/dist/components/tab/tab.js";
import "@shoelace-style/shoelace/dist/components/tab-panel/tab-panel.js";
import "@shoelace-style/shoelace/dist/components/tab-group/tab-group.js";

import "@shoelace-style/shoelace/dist/components/dialog/dialog.js";

setDefaultAnimation("dialog.show", {
  keyframes: [
    { opacity: 0, transform: "translate3d(0px, -20px, 0px)" },
    { opacity: 1, transform: "translate3d(0px, 0px, 0px)" },
  ],
  options: { duration: 250, easing: "cubic-bezier(0.785, 0.135, 0.150, 0.860)" },
});
setDefaultAnimation("dialog.hide", {
  keyframes: [
    { opacity: 1, transform: "translate3d(0px, 0px, 0px)" },
    { opacity: 0, transform: "translate3d(0px, 20px, 0px)" },
  ],
  options: { duration: 250, easing: "cubic-bezier(0.785, 0.135, 0.150, 0.860)" },
});

It loads the definitions for the <sl-tab-group> and <sl-dialog> components, and overrides some default animations for the dialog. Simple enough. But the interesting piece here is getting this code into our application. We cannot simply import this module. If we did that, it’d get bundled into our normal JavaScript bundles and run during hydration. This would cause the FOUC we’re trying to avoid.

While Next.js does have a number of webpack hooks to custom bundle things, I’ll use Vite instead. First, install it with npm i vite and then create a vite.config.js file. Mine looks like this:

import { defineConfig } from "vite";
import path from "path";

export default defineConfig({
  build: {
    outDir: path.join(__dirname, "./shoelace-dir"),
    lib: {
      name: "shoelace",
      entry: "./src/shoelace-bundle.js",
      formats: ["umd"],
      fileName: () => "shoelace-bundle.js",
    },
    rollupOptions: {
      output: {
        entryFileNames: `[name]-[hash].js`,
      },
    },
  },
});

This will build a bundle file with our Web Component definitions in the shoelace-dir folder. Let’s move it over to the public folder so that Next.js will serve it. And we should also keep track of the exact name of the file, with the hash on the end of it. Here’s a Node script that moves the file and writes a JavaScript module that exports a simple constant with the name of the bundle file (this will come in handy shortly):

const fs = require("fs");
const path = require("path");

const shoelaceOutputPath = path.join(process.cwd(), "shoelace-dir");
const publicShoelacePath = path.join(process.cwd(), "public", "shoelace");

const files = fs.readdirSync(shoelaceOutputPath);

const shoelaceBundleFile = files.find(name => /^shoelace-bundle/.test(name));

fs.rmSync(publicShoelacePath, { force: true, recursive: true });

fs.mkdirSync(publicShoelacePath, { recursive: true });
fs.renameSync(path.join(shoelaceOutputPath, shoelaceBundleFile), path.join(publicShoelacePath, shoelaceBundleFile));
fs.rmSync(shoelaceOutputPath, { force: true, recursive: true });

fs.writeFileSync(path.join(process.cwd(), "util", "shoelace-bundle-info.js"), `export const shoelacePath = "/shoelace/${shoelaceBundleFile}";`);

Here’s a companion npm script:

"bundle-shoelace": "vite build && node util/process-shoelace-bundle",

That should work. For me, util/shoelace-bundle-info.js now exists, and looks like this:

export const shoelacePath = "/shoelace/shoelace-bundle-a6f19317.js";

Loading the script

Let’s go into the Next.js _document.js file and pull in the name of our Web Component bundle file:

import { shoelacePath } from "../util/shoelace-bundle-info";

Then we manually render a <script> tag in the <head>. Here’s what my entire _document.js file looks like:

import { Html, Head, Main, NextScript } from "next/document";
import { shoelacePath } from "../util/shoelace-bundle-info";

export default function Document() {
  return (
    <Html>
      <Head>
        <script src={shoelacePath}></script>
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
  );
}

And that should work! Our Shoelace registration will load in a blocking script and be available immediately as our page processes the initial HTML.

Improving performance

We could leave things as they are but let’s add caching for our Shoelace bundle. We’ll tell Next.js to make these Shoelace bundles cacheable by adding the following entry to our Next.js config file:

async headers() {
  return [
    {
      source: "/shoelace/shoelace-bundle-:hash.js",
      headers: [
        {
          key: "Cache-Control",
          value: "public,max-age=31536000,immutable",
        },
      ],
    },
  ];
}

Now, on subsequent browses to our site, we see the Shoelace bundle caching nicely!

DevTools Sources panel open and showing the loaded Shoelace bundle.

If our Shoelace bundle ever changes, the file name will change (via the :hash portion from the source property above), the browser will find that it does not have that file cached, and will simply request it fresh from the network.

Wrapping up

This may have seemed like a lot of manual work, and it was. It’s unfortunate that Web Components don’t offer better out-of-the-box support for server-side rendering.

But we shouldn’t forget the benefits they provide: it’s nice being able to use quality UX components that aren’t tied to a specific framework. It’s also nice being able to experiment with brand new frameworks, like Solid, without needing to find (or hack together) some sort of tab, modal, autocomplete, or whatever component.



Introducing Shoelace, a Framework-Independent Component-Based UX Library

This is a post about Shoelace, a component library by Cory LaViska, but with a twist. It defines all your standard UX components: tabs, modals, accordions, auto-completes, and much, much more. They look beautiful out of the box, are accessible, and fully customizable. But rather than creating these components in React, or Solid, or Svelte, etc., it creates them with Web Components; this means you can use them with any framework.

Some preliminary things

Web Components are great, but there are currently a few small hitches to be aware of.

React

I said they work in any JavaScript framework, but as I’ve written before, React’s support for Web Components is currently poor. To address this, Shoelace actually created wrappers just for React.

Another option, which I personally like, is to create a thin React component that accepts the tag name of a Web Component and all of its attributes and properties, then does the dirty work of handling React’s shortcomings. I talked about this option in a previous post. I like this solution because it’s designed to be deleted. The Web Component interoperability problem is currently fixed in React’s experimental branch, so once that’s shipped, any thin Web Component-interoperable component you’re using could be searched for and removed, leaving you with direct Web Component usages, without any React wrappers.
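To make that concrete, here’s a rough sketch of the idea. The component and prop names are made up, and this is not Shoelace’s official wrapper: it renders whatever tag it’s given, then assigns rich values as properties and wires up custom events after mount.

import { useLayoutEffect, useRef } from "react";

export function WebComponentWrapper({ tag: Tag, properties = {}, events = {}, children, ...attrs }) {
  const ref = useRef(null);

  useLayoutEffect(() => {
    const el = ref.current;

    // rich values (objects, arrays, functions) are assigned as properties,
    // since React would otherwise stringify them into attributes
    Object.entries(properties).forEach(([name, value]) => {
      el[name] = value;
    });

    // wire up custom events like "sl-tab-show"
    const handlers = Object.entries(events);
    handlers.forEach(([name, handler]) => el.addEventListener(name, handler));
    return () => handlers.forEach(([name, handler]) => el.removeEventListener(name, handler));
  });

  // plain string attributes can pass straight through
  return (
    <Tag ref={ref} {...attrs}>
      {children}
    </Tag>
  );
}

Usage would look something like <WebComponentWrapper tag="sl-tab-group" events={{ "sl-tab-show": e => console.log(e) }}> wrapping the usual sl-tab children. The point is that the wrapper is generic and disposable once React ships its Web Component fixes.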

Server-Side Rendering (SSR)

Support for SSR is also poor at the time of this writing. In theory, there’s something called Declarative Shadow DOM (DSD) which would enable SSR. But browser support is minimal, and in any event, DSD actually requires server support to work right, which means Next, Remix, or whatever you happen to use on the server will need to become capable of some special handling.

That said, there are other ways to get Web Components to just work with a web app that’s SSR’d with something like Next. The short version is that the scripts registering your Web Components need to run in a blocking script before your markup is parsed. But that’s a topic for another post.

Of course, if you’re building any kind of client-rendered SPA, this is a non-issue. This is what we’ll work with in this post.

Let’s start

Since I want this post to focus on Shoelace and on its Web Component nature, I’ll be using Svelte for everything. I’ll also be using this Stackblitz project for demonstration. We’ll build this demo together, step-by-step, but feel free to open that REPL up anytime to see the end result.

I’ll show you how to use Shoelace, and more importantly, how to customize it. We’ll talk about Shadow DOMs and which styles they block from the outside world (as well as which ones they don’t). We’ll also talk about the ::part CSS selector — which may be entirely new to you — and we’ll even see how Shoelace allows us to override and customize its various animations.

If you find you like Shoelace after reading this post and want to try it in a React project, my advice is to use a wrapper like I mentioned in the introduction. This will allow you to use any of Shoelace’s components, and it can be removed altogether once React ships the Web Component fixes they already have (look for that in version 19).

Introducing Shoelace

Shoelace has fairly detailed installation instructions. At its most simple, you can dump <script> and <style> tags into your HTML doc, and that’s that. For any production app, though, you’ll probably want to selectively import only what you want, and there are instructions for that, too.

With Shoelace installed, let’s create a Svelte component to render some content, and then go through the steps to fully customize it. To pick something fairly non-trivial, I went with the tabs and a dialog (commonly referred to as a modal) components. Here’s some markup taken largely from the docs:

<sl-tab-group>
  <sl-tab slot="nav" panel="general">General</sl-tab>
  <sl-tab slot="nav" panel="custom">Custom</sl-tab>
  <sl-tab slot="nav" panel="advanced">Advanced</sl-tab>
  <sl-tab slot="nav" panel="disabled" disabled>Disabled</sl-tab>

  <sl-tab-panel name="general">This is the general tab panel.</sl-tab-panel>
  <sl-tab-panel name="custom">This is the custom tab panel.</sl-tab-panel>
  <sl-tab-panel name="advanced">This is the advanced tab panel.</sl-tab-panel>
  <sl-tab-panel name="disabled">This is a disabled tab panel.</sl-tab-panel>
</sl-tab-group>

<sl-dialog no-header label="Dialog">
  Hello World!
  <button slot="footer" variant="primary">Close</button>
</sl-dialog>

<br />
<button>Open Dialog</button>

This renders some nice, styled tabs. The underline on the active tab even animates nicely, and slides from one active tab to the next.

Four horizontal tab headings with the first active in blue with placeholder content contained in a panel below.
Default tabs in Shoelace

I won’t waste your time running through every inch of the APIs that are already well-documented on the Shoelace website. Instead, let’s look into how best to interact with, and fully customize these Web Components.

Interacting with the API: methods and events

Calling methods and subscribing to events on a Web Component might be slightly different than what you’re used to with your normal framework of choice, but it’s not too complicated. Let’s see how.

Tabs

The tabs component (<sl-tab-group>) has a show method, which manually shows a particular tab. In order to call this, we need to get access to the underlying DOM element of our tabs. In Svelte, that means using bind:this. In React, it’d be a ref. And so on. Since we’re using Svelte, let’s declare a variable for our tabs instance:

<script>
  let tabs;
</script>

…and bind it:

<sl-tab-group bind:this="{tabs}"></sl-tab-group>

Now we can add a button to call it:

<button on:click={() => tabs.show("custom")}>Show custom</button>

It’s the same idea for events. There’s a sl-tab-show event that fires when a new tab is shown. We could use addEventListener on our tabs variable, or we can use Svelte’s on:event-name shortcut.

<sl-tab-group bind:this={tabs} on:sl-tab-show={e => console.log(e)}>

That works and logs the event objects as you show different tabs.

Event object meta shown in DevTools.
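For completeness, the addEventListener route mentioned above would look roughly like this, using Svelte’s onMount (a sketch):

<script>
  import { onMount } from "svelte";

  let tabs;

  onMount(() => {
    const log = e => console.log(e);
    tabs.addEventListener("sl-tab-show", log);

    // clean up when the component is destroyed
    return () => tabs.removeEventListener("sl-tab-show", log);
  });
</script>

<sl-tab-group bind:this={tabs}>...</sl-tab-group>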

Typically we render tabs and let the user click between them, so this work isn’t usually even necessary, but it’s there if you need it. Now let’s get the dialog component interactive.

Dialog

The dialog component (<sl-dialog>) takes an open prop which controls whether the dialog is… open. Let’s declare it in our Svelte component:

<script>
  let tabs;
  let open = false;
</script>

It also has an sl-hide event for when the dialog is hidden. Let’s pass our open prop and bind to the hide event so we can reset it when the user clicks outside of the dialog content to close it. And let’s add a click handler to that close button to set our open prop to false, which would also close the dialog.

<sl-dialog no-header {open} label="Dialog" on:sl-hide={() => open = false}>
  Hello World!
  <button slot="footer" variant="primary" on:click={() => open = false}>Close</button>
</sl-dialog>

Lastly, let’s wire up our open dialog button:

<button on:click={() => (open = true)}>Open Dialog</button>

And that’s that. Interacting with a component library’s API is more or less straightforward. If that’s all this post did, it would be pretty boring.

But Shoelace — being built with Web Components — means that some things, particularly styles, will work a bit differently than we might be used to.

Customize all the styles!

As of this writing, Shoelace is still in beta and the creator is considering changing some default styles, possibly even removing some defaults altogether so they’ll no longer override your host application’s styles. The concepts we’ll cover are relevant either way, but don’t be surprised if some of the Shoelace specifics I mention are different when you go to use it.

As nice as Shoelace’s default styles are, we might have our own designs in our web app, and we’ll want our UX components to match. Let’s see how we’d go about that in a Web Components world.

We won’t try to actually improve anything. The Shoelace creator is a far better designer than I’ll ever be. Instead, we’ll just look at how to change things, so you can adapt them to your own web apps.

A quick tour of Shadow DOMs

Take a peek at one of those tab headers in your DevTools; it should look something like this:

The tabs component markup shown in DevTools.

Our tab element has created a div container with a .tab and .tab--active class, and a tabindex, while also displaying the text we entered for that tab. But notice that it’s sitting inside of a shadow root. This allows Web Component authors to add their own markup to the Web Component while also providing a place for the content we provide. Notice the <slot> element? That basically means “put whatever content the user rendered between the Web Component tags here.”

So the <sl-tab> component creates a shadow root, adds some content to it to render the nicely-styled tab header along with a placeholder (<slot>) that renders our content inside.

Encapsulated styles

One of the classic, more frustrating problems in web development has always been styles cascading to places where we don’t want them. You might worry that any style rules in our application which specify something like div.tab would interfere with these tabs. It turns out this isn’t a problem; shadow roots encapsulate styles. Styles from outside the shadow root do not affect what’s inside the shadow root (with some exceptions which we’ll talk about), and vice versa.

The exceptions to this are inheritable styles. You, of course, don’t need to apply a font-family style for every element in your web app. Instead, you can specify your font-family once, on :root or html and have it inherit everywhere beneath it. This inheritance will, in fact, pierce the shadow root as well.

CSS custom properties (often called “CSS variables”) are a related exception. A shadow root can absolutely read a CSS custom property that is defined outside the shadow root; this will become relevant in a moment.

The ::part selector

What about styles that don’t inherit? What if we want to customize something like cursor, which doesn’t inherit, on something inside of the shadow root? Are we out of luck? It turns out we’re not. Take another look at the tab element image above and its shadow root. Notice the part attribute on the div? That allows you to target and style that element from outside the shadow root using the ::part selector. We’ll walk through an example in a bit.

Overriding Shoelace styles

Let’s see each of these approaches in action. As of now, a lot of Shoelace styles, including fonts, receive default values from CSS custom properties. To align those fonts with your application’s styles, override the custom props in question. See the docs for info on which CSS variables Shoelace is using, or you can simply inspect the styles in any given element in DevTools.
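If you’d rather set those values from JavaScript instead of a stylesheet, custom properties can also be written onto the root element, where any shadow root beneath it can read them. Here’s a quick sketch; the --sl-font-sans token name is an assumption on my part, so check the Shoelace docs for the exact names:

// Override a Shoelace design token globally from JavaScript.
// The token name below is an assumption; verify it against the Shoelace docs.
document.documentElement.style.setProperty(
  "--sl-font-sans",
  "'Segoe UI', system-ui, sans-serif"
);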

Inheriting styles through the shadow root

Open the app.css file in the src directory of the StackBlitz project. In the :root section at the bottom, you should see a letter-spacing: normal; declaration. Since the letter-spacing property is inheritable, try setting a new value, like 2px. On save, all content, including the tab headers defined in the shadow root, will adjust accordingly.

Four horizontal tab headers with the first active in blue with placeholder content contained in a panel below. The text is slightly stretched with letter spacing.

Overwriting Shoelace CSS variables

The <sl-tab-group> component reads an --indicator-color CSS custom property for the active tab’s underline. We can override this with some basic CSS:

sl-tab-group {
  --indicator-color: green;
}

And just like that, we now have a green indicator!

Four horizontal tab headers with the first active with blue text and a green underline.

Querying parts

In the version of Shoelace I’m using right now (2.0.0-beta.83), any non-disabled tab has a pointer cursor. Let’s change that to a default cursor for the active (selected) tab. We already saw that the <sl-tab> element adds a part="base" attribute on the container for the tab header. Also, the currently selected tab receives an active attribute. Let’s use these facts to target the active tab, and change the cursor:

sl-tab[active]::part(base) {
  cursor: default;
}

And that’s that!

Customizing animations

For some icing on the metaphorical cake, let’s see how Shoelace allows us to customize animations. Shoelace uses the Web Animations API, and exposes a setDefaultAnimation API to control how different elements animate their various interactions. See the docs for specifics, but as an example, here’s how you might change Shoelace’s default dialog animation from expanding outward, and shrinking inward, to instead animate in from the top, and drop down while hiding.

import { setDefaultAnimation } from "@shoelace-style/shoelace/dist/utilities/animation-registry";

setDefaultAnimation("dialog.show", {
  keyframes: [
    { opacity: 0, transform: "translate3d(0px, -20px, 0px)" },
    { opacity: 1, transform: "translate3d(0px, 0px, 0px)" },
  ],
  options: { duration: 250, easing: "cubic-bezier(0.785, 0.135, 0.150, 0.860)" },
});
setDefaultAnimation("dialog.hide", {
  keyframes: [
    { opacity: 1, transform: "translate3d(0px, 0px, 0px)" },
    { opacity: 0, transform: "translate3d(0px, 20px, 0px)" },
  ],
  options: { duration: 200, easing: "cubic-bezier(0.785, 0.135, 0.150, 0.860)" },
});

That code is in the App.svelte file. Comment it out to see the original, default animation.

Wrapping up

Shoelace is an incredibly ambitious component library that’s built with Web Components. Since Web Components are framework-independent, they can be used in any project, with any framework. With new frameworks starting to come out with both amazing performance characteristics, and also ease of use, the ability to use quality user experience widgets which aren’t tied to any one framework has never been more compelling.



Building Interoperable Web Components That Even Work With React

Those of us who’ve been web developers more than a few years have probably written code using more than one JavaScript framework. With all the choices out there — React, Svelte, Vue, Angular, Solid — it’s all but inevitable. One of the more frustrating things we have to deal with when working across frameworks is re-creating all those low-level UI components: buttons, tabs, dropdowns, etc. What’s particularly frustrating is that we’ll typically have them defined in one framework, say React, but then need to rewrite them if we want to build something in Svelte. Or Vue. Or Solid. And so on.

Wouldn’t it be better if we could define these low-level UI components once, in a framework-agnostic way, and then re-use them between frameworks? Of course it would! And we can; web components are the way. This post will show you how.

As of now, the SSR story for web components is a bit lacking. Declarative shadow DOM (DSD) is how a web component is server-side rendered, but, as of this writing, it’s not integrated with your favorite application frameworks like Next, Remix or SvelteKit. If that’s a requirement for you, be sure to check the latest status of DSD. But otherwise, if SSR isn’t something you’re using, read on.

First, some context

Web Components are essentially HTML elements that you define yourself, like <yummy-pizza> or whatever, from the ground up. They’re covered all over here at CSS-Tricks (including an extensive series by Caleb Williams and one by John Rhea) but we’ll briefly walk through the process. Essentially, you define a JavaScript class, inherit it from HTMLElement, and then define whatever properties, attributes and styles the web component has and, of course, the markup it will ultimately render to your users.

Being able to define custom HTML elements that aren’t bound to any particular JavaScript framework is exciting. But this freedom is also a limitation. Existing independently of any JavaScript framework means you can’t really interact with those JavaScript frameworks. Think of a React component which fetches some data and then renders some other React component, passing along the data. This wouldn’t really work as a web component, since a web component doesn’t know how to render a React component.

Web components particularly excel as leaf components. Leaf components are the last thing to be rendered in a component tree. These are the components which receive some props, and render some UI. These are not the components sitting in the middle of your component tree, passing data along, setting context, etc. — just pure pieces of UI that will look the same, no matter which JavaScript framework is powering the rest of the app.

The web component we’re building

Rather than build something boring (and common), like a button, let’s build something a little bit different. In my last post we looked at using blurry image previews to prevent content reflow, and provide a decent UI for users while our images load. We looked at base64 encoding blurry, degraded versions of our images, and showing those in our UI while the real image loaded. We also looked at generating incredibly compact, blurry previews using a tool called Blurhash.

That post showed you how to generate those previews and use them in a React project. This post will show you how to use those previews from a web component so they can be used by any JavaScript framework.

But we need to walk before we can run, so we’ll walk through something trivial and silly first to see exactly how web components work.

Everything in this post will build vanilla web components without any tooling. That means the code will have a bit of boilerplate, but should be relatively easy to follow. Tools like Lit or Stencil are designed for building web components and can be used to remove much of this boilerplate. I urge you to check them out! But for this post, I’ll prefer a little more boilerplate in exchange for not having to introduce and teach another dependency.

A simple counter component

Let’s build the classic “Hello World” of JavaScript components: a counter. We’ll render a value, and a button that increments that value. Simple and boring, but it’ll let us look at the simplest possible web component.

In order to build a web component, the first step is to make a JavaScript class, which inherits from HTMLElement:

class Counter extends HTMLElement {}

The last step is to register the web component, but only if we haven’t registered it already:

if (!customElements.get("counter-wc")) {
  customElements.define("counter-wc", Counter);
}

And, of course, render it:

<counter-wc></counter-wc>

And everything in between is us making the web component do whatever we want it to. One common lifecycle method is connectedCallback, which fires when our web component is added to the DOM. We could use that method to render whatever content we’d like. Remember, this is a JS class inheriting from HTMLElement, which means our this value is the web component element itself, with all the normal DOM manipulation methods you already know and love.

At its simplest, we could do this:

class Counter extends HTMLElement {
  connectedCallback() {
    this.innerHTML = "<div style='color: green'>Hey</div>";
  }
}

if (!customElements.get("counter-wc")) {
  customElements.define("counter-wc", Counter);
}

…which will work just fine.

The word "hey" in green.

Adding real content

Let’s add some useful, interactive content. We need a <span> to hold the current number value and a <button> to increment the counter. For now, we’ll create this content in our constructor and append it when the web component is actually in the DOM:

constructor() {
  super();
  const container = document.createElement('div');

  this.valSpan = document.createElement('span');

  const increment = document.createElement('button');
  increment.innerText = 'Increment';
  increment.addEventListener('click', () => {
    this.#value = this.#currentValue + 1;
  });

  container.appendChild(this.valSpan);
  container.appendChild(document.createElement('br'));
  container.appendChild(increment);

  this.container = container;
}

connectedCallback() {
  this.appendChild(this.container);
  this.update();
}

If you’re really grossed out by the manual DOM creation, remember you can set innerHTML, or even create a template element once as a static property of your web component class, clone it, and insert the contents for new web component instances. There’s probably some other options I’m not thinking of, or you can always use a web component framework like Lit or Stencil. But for this post, we’ll continue to keep it simple.
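As a rough sketch of that template idea (not the approach we’ll use for the rest of this post, just to show the shape of it):

class Counter extends HTMLElement {
  // Build the markup once; every instance shares this one template
  static template = Object.assign(document.createElement("template"), {
    innerHTML: "<div><span></span><br /><button>Increment</button></div>",
  });

  constructor() {
    super();
    // Each instance clones the shared template content for itself
    this.container = Counter.template.content.firstElementChild.cloneNode(true);
    this.valSpan = this.container.querySelector("span");
    // You'd still wire up the button's click listener here, as before
  }
}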

Moving on, we need a settable JavaScript class property for our value:

#currentValue = 0;

set #value(val) {
  this.#currentValue = val;
  this.update();
}

It’s just a standard class property with a setter, along with a second property to hold the value. One fun twist is that I’m using the private JavaScript class property syntax for these values. That means nobody outside our web component can ever touch these values. This is standard JavaScript that’s supported in all modern browsers, so don’t be afraid to use it.

Or feel free to call it _value if you prefer. And, lastly, our update method:

update() {
  this.valSpan.innerText = this.#currentValue;
}

It works!

The counter web component.

Obviously this is not code you’d want to maintain at scale. Here’s a full working example if you’d like a closer look. As I’ve said, tools like Lit and Stencil are designed to make this simpler.

Adding some more functionality

This post is not a deep dive into web components. We won’t cover all the APIs and lifecycles; we won’t even cover shadow roots or slots. There’s endless content on those topics. My goal here is to provide a decent enough introduction to spark some interest, along with some useful guidance on actually using web components with the popular JavaScript frameworks you already know and love.

To that end, let’s enhance our counter web component a bit. Let’s have it accept a color attribute, to control the color of the value that’s displayed. And let’s also have it accept an increment property, so consumers of this web component can have it increment by 2, 3, 4 at a time. And to drive these state changes, let’s use our new counter in a Svelte sandbox — we’ll get to React in a bit.

We’ll start with the same web component as before and add a color attribute. To configure our web component to accept and respond to an attribute, we add a static observedAttributes property that returns the attributes that our web component listens for.

static observedAttributes = ["color"];

With that in place, we can add an attributeChangedCallback lifecycle method, which will run whenever any of the attributes listed in observedAttributes are set or updated.

attributeChangedCallback(name, oldValue, newValue) {
  if (name === "color") {
    this.update();
  }
}

Now we update our update method to actually use it:

update() {
  this.valSpan.innerText = this.#currentValue;
  this.valSpan.style.color = this.getAttribute("color") || "black";
}

Lastly, let’s add our increment property:

increment = 1;

Simple and humble.
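For that property to actually do anything, the button’s click handler from our constructor would presumably add this.increment rather than a hard-coded 1, something like this:

increment.addEventListener('click', () => {
  // Add the configurable increment instead of always adding 1
  this.#value = this.#currentValue + this.increment;
});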

Using the counter component in Svelte

Let’s use what we just made. We’ll go into our Svelte app component and add something like this:

<script>
  let color = "red";
</script>

<style>
  main {
    text-align: center;
  }
</style>

<main>
  <select bind:value={color}>
    <option value="red">Red</option>
    <option value="green">Green</option>
    <option value="blue">Blue</option>
  </select>

  <counter-wc color={color}></counter-wc>
</main>

And it works! Our counter renders, increments, and the dropdown updates the color. As you can see, we render the color attribute in our Svelte template and, when the value changes, Svelte handles the legwork of calling setAttribute on our underlying web component instance. There’s nothing special here: this is the same thing it already does for the attributes of any HTML element.

Things get a little bit interesting with the increment prop. This is not an attribute on our web component; it’s a prop on the web component’s class. That means it needs to be set on the web component’s instance. Bear with me, as things will wind up much simpler in a bit.

First, we’ll add some variables to our Svelte component:

let increment = 1;
let wcInstance;

Our powerhouse of a counter component will let you increment by 1, or by 2:

<button on:click={() => increment = 1}>Increment 1</button>
<button on:click={() => increment = 2}>Increment 2</button>

But in order to set the increment property, we need to get the actual instance of our web component. This is the same thing we always do anytime we add a ref with React. With Svelte, it’s a simple bind:this directive:

<counter-wc bind:this={wcInstance} color={color}></counter-wc>

Now, in our Svelte template, we listen for changes to our component’s increment variable and set the underlying web component property.

$: {
  if (wcInstance) {
    wcInstance.increment = increment;
  }
}

You can test it out over at this live demo.

We obviously don’t want to do this for every web component or prop we need to manage. Wouldn’t it be nice if we could just set increment right on our web component, in markup, like we normally do for component props, and have it, you know, just work? In other words, it’d be nice if we could delete all usages of wcInstance and use this simpler code instead:

<counter-wc increment={increment} color={color}></counter-wc>

It turns out we can. This code works; Svelte handles all that legwork for us. Check it out in this demo. This is standard behavior for pretty much all JavaScript frameworks.

So why did I show you the manual way of setting the web component’s prop? Two reasons: it’s useful to understand how these things work and, a moment ago, I said this works for “pretty much” all JavaScript frameworks. But there’s one framework which, maddeningly, does not support web component prop setting like we just saw.

React is a different beast

React. The most popular JavaScript framework on the planet does not support basic interop with web components. This is a well-known problem that’s unique to React. Interestingly, this is actually fixed in React’s experimental branch, but for some reason wasn’t merged into version 18. That said, we can still track the progress of it. And you can try this yourself with a live demo.

The solution, of course, is to use a ref, grab the web component instance, and manually set increment when that value changes. It looks like this:

import React, { useState, useRef, useEffect } from 'react';
import './counter-wc';

export default function App() {
  const [increment, setIncrement] = useState(1);
  const [color, setColor] = useState('red');
  const wcRef = useRef(null);

  useEffect(() => {
    wcRef.current.increment = increment;
  }, [increment]);

  return (
    <div>
      <div className="increment-container">
        <button onClick={() => setIncrement(1)}>Increment by 1</button>
        <button onClick={() => setIncrement(2)}>Increment by 2</button>
      </div>

      <select value={color} onChange={(e) => setColor(e.target.value)}>
        <option value="red">Red</option>
        <option value="green">Green</option>
        <option value="blue">Blue</option>
      </select>

      <counter-wc ref={wcRef} increment={increment} color={color}></counter-wc>
    </div>
  );
}

As we discussed, coding this up manually for every web component property is simply not scalable. But all is not lost because we have a couple of options.

Option 1: Use attributes everywhere

We have attributes. If you clicked the React demo above, the increment prop wasn’t working, but the color correctly changed. Can’t we code everything with attributes? Sadly, no. Attribute values can only be strings. That’s good enough here, and we’d be able to get somewhat far with this approach. Numbers like increment can be converted to and from strings. We could even JSON stringify/parse objects. But eventually we’ll need to pass a function into a web component, and at that point we’d be out of options.
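For what it’s worth, here’s a sketch of what that string coercion could look like if we did lean on an attribute for the increment value (just for illustration; it’s not the approach we’ll land on):

class Counter extends HTMLElement {
  static observedAttributes = ["color", "increment"];

  increment = 1;

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === "increment") {
      // Attribute values always arrive as strings, so coerce back to a number
      this.increment = Number(newValue) || 1;
    }
  }
}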

Option 2: Wrap it

There’s an old saying that you can solve any problem in computer science by adding a level of indirection (except the problem of too many levels of indirection). The code to set these props is pretty predictable and simple. What if we hide it in a library? The smart folks behind Lit have one solution. This library creates a new React component for you after you give it a web component, and list out the properties it needs. While clever, I’m not a fan of this approach.

Rather than have a one-to-one mapping of web components to manually-created React components, what I prefer is just one React component that we pass our web component tag name to (counter-wc in our case) — along with all the attributes and properties — and for this component to render our web component, add the ref, then figure out what is a prop and what is an attribute. That’s the ideal solution in my opinion. I don’t know of a library that does this, but it should be straightforward to create. Let’s give it a shot!

This is the usage we’re looking for:

<WcWrapper wcTag="counter-wc" increment={increment} color={color} />

wcTag is the web component tag name; the rest are the properties and attributes we want passed along.

Here’s what my implementation looks like:

import React, { createElement, useRef, useLayoutEffect, memo } from 'react';

const _WcWrapper = (props) => {
  const { wcTag, children, ...restProps } = props;
  const wcRef = useRef(null);

  useLayoutEffect(() => {
    const wc = wcRef.current;

    for (const [key, value] of Object.entries(restProps)) {
      if (key in wc) {
        if (wc[key] !== value) {
          wc[key] = value;
        }
      } else {
        if (wc.getAttribute(key) !== value) {
          wc.setAttribute(key, value);
        }
      }
    }
  });

  return createElement(wcTag, { ref: wcRef });
};

export const WcWrapper = memo(_WcWrapper);

The most interesting line is at the end:

return createElement(wcTag, { ref: wcRef });

This is how we create an element in React with a dynamic name. In fact, this is what React normally transpiles JSX into. All our divs are converted to createElement("div") calls. We don’t normally need to call this API directly but it’s there when we need it.
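To make that concrete, these two lines produce the same element (with the classic JSX transform); the variable names are just for illustration:

const withJsx = <div className="tab">Hello</div>;
const withCreateElement = createElement("div", { className: "tab" }, "Hello");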

Beyond that, we run a layout effect and loop through every prop that we’ve passed to our component. For each one, we check whether it’s a property using an in check, which inspects the web component instance object as well as its prototype chain, so it catches any getters/setters that wind up on the class prototype. If no such property exists, it’s assumed to be an attribute. In either case, we only set it if the value has actually changed.

If you’re wondering why we use useLayoutEffect instead of useEffect, it’s because we want to immediately run these updates before our content is rendered. Also, note that we have no dependency array to our useLayoutEffect; this means we want to run this update on every render. This can be risky since React tends to re-render a lot. I ameliorate this by wrapping the whole thing in React.memo. This is essentially the modern version of React.PureComponent, which means the component will only re-render if any of its actual props have changed — and it checks whether that’s happened via a simple equality check.

The only risk here is that if you’re passing an object prop that you’re mutating directly without re-assigning, then you won’t see the updates. But this is highly discouraged, especially in the React community, so I wouldn’t worry about it.
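In other words, assuming a hypothetical settings object held in React state, the difference looks like this:

// Mutating in place keeps the same object reference, so the memoized wrapper's
// shallow prop check sees "no change" and skips the update:
settings.theme = "dark";

// Re-assigning with a new object gives memo a new reference to compare:
setSettings({ ...settings, theme: "dark" });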

Before moving on, I’d like to call out one last thing. You might not be happy with how the usage looks. Again, this component is used like this:

<WcWrapper wcTag="counter-wc" increment={increment} color={color} />

Specifically, you might not like passing the web component tag name to the <WcWrapper> component and prefer instead the @lit-labs/react package above, which creates a new individual React component for each web component. That’s totally fair and I’d encourage you to use whatever you’re most comfortable with. But for me, one advantage with this approach is that it’s easy to delete. If by some miracle React merges proper web component handling from their experimental branch into main tomorrow, you’d be able to change the above code from this:

<WcWrapper wcTag="counter-wc" increment={increment} color={color} />

…to this:

<counter-wc ref={wcRef} increment={increment} color={color} />

You could probably even write a single codemod to do that everywhere, and then delete <WcWrapper> altogether. Actually, scratch that: a global search and replace with a RegEx would probably work.

The implementation

I know, it seems like it took a journey to get here. If you recall, our original goal was to take the image preview code we looked at in my last post, and move it to a web component so it can be used in any JavaScript framework. React’s lack of proper interop added a lot of detail to the mix. But now that we have a decent handle on how to create a web component, and use it, the implementation will almost be anti-climactic.

I’ll drop the entire web component here and call out some of the interesting bits. If you’d like to see it in action, here’s a working demo. It’ll switch between my three favorite books on my three favorite programming languages. The URL for each book will be unique each time, so you can see the preview, though you’ll likely want to throttle things in your DevTools Network tab to really see things taking place.

class BookCover extends HTMLElement {
  static observedAttributes = ['url'];

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'url') {
      this.createMainImage(newValue);
    }
  }

  set preview(val) {
    this.previewEl = this.createPreview(val);
    this.render();
  }

  createPreview(val) {
    if (typeof val === 'string') {
      return base64Preview(val);
    } else {
      return blurHashPreview(val);
    }
  }

  createMainImage(url) {
    this.loaded = false;
    const img = document.createElement('img');
    img.alt = 'Book cover';
    img.addEventListener('load', () => {
      if (img === this.imageEl) {
        this.loaded = true;
        this.render();
      }
    });
    img.src = url;
    this.imageEl = img;
  }

  connectedCallback() {
    this.render();
  }

  render() {
    const elementMaybe = this.loaded ? this.imageEl : this.previewEl;
    syncSingleChild(this, elementMaybe);
  }
}

First, we register the attribute we’re interested in and react when it changes:

static observedAttributes = ['url'];

attributeChangedCallback(name, oldValue, newValue) {
  if (name === 'url') {
    this.createMainImage(newValue);
  }
}

This causes our image component to be created, which will show only when loaded:

createMainImage(url) {
  this.loaded = false;
  const img = document.createElement('img');
  img.alt = 'Book cover';
  img.addEventListener('load', () => {
    if (img === this.imageEl) {
      this.loaded = true;
      this.render();
    }
  });
  img.src = url;
  this.imageEl = img;
}

Next we have our preview property, which can either be our base64 preview string, or our blurhash packet:

set preview(val) {
  this.previewEl = this.createPreview(val);
  this.render();
}

createPreview(val) {
  if (typeof val === 'string') {
    return base64Preview(val);
  } else {
    return blurHashPreview(val);
  }
}

This defers to whichever helper function we need:

function base64Preview(val) {
  const img = document.createElement('img');
  img.src = val;
  return img;
}

function blurHashPreview(preview) {
  const canvasEl = document.createElement('canvas');
  const { w: width, h: height } = preview;

  canvasEl.width = width;
  canvasEl.height = height;

  const pixels = decode(preview.blurhash, width, height);
  const ctx = canvasEl.getContext('2d');
  const imageData = ctx.createImageData(width, height);
  imageData.data.set(pixels);
  ctx.putImageData(imageData, 0, 0);

  return canvasEl;
}

And, lastly, our render method:

connectedCallback() {
  this.render();
}

render() {
  const elementMaybe = this.loaded ? this.imageEl : this.previewEl;
  syncSingleChild(this, elementMaybe);
}

And a few helper methods to tie everything together:

export function syncSingleChild(container, child) {
  const currentChild = container.firstElementChild;
  if (currentChild !== child) {
    clearContainer(container);
    if (child) {
      container.appendChild(child);
    }
  }
}

export function clearContainer(el) {
  let child;

  while ((child = el.firstElementChild)) {
    el.removeChild(child);
  }
}

It’s a little bit more boilerplate than we’d need if we built this in a framework, but the upside is that we can re-use this in any framework we’d like — although React will need a wrapper for now, as we discussed.

Odds and ends

I’ve already mentioned Lit’s React wrapper. But if you find yourself using Stencil, it actually supports a separate output pipeline just for React. And the good folks at Microsoft have also created something similar to Lit’s wrapper, attached to the Fast web component library.

As I mentioned, all frameworks not named React will handle setting web component properties for you. Just note that some have some special flavors of syntax. For example, with Solid.js, <your-wc value={12}> always assumes that value is a property, which you can override with an attr prefix, like <your-wc attr:value={12}>.

Wrapping up

Web components are an interesting, often underused part of the web development landscape. They can help reduce your dependence on any single JavaScript framework by managing your UI, or “leaf” components. While creating these as web components — as opposed to Svelte or React components — won’t be as ergonomic, the upside is that they’ll be widely reusable.



Inline Image Previews with Sharp, BlurHash, and Lambda Functions

Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.

I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.

Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.

So you mean progressive JPEGs?

You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.

This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.

You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.

Let’s look at two alternatives for this.

The libraries we’ll be using

Before we start, I’d like to call out the versions of the libraries I’ll be using for this post:

Making our own previews

Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
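As a quick illustration, an inline image is just a data URI in the src; the Base64 payload below is a truncated placeholder, not a real image:

const img = document.createElement("img");
// A data URI works anywhere a normal image URL would
img.src = "data:image/jpeg;base64,/9j/4AAQSkZJRg...";
document.body.appendChild(img);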

But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.

Let’s see how.

Generating our preview

For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.

Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then creates a blurry preview of the image:

function resizeImage(src, maxWidth, quality) {
  return new Promise<ResizeImageResult>(res => {
    Jimp.read(src, async function (err, image) {
      if (image.bitmap.width > maxWidth) {
        image.resize(maxWidth, Jimp.AUTO);
      }
      image.quality(quality);

      const previewImage = image.clone();
      previewImage.quality(25).blur(8);
      const preview = await previewImage.getBase64Async(previewImage.getMIME());

      res({ STATUS: "success", image, preview });
    });
  });
}

For this post, I’ll be using this image provided by Flickr Commons:

Photo of the Big Boy statue holding a burger.

And here’s what the preview looks like:

Blurry version of the Big Boy statue.

If you’d like to take a closer look, here’s the same preview in a CodeSandbox.

Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.

Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:

const Landmark = ({ url, preview = "" }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);

  useEffect(() => {
    // make sure the image src is added after the onload handler
    if (imgRef.current) {
      imgRef.current.src = url;
    }
  }, [url, imgRef, preview]);

  return (
    <>
      <Preview loaded={loaded} preview={preview} />
      <img
        ref={imgRef}
        onLoad={() => setTimeout(() => setLoaded(true), 3000)}
        style={{ display: loaded ? "block" : "none" }}
      />
    </>
  );
};

const Preview: FunctionComponent<LandmarkPreviewProps> = ({ preview, loaded }) => {
  if (loaded) {
    return null;
  } else if (typeof preview === "string") {
    return <img key="landmark-preview" alt="Landmark preview" src={preview} style={{ display: "block" }} />;
  } else {
    return <PreviewCanvas preview={preview} loaded={loaded} />;
  }
};

Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.

Note that we set the image component’s src after the onLoad handler to ensure it fires. We show the preview, and when the real image loads, we swap it in.

Improving things with BlurHash

The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.

BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:

First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.

Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.

The README also states how Signal and Mastodon use BlurHash.

Let’s see it in action.

Generating blurhash previews

For this, we’ll need to use the Sharp library.


Note

To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.

Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.

To run this code in an AWS Lambda, you’ll need to install the library like this:

"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"

And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.


This code will look at the size of the image and create a blurhash preview:

import { encode, isBlurhashValid } from "blurhash";
const sharp = require("sharp");

export async function getBlurhashPreview(src) {
  const image = sharp(src);
  const dimensions = await image.metadata();

  return new Promise(res => {
    const { width, height } = dimensions;

    image
      .raw()
      .ensureAlpha()
      .toBuffer((err, buffer) => {
        const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);
        if (isBlurhashValid(blurhash)) {
          return res({ blurhash, w: width, h: height });
        } else {
          return res(null);
        }
      });
  });
}

Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for RGB and Alpha.

Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.

Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.

The real work happens here:

const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);

We’re calling blurhash’s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY, which, from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.

Let’s see what this produces for that same image:

{
  "blurhash" : "UAA]{ox^0eRiO_bJjdn~9#M_=|oLIUnzxtNG",
  "w" : 276,
  "h" : 400
}

That’s incredibly small! The tradeoff is that using this preview is a bit more involved.

Basically, we need to call blurhash’s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before and why we were rendering it if the type of our preview was not a string: our blurhash previews use an entire object — containing not only the preview string but also the image dimensions.

Let’s look at our PreviewCanvas component:

const PreviewCanvas: FunctionComponent<CanvasPreviewProps> = ({ preview }) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useLayoutEffect(() => {
    const pixels = decode(preview.blurhash, preview.w, preview.h);
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};

Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.

Let’s see what the image previews look like:

In a sense, it’s less detailed than our previous previews. But I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.

Test and use what works best for you.

Wrapping up

There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.

A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.

Happy coding!



Syntax Highlighting (and More!) With Prism on a Static Site

So, you’ve decided to build a blog with Next.js. Like any dev blogger, you’d like to have code snippets in your posts that are formatted nicely with syntax highlighting. Perhaps you also want to display line numbers in the snippets, and maybe even have the ability to call out certain lines of code.

This post will show you how to get that set up, as well as some tips and tricks for getting these other features working. Some of it is trickier than you might expect.

Prerequisites

We’re using the Next.js blog starter as the base for our project, but the same principles should apply to other frameworks. That repo has clear (and simple) getting started instructions. Scaffold the blog, and let’s go!

Another thing we’re using here is Prism.js, a popular syntax highlighting library that’s even used right here on CSS-Tricks. The Next.js blog starter uses Remark to convert Markdown into markup, so we’ll use the remark-prism plugin for formatting our code snippets.

Basic Prism.js integration

Let’s start by integrating Prism.js into our Next.js starter. Since we already know we’re using the remark-prism plugin, the first thing to do is install it with your favorite package manager:

npm i remark-prism

Now go into the markdownToHtml file, in the /lib folder, and switch on remark-prism:

import remarkPrism from "remark-prism";

// later ...

.use(remarkPrism, { plugins: ["line-numbers"] })

Depending on which version of remark-html you’re using, you might also need to change its usage to .use(html, { sanitize: false }).

The whole module should now look like this:

import { remark } from "remark";
import html from "remark-html";
import remarkPrism from "remark-prism";

export default async function markdownToHtml(markdown) {
  const result = await remark()
    .use(html, { sanitize: false })
    .use(remarkPrism, { plugins: ["line-numbers"] })
    .process(markdown);

  return result.toString();
}

Adding Prism.js styles and theme

Now let’s import the CSS that Prism.js needs to style the code snippets. In the pages/_app.js file, import the main Prism.js stylesheet, and the stylesheet for whichever theme you’d like to use. I’m using Prism.js’s “Tomorrow Night” theme, so my imports look like this:

import "prismjs/themes/prism-tomorrow.css";
import "prismjs/plugins/line-numbers/prism-line-numbers.css";
import "../styles/prism-overrides.css";

Notice I’ve also started a prism-overrides.css stylesheet where we can tweak some defaults. This will become useful later. For now, it can remain empty.

And with that, we now have some basic styles. The following code in Markdown:

```js
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```

…should format nicely:

Adding line numbers

You might have noticed that the code snippet we generated does not display line numbers even though we imported the plugin that supports it when we imported remark-prism. The solution is hidden in plain sight in the remark-prism README:

Don’t forget to include the appropriate css in your stylesheets.

In other words, we need to force a .line-numbers CSS class onto the generated <pre> tag, which we can do like this:
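remark-prism supports a bracketed syntax on the code fence for adding attributes (we’ll lean on it again later in this post), so something like this should do it:

```js[class="line-numbers"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```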

And with that, we now have line numbers!

Note that, based on the version of Prism.js I have and the “Tomorrow Night” theme I chose, I needed to add this to the prism-overrides.css file we started above:

.line-numbers span.line-numbers-rows {
  margin-top: -1px;
}

You may not need that, but there you have it. We have line numbers!

Highlighting lines

Our next feature will be a bit more work. This is where we want the ability to highlight, or call out certain lines of code in the snippet.

There’s a Prism.js line-highlight plugin; unfortunately, it is not integrated with remark-prism. The plugin works by analyzing the formatted code’s position in the DOM, and manually highlights lines based on that information. That’s impossible with the remark-prism plugin since there is no DOM at the time the plugin runs. This is, after all, static site generation. Next.js is running our Markdown through a build step and generating HTML to render the blog. All of this Prism.js code runs during this static site generation, when there is no DOM.

But fear not! There’s a fun workaround that fits right in with CSS-Tricks: we can use plain CSS (and a dash of JavaScript) to highlight lines of code.

Let me be clear that this is a non-trivial amount of work. If you don’t need line highlighting, then feel free to skip to the next section. But if nothing else, it can be a fun demonstration of what’s possible.

Our base CSS

Let’s start by adding the following CSS to our prism-overrides.css stylesheet:

:root {
  --highlight-background: rgb(0 0 0 / 0);
  --highlight-width: 0;
}

.line-numbers span.line-numbers-rows > span {
  position: relative;
}

.line-numbers span.line-numbers-rows > span::after {
  content: " ";
  background: var(--highlight-background);
  width: var(--highlight-width);
  position: absolute;
  top: 0;
}

We’re defining some CSS custom properties up front: a background color and a highlight width. We’re setting them to empty values for now. Later, though, we’ll set meaningful values in JavaScript for the lines we want highlighted.

We’re then setting the line number <span> to position: relative, so that we can add a ::after pseudo element with absolute positioning. It’s this pseudo element that we’ll use to highlight our lines.

Declaring the highlighted lines

Now, let’s manually add a data attribute to the <pre> tag that’s generated, then read that in code, and use JavaScript to tweak the styles above to highlight specific lines of code. We can do this the same way that we added line numbers before:
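Using the same bracketed fence syntax as before (the full version of this snippet appears a bit later in the post), that looks something like this:

```js[class="line-numbers"][data-line="3,8-10"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```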

This will cause our <pre> element to be rendered with a data-line="3,8-10" attribute, where line 3 and lines 8-10 are highlighted in the code snippet. We can comma-separate line numbers, or provide ranges.

Let’s look at how we can parse that in JavaScript, and get highlighting working.

Reading the highlighted lines

Head over to components/post-body.tsx. If this file is JavaScript for you, feel free to either convert it to TypeScript (.tsx), or just ignore all my typings.

First, we’ll need some imports:

import { useEffect, useRef } from "react";

And we need to add a ref to this component:

const rootRef = useRef<HTMLDivElement>(null);

Then, we apply it to the root element:

<div ref={rootRef} className="max-w-2xl mx-auto">

The next piece of code is a little long, but it’s not doing anything crazy. I’ll show it, then walk through it.

useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    const highlightRanges = pre.dataset.line;
    const lineNumbersContainer = pre.querySelector(".line-numbers-rows");

    if (!highlightRanges || !lineNumbersContainer) {
      continue;
    }

    const runHighlight = () =>
      highlightCode(pre, highlightRanges, lineNumbersContainer);
    runHighlight();

    const ro = new ResizeObserver(runHighlight);
    ro.observe(pre);

    cleanup.push(() => ro.disconnect());
  }

  return () => cleanup.forEach(f => f());
}, []);

We’re running an effect once, when the content has all been rendered to the screen. We’re using querySelectorAll to grab all the <pre> elements under this root element; in other words, whatever blog post the user is viewing.

For each one, we make sure there’s a <code> element under it, and we check for both the line numbers container and the data-line attribute. That’s what dataset.line checks. See the docs for more info.

If we make it past the second continue, then highlightRanges is the set of highlights we declared earlier which, in our case, is "3,8-10", and lineNumbersContainer is the element with the .line-numbers-rows CSS class.

Lastly, we declare a runHighlight function that calls a highlightCode function that I’m about to show you. Then, we set up a ResizeObserver to run that same function anytime our blog post changes size, i.e., if the user resizes the browser window.

The highlightCode function

Finally, let’s see our highlightCode function:

function highlightCode(pre, highlightRanges, lineNumberRowsContainer) {
  const ranges = highlightRanges.split(",").filter(val => val);
  const preWidth = pre.scrollWidth;

  for (const range of ranges) {
    let [start, end] = range.split("-");
    if (!start || !end) {
      start = range;
      end = range;
    }

    for (let i = +start; i <= +end; i++) {
      const lineNumberSpan: HTMLSpanElement = lineNumberRowsContainer.querySelector(
        `span:nth-child(${i})`
      );
      lineNumberSpan.style.setProperty(
        "--highlight-background",
        "rgb(100 100 100 / 0.5)"
      );
      lineNumberSpan.style.setProperty("--highlight-width", `${preWidth}px`);
    }
  }
}

We get each range and read the width of the <pre> element. Then we loop through each range, find the relevant line number <span>, and set the CSS custom property values for them. We set whatever highlight color we want, and we set the width to the total scrollWidth value of the <pre> element. I kept it simple and used "rgb(100 100 100 / 0.5)" but feel free to use whatever color you think looks best for your blog.

Here’s what it looks like:

Syntax highlighting for a block of Markdown code.

Line highlighting without line numbers

You may have noticed that all of this so far depends on line numbers being present. But what if we want to highlight lines, but without line numbers?

One way to implement this would be to keep everything the same and add a new option to simply hide those line numbers with CSS. First, we’ll add a new CSS class, .hide-numbers:

```js[class="line-numbers"][class="hide-numbers"][data-line="3,8-10"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```

Now let’s add CSS rules to hide the line numbers when the .hide-numbers class is applied:

.line-numbers.hide-numbers {
  padding: 1em !important;
}
.hide-numbers .line-numbers-rows {
  width: 0;
}
.hide-numbers .line-numbers-rows > span::before {
  content: " ";
}
.hide-numbers .line-numbers-rows > span {
  padding-left: 2.8em;
}

The first rule undoes the shift to the right from our base code in order to make room for the line numbers. By default, the padding of the Prism.js theme I chose is 1em. The line-numbers plugin increases it to 3.8em, then inserts the line numbers with absolute positioning. What we did reverts the padding back to the 1em default.

The second rule takes the container of line numbers, and squishes it to have no width. The third rule erases all of the line numbers themselves (they’re generated with ::before pseudo elements).

The last rule simply shifts the now-empty line number <span> elements back to where they would have been so that the highlighting can be positioned how we want it. Again, for my theme, the line numbers normally adds 3.8em worth of left padding, which we reverted back to the default 1em. These new styles add the other 2.8em so things are back to where they should be, but with the line numbers hidden. If you’re using different plugins, you might need slightly different values.

Here’s what the result looks like:

Syntax highlighting for a block of Markdown code.

Copy-to-Clipboard feature

Before we wrap up, let’s add one finishing touch: a button allowing our dear reader to copy the code from our snippet. It’s a nice little enhancement that spares people from having to manually select and copy the code snippets.

It’s actually somewhat straightforward. There’s a navigator.clipboard.writeText API for this. We pass that method the text we’d like to copy, and that’s that. We can inject a button next to every one of our <code> elements to send the code’s text to that API call to copy it. We’re already messing with those <code> elements in order to highlight lines, so let’s integrate our copy-to-clipboard button in the same place.

First, from the useEffect code above, let’s add one line:

useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    pre.appendChild(createCopyButton(code));

Note the last line. We’re going to append our button right into the DOM underneath our <pre> element, which is already position: relative, allowing us to position the button more easily.

Let’s see what the createCopyButton function looks like:

function createCopyButton(codeEl) {
  const button = document.createElement("button");
  button.classList.add("prism-copy-button");
  button.textContent = "Copy";

  button.addEventListener("click", () => {
    if (button.textContent === "Copied") {
      return;
    }
    navigator.clipboard.writeText(codeEl.textContent || "");
    button.textContent = "Copied";
    button.disabled = true;
    setTimeout(() => {
      button.textContent = "Copy";
      button.disabled = false;
    }, 3000);
  });

  return button;
}

Lots of code, but it’s mostly boilerplate. We create our button then give it a CSS class and some text. And then, of course, we create a click handler to do the copying. After the copy is done, we change the button’s text and disable it for a few seconds to help give the user feedback that it worked.

The real work is on this line:

navigator.clipboard.writeText(codeEl.textContent || "");

We’re passing codeEl.textContent rather than innerHTML since we want only the actual text that’s rendered, rather than all the markup Prism.js adds in order to format our code nicely.

Now let’s see how we might style this button. I’m no designer, but this is what I came up with:

.prism-copy-button {
  position: absolute;
  top: 5px;
  right: 5px;
  width: 10ch;
  background-color: rgb(100 100 100 / 0.5);
  border-width: 0;
  color: rgb(0, 0, 0);
  cursor: pointer;
}

.prism-copy-button[disabled] {
  cursor: default;
}

Which looks like this:

Syntax highlighting for a block of Markdown code.

And it works! It copies our code, and even preserves the formatting (i.e. new lines and indentation)!

Wrapping up

I hope this has been useful to you. Prism.js is a wonderful library, but it wasn’t originally written for static sites. This post walked you through some tips and tricks for bridging that gap, and getting it to work well with a Next.js site.


Syntax Highlighting (and More!) With Prism on a Static Site originally published on CSS-Tricks. You should get the newsletter.

Adding CDN Caching to a Vite Build

Content delivery networks, or CDNs, allow you to improve the delivery of your website’s static resources, most notably, with CDN caching. They do this by serving your content from edge locations, which are located all over the world. When a user browses to your site, and your site requests resources from the CDN, the CDN will route that request to the nearest edge location. If that location has the requested resources, either from that user’s prior visit, or from another person, then the content will be served from cache. If not, the CDN will request the content from your underlying domain, cache it, and serve it.

There are countless CDNs out there, but for this post we’ll be using AWS CloudFront. We’ll look at setting up a CloudFront distribution to serve all our site’s assets: JavaScript files, CSS files, font files, etc. Then we’ll see about integrating it into a Vite build. If you’d like to learn more about Vite, I have an introduction here.

Setting up a CloudFront CDN distribution

Let’s jump right in and set up our CloudFront CDN distribution.

For any serious project, you should be setting up your serverless infrastructure with code, using something like the Serverless Framework, or AWS’s CDK. But to keep things simple, here, we’ll set up our CDN using the AWS console.

Head on over to the CloudFront homepage. At the top right, you should see an orange button to create a new distribution.

CloudFront CDN Distributions screen.

The creation screen has a ton of options, but for the most part the default selections will be fine. First and foremost, add the domain where your resources are located.

CloudFront CDN distribution creation screen.

Next, scroll down and find the Response headers policy dropdown, and choose “CORS-With-Preflight.”

CloudFront response headers settings.

Lastly, click the Create Distribution button at the bottom, and hopefully you’ll see your new distribution.

CloudFront CDN distribution overview screen.

Integrating the CDN with Vite

It’s one thing for our CDN to be set up and ready to serve our files. But it’s another for our site to actually know how to request them from our CDN. I’ll walk through integrating with Vite, but other build systems, like webpack or Rollup, will be similar.

When Vite builds our site, it maintains a “graph” of all the JavaScript and CSS files that various parts of our site import, and it injects the appropriate <script> tags, <link> tags, or import() statements to load what’s needed. What we need to do is tell Vite to request these assets from our CDN when in production. Let’s see how.

Open up your vite.config.ts file. First, we’ll need to know if we’re on the live site (production) or in development (dev).

const isProduction = process.env.NODE_ENV === "production"; 

This works since Vite sets this environment variable when we run vite build, which is what we do for production, as opposed to dev mode with hot module reloading.

Next we tell Vite to draw our assets from our CDN like so, setting the base property of our config object:

export default defineConfig({
  base: isProduction ? process.env.REACT_CDN : "",
  // ...the rest of your config
});

Be sure to set your REACT_CDN environment variable to your CDN’s location, which in this case, will be our CloudFront distribution’s location. Mine looks something (but not exactly) like this:

https://distributiondomainname.cloudfront.net
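As an aside, if you’d rather not key off NODE_ENV, defineConfig also accepts a function that receives the current command, which is "build" for production builds and "serve" for the dev server. Here’s a minimal sketch of the same idea, still assuming the REACT_CDN variable from above:

import { defineConfig } from "vite";

export default defineConfig(({ command }) => ({
  // command is "build" for production builds and "serve" for the dev server
  base: command === "build" ? process.env.REACT_CDN : "",
}));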

Watch your VitePWA settings!

As one final piece of cleanup, if you happen to be using the VitePWA plugin, be sure to reset your base property like this:

VitePWA({
  base: "/",
  // ...the rest of your VitePWA options
})

Otherwise, your web.manifest file will have invalid settings and cause errors.

Let’s see the CDN work

Once you’re all set up, browse to your site, and inspect any of the network requests for your script or CSS files. For starters, the protocol should be h2.

Showing the assets served via CDN caching in DevTools. Each file name includes a unique random string of letters and numbers.

From there, you can peek into the response headers of any one of those files, and you should see some CloudFront data in there:

Screenshot of a response header.

Cache busting

It’s hard to talk about CDNs without mentioning cache busting. CDNs like CloudFront have functionality to manually “eject” items from cache. But for Vite-built assets, we get this “for free” since Vite adds fingerprinting, or hash codes, to the filenames of the assets it produces.

So Vite might turn a home.js file into home-abc123.js during a build, but then if you change that file and rebuild, it might become home-xyz987.js. That’s good, as it will “break the cache,” and the newly built file will not be cached, so the CDN will have to turn to our host domain for the actual content.

CDN caching for other static assets

JavaScript, CSS, and font files aren’t the only kinds of assets that can benefit from CDN caching. If you have an S3 bucket you’re serving images out of, consider setting up a CloudFront distribution for it as well. There are options specifically for S3 that make it a snap to create. Not only will you get the same edge caching, but also HTTP/2 responses, which S3 does not provide on its own.

Advanced CDN practices

Integrating a CDN here was reasonably straightforward, but we’re only enjoying a fraction of the potential benefits. Right now, users will browse to our app, our server will serve our root HTML file, and then the user’s browser will connect to our CDN to start pulling down all our static assets.

Going further, we would want to serve our entire site from a CDN. That way, it can communicate with our web server as needed for non-static and non-cached assets.

Conclusion

CDNs are a great way to improve the performance of your site. They provide edge caching and HTTP/2 out of the box. Not only that, but they’re reasonably easy to set up. Now you have a new tool in your belt to both set up a CDN and integrate it with Vite.


Adding CDN Caching to a Vite Build originally published on CSS-Tricks. You should get the newsletter.

Demystifying TypeScript Discriminated Unions

TypeScript is a wonderful tool for writing JavaScript that scales. It’s more or less the de facto standard for the web when it comes to large JavaScript projects. As outstanding as it is, there are some tricky pieces for the unaccustomed. One such area is TypeScript discriminated unions.

Specifically, given this code:

interface Cat {
  weight: number;
  whiskers: number;
}
interface Dog {
  weight: number;
  friendly: boolean;
}
let animal: Dog | Cat;

…many developers are surprised (and maybe even angry) to discover that when they do animal., only the weight property is valid, and not whiskers or friendly. By the end of this post, this will make perfect sense.

Before we dive in, let’s do a quick (and necessary) review of structural typing, and how it differs from nominal typing. This will set up our discussion of TypeScript’s discriminated unions nicely.

Structural typing

The best way to introduce structural typing is to compare it to what it’s not. Most typed languages you’ve probably used are nominally typed. Consider this C# code (Java or C++ would look similar):

class Foo {
  public int x;
}
class Blah {
  public int x;
}

Even though Foo and Blah are structured exactly the same, they cannot be assigned to one another. The following code:

Blah b = new Foo();

…generates this error:

Cannot implicitly convert type 'Foo' to 'Blah'

The structure of these classes is irrelevant. A variable of type Foo can only be assigned to instances of the Foo class (or subclasses thereof).

TypeScript operates the opposite way. TypeScript considers types to be compatible if they have the same structure—hence the name, structural typing. Get it?

So, the following runs without error:

class Foo {
  x: number = 0;
}
class Blah {
  x: number = 0;
}
let f: Foo = new Blah();
let b: Blah = new Foo();

Types as sets of matching values

Let’s hammer this home. Given this code:

class Foo {
  x: number = 0;
}

let f: Foo;

f is a variable holding any object that matches the structure of instances created by the Foo class which, in this case, means an x property that represents a number. That means even a plain JavaScript object will be accepted.

let f: Foo;
f = {
  x: 0
}

Unions

Thanks for sticking with me so far. Let’s get back to the code from the beginning:

interface Cat {
  weight: number;
  whiskers: number;
}
interface Dog {
  weight: number;
  friendly: boolean;
}

We know that this:

let animal: Dog;

…makes animal any object that has the same structure as the Dog interface. So what does the following mean?

let animal: Dog | Cat;

This types animal as any object that matches the Dog interface, or any object that matches the Cat interface.

So why does animal—as it exists now—only allow us to access the weight property? To put it simply, it’s because TypeScript does not know which type it is. TypeScript knows that animal has to be either a Dog or a Cat, but it could be either (or both at the same time, but let’s keep it simple). We’d likely get runtime errors if we were allowed to access the friendly property when the instance wound up being a Cat instead of a Dog, and likewise for the whiskers property if the object wound up being a Dog.

Type unions are unions of valid values rather than unions of properties. Developers often write something like this:

let animal: Dog | Cat;

…and expect animal to have the union of Dog and Cat properties. But again, that’s a mistake. This types animal as having a value that matches the union of valid Dog values and valid Cat values. But TypeScript will only allow you to access properties it knows are there, which, for now, means only the properties that exist on every type in the union.
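To see that in code, here’s a quick sketch reusing the Cat and Dog interfaces (the error messages are paraphrased):

interface Cat {
  weight: number;
  whiskers: number;
}
interface Dog {
  weight: number;
  friendly: boolean;
}

declare let animal: Dog | Cat;

animal.weight; // OK: weight exists on both Cat and Dog
// animal.whiskers; // Error: Property 'whiskers' does not exist on type 'Dog | Cat'
// animal.friendly; // Error: Property 'friendly' does not exist on type 'Dog | Cat'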

Narrowing

Right now, we have this:

let animal: Dog | Cat;

How do we properly treat animal as a Dog when it’s a Dog, and access properties on the Dog interface, and likewise when it’s a Cat? For now, we can use the in operator. This is an old-school JavaScript operator you probably don’t see very often, but it essentially allows us to test if a property is in an object. Like this:

let o = { a: 12 };

"a" in o; // true
"x" in o; // false

It turns out TypeScript is deeply integrated with the in operator. Let’s see how:

let animal: Dog | Cat = {} as any;

if ("friendly" in animal) {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

This code produces no errors. When inside the if block, TypeScript knows there’s a friendly property, and therefore casts animal as a Dog. And when inside the else block, TypeScript similarly treats animal as a Cat. You can even see this if you hover over the animal object inside these blocks in your code editor:

Showing a tooltip open on top of a TypeScript discriminated unions example that shows `let animal: Dog`.
Showing a tooltip open on top of a TypeScript discriminated union example that shows `let animal: Cat`.

Discriminated unions

You might expect the blog post to end here but, unfortunately, narrowing type unions by checking for the existence of properties is incredibly limited. It worked well for our trivial Dog and Cat types, but things can easily get more complicated, and more fragile, when we have more types, as well as more overlap between those types.

This is where discriminated unions come in handy. We’ll keep everything the same from before, except add a property to each type whose only job is to distinguish (or “discriminate”) between the types:

interface Cat {
  weight: number;
  whiskers: number;
  ANIMAL_TYPE: "CAT";
}
interface Dog {
  weight: number;
  friendly: boolean;
  ANIMAL_TYPE: "DOG";
}

Note the ANIMAL_TYPE property on both types. Don’t mistake this as a string with two different values; this is a literal type. ANIMAL_TYPE: "CAT"; means a type that holds exactly the string "CAT", and nothing else.

And now our check becomes a bit more reliable:

let animal: Dog | Cat = {} as any;

if (animal.ANIMAL_TYPE === "DOG") {
  console.log(animal.friendly);
} else {
  console.log(animal.whiskers);
}

Assuming each type participating in the union has a distinct value for the ANIMAL_TYPE property, this check becomes foolproof.

The only downside is that you now have a new property to deal with. Any time you create an instance of a Dog or a Cat, you have to supply the single correct value for the ANIMAL_TYPE. But don’t worry about forgetting because TypeScript will remind you. 🙂

Showing the TypeScript discriminated union for a createDog function that returns weight and friendly properties.
Screenshot of TypeScript displaying a warning in the code editor as a result of not providing a single value for the ANIMAL_TYPE property.
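Since screenshots don’t translate well to text, here’s a hypothetical createDog factory in the same spirit; TypeScript will reject any value other than the literal "DOG" for the discriminant:

interface Dog {
  weight: number;
  friendly: boolean;
  ANIMAL_TYPE: "DOG";
}

// Hypothetical factory: the literal type means only "DOG" is accepted for ANIMAL_TYPE.
function createDog(weight: number, friendly: boolean): Dog {
  return { weight, friendly, ANIMAL_TYPE: "DOG" };
}

const rex = createDog(20, true); // rex.ANIMAL_TYPE is typed as "DOG"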


Further reading

If you’d like to learn more, I’d recommend the TypeScript docs on narrowing. That’ll provide some deeper coverage of what we went over here. Inside of that link is a section on type predicates. These allow you to define your own custom checks to narrow types, without needing to use type discriminators, and without relying on the in keyword.
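As a quick taste, here’s a minimal sketch of a type predicate built on the discriminated Cat and Dog types from above:

// The `animal is Dog` return type tells TypeScript that a `true` result
// narrows the argument to Dog at the call site.
function isDog(animal: Dog | Cat): animal is Dog {
  return animal.ANIMAL_TYPE === "DOG";
}

declare let animal: Dog | Cat;

if (isDog(animal)) {
  console.log(animal.friendly); // narrowed to Dog
} else {
  console.log(animal.whiskers); // narrowed to Cat
}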

Conclusion

At the beginning of this article, I said it would make sense why weight is the only accessible property in the following example:

interface Cat {
  weight: number;
  whiskers: number;
}
interface Dog {
  weight: number;
  friendly: boolean;
}
let animal: Dog | Cat;

What we learned is that TypeScript only knows that animal could be either a Dog or a Cat, but it doesn’t know which. As such, all we get is weight, which is the only property common to the two.

The concept of discriminated unions is how TypeScript differentiates between those objects and does so in a way that scales extremely well, even with larger sets of objects. As such, we had to create a new ANIMAL_TYPE property on both types that holds a single literal value we can use to check against. Sure, it’s another thing to track, but it also produces more reliable results—which is what we want from TypeScript in the first place.


Demystifying TypeScript Discriminated Unions originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Svelte for the Experienced React Dev

This post is an accelerated introduction to Svelte from the point of view of someone with solid experience with React. I’ll provide a quick introduction, and then shift focus to things like state management and DOM interoperability, among other things. I plan on moving somewhat quickly, so I can cover a lot of topics. At the end of the day, I’m mainly hoping to spark some interest in Svelte.

For a straightforward introduction to Svelte, no blog post could ever beat the official tutorial or docs.

“Hello, World!” Svelte style

Let’s start with a quick tour of what a Svelte component looks like.

<script>
  let number = 0;
</script>

<style>
  h1 {
    color: blue;
  }
</style>

<h1>Value: {number}</h1>

<button on:click={() => number++}>Increment</button>
<button on:click={() => number--}>Decrement</button> 

That content goes in a .svelte file, and is processed by the Rollup or webpack plugin to produce a Svelte component. There are a few pieces here. Let’s walk through them.

First, we add a <script> tag with any state we need.

We can also add a <style> tag with any CSS we want. These styles are scoped to the component in such a way that, here, <h1> elements in this component will be blue. Yes, scoped styles are built into Svelte, without any need for external libraries. With React, you’d typically need to use a third-party solution to achieve scoped styling, such as css-modules, styled-components, or the like (there are dozens, if not hundreds, of choices).

Then there’s the HTML markup. As you’d expect, there are some HTML bindings you’ll need to learn, like {#if}, {#each}, etc. These domain-specific language features might seem like a step back from React, where everything is “just JavaScript.” But there are a few things worth noting: Svelte allows you to put arbitrary JavaScript inside of these bindings. So something like this is perfectly valid:

{#if childSubjects?.length}

If you jumped into React from Knockout or Ember and never looked back, this might come as a (happy) surprise to you.

Also, the way Svelte processes its components is very different from React. React re-runs all components any time any state within a component, or anywhere in an ancestor (unless you “memoize”), changes. This can get inefficient, which is why React ships things like useCallback and useMemo to prevent unneeded re-calculations of data.

Svelte, on the other hand, analyzes your template, and creates targeted DOM update code whenever any relevant state changes. In the component above, Svelte will see the places where number changes, and add code to update the <h1> text after the mutation is done. This means you never have to worry about memoizing functions or objects. In fact, you don’t even have to worry about side-effect dependency lists, although we’ll get to that in a bit.

But first, let’s talk about …

State management

In React, when we need to manage state, we use the useState hook. We provide it an initial value, and it returns a tuple with the current value, and a function we can use to set a new value. It looks something like this:

import React, { useState } from "react";

export default function (props) {
  const [number, setNumber] = useState(0);
  return (
    <>
      <h1>Value: {number}</h1>
      <button onClick={() => setNumber(n => n + 1)}>Increment</button>
      <button onClick={() => setNumber(n => n - 1)}>Decrement</button>
    </>
  );
}

Our setNumber function can be passed wherever we’d like, to child components, etc.

Things are simpler in Svelte. We can create a variable, and update it as needed. Svelte’s ahead-of-time compilation (as opposed to React’s runtime re-rendering) will do the footwork of tracking where it’s updated, and force an update to the DOM. The same simple example from above might look like this:

<script>
  let number = 0;
</script>

<h1>Value: {number}</h1>
<button on:click={() => number++}>Increment</button>
<button on:click={() => number--}>Decrement</button>

Also of note here is that Svelte requires no single wrapping element like JSX does. Svelte has no equivalent of the React fragment <></> syntax, since it’s not needed.

But what if we want to pass an updater function to a child component so it can update this piece of state, like we can with React? We can just write the updater function like this:

<script>
  import Component3a from "./Component3a.svelte";
        
  let number = 0;
  const setNumber = cb => number = cb(number);
</script>

<h1>Value: {number}</h1>

<button on:click={() => setNumber(val => val + 1)}>Increment</button>
<button on:click={() => setNumber(val => val - 1)}>Decrement</button>

Now, we pass it where needed — or stay tuned for a more automated solution.

Reducers and stores

React also has the useReducer hook, which allows us to model more complex state. We provide a reducer function, and it gives us the current value, and a dispatch function that allows us to invoke the reducer with a given argument, thereby triggering a state update, to whatever the reducer returns. Our counter example from above might look like this:

import React, { useReducer } from "react";

function reducer(currentValue, action) {
  switch (action) {
    case "INC":
      return currentValue + 1;
    case "DEC":
      return currentValue - 1;
  }
}

export default function (props) {
  const [number, dispatch] = useReducer(reducer, 0);
  return (
    <div>
      <h1>Value: {number}</h1>
      <button onClick={() => dispatch("INC")}>Increment</button>
      <button onClick={() => dispatch("DEC")}>Decrement</button>
    </div>
  );
}

Svelte doesn’t directly have something like this, but what it does have is called a store. The simplest kind of store is a writable store. It’s an object that holds a value. To set a new value, you can call set on the store and pass the new value, or you can call update, and pass in a callback function, which receives the current value, and returns the new value (exactly like React’s useState).

To read the current value of a store at a moment in time, there’s a get function that can be called, which returns its current value. Stores also have a subscribe function, which we can pass a callback to, and that will run whenever the value changes.
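To make those APIs concrete, here’s a minimal sketch using a store outside of any component:

import { writable, get } from "svelte/store";

const count = writable(0);

// subscribe returns an unsubscribe function
const unsubscribe = count.subscribe(value => console.log("count is now", value));

count.set(1);             // set the value directly
count.update(n => n + 1); // or compute it from the current value

console.log(get(count));  // read the current value once: 2

unsubscribe();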

Svelte being Svelte, there’s some nice syntactic shortcuts to all of this. If you’re inside of a component, for example, you can just prefix a store with the dollar sign to read its value, or directly assign to it, to update its value. Here’s the counter example from above, using a store, with some extra side-effect logging, to demonstrate how subscribe works:

<script>
  import { writable, derived } from "svelte/store";
        
  let writableStore = writable(0);
  let doubleValue = derived(writableStore, $val => $val * 2);
        
  writableStore.subscribe(val => console.log("current value", val));
  doubleValue.subscribe(val => console.log("double value", val))
</script>

<h1>Value: {$writableStore}</h1>

<!-- manually use update -->
<button on:click={() => writableStore.update(val => val + 1)}>Increment</button>
<!-- use the $ shortcut -->
<button on:click={() => $writableStore--}>Decrement</button>

<br />

Double the value is {$doubleValue}

Notice that I also added a derived store above. The docs cover this in depth, but briefly, derived stores allow you to project one store (or many stores) to a single, new value, using the same semantics as a writable store.
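For example, here’s a small sketch of a derived store that combines two source stores into one value:

import { writable, derived } from "svelte/store";

const firstName = writable("Ada");
const lastName = writable("Lovelace");

// The callback receives the current values of every source store.
const fullName = derived(
  [firstName, lastName],
  ([$firstName, $lastName]) => `${$firstName} ${$lastName}`
);

fullName.subscribe(name => console.log(name)); // "Ada Lovelace"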

Stores in Svelte are incredibly flexible. We can pass them to child components, alter and combine them, or make them read-only by passing them through a derived store. We can even re-create some of the React abstractions you might like, or need, if we’re converting React code over to Svelte.

React APIs with Svelte

With all that out of the way, let’s return to React’s useReducer hook from before.

Let’s say we really like defining reducer functions to maintain and update state. Let’s see how difficult it would be to leverage Svelte stores to mimic React’s useReducer API. We basically want to call our own useReducer, pass in a reducer function with an initial value, and get back a store with the current value, as well as a dispatch function that invokes the reducer and updates our store. Pulling this off is actually not too bad at all.

import { writable, derived } from "svelte/store";

export function useReducer(reducer, initialState) {
  const state = writable(initialState);
  const dispatch = (action) =>
    state.update(currentState => reducer(currentState, action));
  const readableState = derived(state, ($state) => $state);

  return [readableState, dispatch];
}

The usage in Svelte is almost identical to React. The only difference is that our current value is a store, rather than a raw value, so we need to prefix it with the $ to read the value (or manually call get or subscribe on it).

<script>
  import { useReducer } from "./useReducer";
        
  function reducer(currentValue, action) {
    switch (action) {
      case "INC":
        return currentValue + 1;
      case "DEC":
        return currentValue - 1;
    }
  }
  const [number, dispatch] = useReducer(reducer, 0);      
</script>

<h1>Value: {$number}</h1>

<button on:click={() => dispatch("INC")}>Increment</button>
<button on:click={() => dispatch("DEC")}>Decrement</button>

What about useState?

If you really love the useState hook in React, implementing that is just as straightforward. In practice, I haven’t found this to be a useful abstraction, but it’s a fun exercise that really shows Svelte’s flexibility.

import { writable, derived } from "svelte/store";

export function useState(initialState) {
  const state = writable(initialState);
  const update = (val) =>
    state.update(currentState =>
      typeof val === "function" ? val(currentState) : val
    );
  const readableState = derived(state, $state => $state);

  return [readableState, update];
}

Are two-way bindings really evil?

Before closing out this state management section, I’d like to touch on one final trick that’s specific to Svelte. We’ve seen that Svelte allows us to pass updater functions down the component tree in any way that we can with React. This is frequently done to allow child components to notify their parents of state changes. We’ve all done it a million times. A child component changes state somehow, and then calls a function passed to it from a parent, so the parent can be made aware of that state change.

In addition to supporting this passing of callbacks, Svelte also allows a parent component to two-way bind to a child’s state. For example, let’s say we have this component:

<!-- Child.svelte -->
<script>
  export let val = 0;
</script>

<button on:click={() => val++}>
  Increment
</button>

Child: {val}

This creates a component with a val prop. The export keyword is how components declare props in Svelte. Normally, with props, we pass them in to a component, but here we’ll do things a little differently. As we can see, this prop is modified by the child component. In React this code would be wrong and buggy, but with Svelte, a parent component rendering it can do this:

<!-- Parent.svelte -->
<script>
  import Child from "./Child.svelte";
        
  let parentVal;
</script>

<Child bind:val={parentVal} />
Parent Val: {parentVal}

Here, we’re binding a variable in the parent component, to the child’s val prop. Now, when the child’s val prop changes, our parentVal will be updated by Svelte, automatically.

Two-way binding is controversial for some. If you hate this then, by all means, feel free to never use it. But used sparingly, I’ve found it to be an incredibly handy tool to reduce boilerplate.

Side effects in Svelte, without the tears (or stale closures)

In React, we manage side effects with the useEffect hook. It looks like this:

useEffect(() => {
  console.log("Current value of number", number);
}, [number]);

We write our function with the dependency list at the end. On every render, React inspects each item in the list, and if any are referentially different from the last render, the callback re-runs. If we’d like to cleanup after the last run, we can return a cleanup function from the effect.

For simple things, like a number changing, it’s easy. But as any experienced React developer knows, useEffect can be insidiously difficult for non-trivial use cases. It’s surprisingly easy to accidentally omit something from the dependency array and wind up with a stale closure.

In Svelte, the most basic form of handling a side effect is a reactive statement, which looks like this:

$: {
  console.log("number changed", number);
}

We prefix a code block with $: and put the code we’d like to execute inside of it. Svelte analyzes which dependencies are read, and whenever they change, Svelte re-runs our block. There’s no direct way to have the cleanup run from the last time the reactive block was run, but it’s easy enough to work around if we really need it:

let cleanup;
$: {
  cleanup?.();
  console.log("number changed", number);
  cleanup = () => console.log("cleanup from number change");
}

No, this won’t lead to an infinite loop: re-assignments from within a reactive block won’t re-trigger the block.

While this works, these cleanup effects typically need to run when your component unmounts, and Svelte has that covered: the onMount function allows us to return a cleanup function that runs when the component is destroyed, and, more directly, there’s also an onDestroy function that does what you’d expect.
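For completeness, here’s a minimal sketch of both, inside a component’s <script> block (the interval is just a stand-in side effect):

import { onMount, onDestroy } from "svelte";

onMount(() => {
  const interval = setInterval(() => console.log("tick"), 1000);
  // Returning a function from onMount registers it as cleanup on unmount.
  return () => clearInterval(interval);
});

// Or register the teardown directly.
onDestroy(() => console.log("component destroyed"));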

Spicing things up with actions

The above all works well enough, but Svelte really shines with actions. Side effects are frequently tied to our DOM nodes. We might want to integrate an old (but still great) jQuery plugin on a DOM node, and tear it down when that node leaves the DOM. Or maybe we want to set up a ResizeObserver for a node, and tear it down when the node leaves the DOM, and so on. This is a common enough requirement that Svelte builds it in with actions. Let’s see how.

{#if show}
  <div use:myAction>
    Hello                
  </div>
{/if}

Note the use:actionName syntax. Here we’ve associated this <div> with an action called myAction, which is just a function.

function myAction(node) {
  console.log("Node added", node);
}

This action runs whenever the <div> enters the DOM, and passes the DOM node to it. This is our chance to add our jQuery plugins, set up our ResizeObserver, etc. Not only that, but we can also return a cleanup function from it, like this:

function myAction(node) {
  console.log("Node added", node);

  return {
    destroy() {
      console.log("Destroyed");
    }
  };
}

Now the destroy() callback will run when the node leaves the DOM. This is where we tear down our jQuery plugins, etc.

But wait, there’s more!

We can even pass arguments to an action, like this:

<div use:myAction={number}>
  Hello                
</div>

That argument will be passed as the second argument to our action function:

function myAction(node, param) {
  console.log("Node added", node, param);

  return {
    destroy() {
      console.log("Destroyed");
    }
  };
}

And if you’d like to do additional work whenever that argument changes, you can return an update function:

function myAction(node, param) {
  console.log("Node added", node, param);

  return {
    update(param) {
      console.log("Update", param);
    },
    destroy() {
      console.log("Destroyed");
    }
  };
}

When the argument to our action changes, the update function will run. To pass multiple arguments to an action, we pass an object:

<div use:myAction={{number, otherValue}}>
  Hello                
</div>

…and Svelte re-runs our update function whenever any of the object’s properties change.

Actions are one of my favorite features of Svelte; they’re incredibly powerful.

Odds and Ends

Svelte also ships a number of great features that have no counterpart in React. There are form bindings (which the tutorial covers), as well as CSS helpers.

Developers coming from React might be surprised to learn that Svelte also ships animation support out of the box. Rather than searching on npm and hoping for the best, it’s… built in. It even includes support for spring physics, and enter and exit animations, which Svelte calls transitions.

Svelte’s answer to React.Children is slots, which can be named or not, and are covered nicely in the Svelte docs. I’ve found them much simpler to reason about than React’s Children API.

Lastly, one of my favorite, almost hidden features of Svelte is that it can compile its components into actual web components. The svelte:options helper has a tagName property that enables this. But be sure to set the corresponding property in the webpack or Rollup config. With webpack, it would look something like this:

{
  loader: "svelte-loader",
  options: {
    customElement: true
  }
}
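On the component side, the tag name itself is declared with svelte:options. In the Svelte 3-era API this option is spelled tag (later major versions renamed it, so check the docs for your version); here’s a minimal sketch:

<svelte:options tag="my-counter" />

<script>
  let number = 0;
</script>

<button on:click={() => number++}>Clicked {number} times</button>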

Interested in giving Svelte a try?

Any of these items would make a great blog post in and of itself. While we may have only scratched the surface of things like state management and actions, we saw how Svelte’s features not only match up pretty well with React’s, but can even mimic many of React’s APIs. And that’s before we briefly touched on Svelte’s conveniences, like built-in animations (or transitions) and the ability to convert Svelte components into bona fide web components.

I hope I’ve succeeded in sparking some interest, and if I have, there’s no shortage of docs, tutorials, online courses, etc that dive into these topics (and more). Let me know in the comments if you have any questions along the way!


The post Svelte for the Experienced React Dev appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Coordinating Svelte Animations With XState

This post is an introduction to XState as it might be used in a Svelte project. XState is unique in the JavaScript ecosystem. It won’t keep your DOM synced with your application state, but it will help manage your application’s state by allowing you to model it as a finite state machine (FSM).

A deep dive into state machines and formal languages is beyond the scope of this post, but Jon Bellah does that in another CSS-Tricks article. For now, think of an FSM as a flow chart. Flow charts have a number of states, represented as bubbles, and arrows leading from one state to the next, signifying a transition from one state to the next. State machines can have more than one arrow leading out of a state, or none at all if it’s a final state, and they can even have arrows leaving a state, and pointing right back into that same state.

If that all sounds overwhelming, relax, we’ll get into all the details, nice and slow. For now, the high level view is that, when we model our application as a state machine, we’ll be creating different “states” our application can be in (get it … state machine … states?), and the events that happen and cause changes to state will be the arrows between those states. XState calls the states “states,” and the arrows between them “transitions,” which are triggered by “events.”

Our example

XState has a learning curve, which makes it challenging to teach. With too contrived a use case it’ll appear needlessly complex. It’s only when an application’s code gets a bit tangled that XState shines. This makes writing about it tricky. With that said, the example we’ll look at is an autocomplete widget (sometimes called autosuggest), or an input box that, when clicked, reveals a list of items to choose from, which filter as you type in the input.

For this post we’ll look at getting the animation code cleaned up. Here’s the starting point:

This is actual code from my svelte-helpers library, though with unnecessary pieces removed for this post. You can click the input and filter the items, but you won’t be able to select anything, “arrow down” through the items, hover, etc. I’ve removed all the code that’s irrelevant to this post.

We’ll be looking at the animation of the list of items. When you click the input, and the results list first renders, we want to animate it down. As you type and filter, changes to the list’s dimensions will animate larger and smaller. And when the input loses focus, or you press ESC, we animate the list’s height to zero while fading it out, and then remove it from the DOM (and not before). To make things more interesting (and nicer for the user), let’s use a different spring configuration for the closing than for the opening, so the list closes a bit more quickly and stiffly, and the unneeded UI doesn’t linger on the screen too long.

If you’re wondering why I’m not using Svelte transitions to manage the animations in and out of the DOM, it’s because I’m also animating the list’s dimensions when it’s open, as the user filters, and coordinating between transitions and regular spring animations is a lot harder than simply waiting for a spring update to finish getting to zero before removing an element from the DOM. For example, what happens if the user quickly types and filters the list as it’s animating in? As we’ll see, XState makes tricky state transitions like this easy.

Scoping the Problem

Let’s take a look at the code from the example so far. We’ve got an open variable to control when the list is open, and a resultsListVisible property to control whether it should be in the DOM. We also have a closing variable that controls whether the list is in the process of closing.

On line 28, there’s an inputEngaged method that runs when the input is clicked or focused. For now, let’s just note that it sets open and resultsListVisible to true. inputChanged is called when the user types in the input, and sets open to true. This handles the case where the input is focused, the user presses Escape to close the list, and then starts typing, so the list can re-open. And, of course, the inputBlurred function runs when you’d expect, setting closing to true and open to false.

Let’s pick apart this tangled mess and see how the animations work. Note the slideInSpring and opacitySpring at the top. The former slides the list up and down, and adjusts the size as the user types. The latter fades the list out when hidden. We’ll focus mostly on the slideInSpring.

Take a look at the monstrosity of a function called setSpringDimensions. This updates our slide spring. Focusing on the important pieces, it takes a few boolean arguments describing what’s happening. If the list is opening, we set the opening spring config, immediately set the list’s width via the { hard: true } config (I want the list to only slide down, not down and out), and then set the height. If we’re closing, we animate to zero and, when the animation is complete, set resultsListVisible to false (if the closing animation is interrupted, Svelte is smart enough not to resolve the promise, so the callback will never run). Lastly, this method is also called any time the size of the results list changes, i.e., as the user filters. We set up a ResizeObserver elsewhere to manage this.

Spaghetti galore

Let’s take stock of this code.

  • We have our open variable which tracks if the list is open.
  • We have the resultsListVisible variable which tracks if the list should be in the DOM (and set to false after the close animation is complete).
  • We have the closing variable that tracks if the list is in the process of closing, which we check for in the input focus/click handler so we can reverse the closing animation if the user quickly re-engages the widget before it’s done closing.
  • We also have setSpringDimensions that we call in four different places. It sets our springs depending on whether the list is opening, closing, or just resizing while open (i.e. if the user filters the list).
  • Lastly, we have a resultsListRendered Svelte action that runs when the results list DOM element renders. It starts up our ResizeObserver, and when the DOM node unmounts, sets closing to false.

Did you catch the bug? When the ESC button is pressed, I’m only setting open to false. I forgot to set closing to true, and call setSpringDimensions(false, true). This bug was not purposefully contrived for this blog post! That’s an actual mistake I made when I was overhauling this widget’s animations. I could just copy and paste the code in inputBlurred over to where the escape button is caught, or even move it to a new function and call it from both places. This bug isn’t fundamentally hard to solve, but it does increase the cognitive load of the code.

There’s a lot of things we’re keeping track of, but worst of all, this state is scattered all throughout the module. Take any piece of state described above, and use CodeSandbox’s Find feature to view all the places where that piece of state is used. You’ll see your cursor bouncing across the file. Now imagine you’re new to this code, trying to make sense of it. Think about the growing mental model of all these state pieces that you’ll have to keep track of, figuring out how it works based on all the places it exists. We’ve all been there; it sucks. XState offers a better way; let’s see how.

Introducing XState

Let’s step back a bit. Wouldn’t it be simpler to model our widget in terms of what state it’s in, with events happening as the user interacts, which cause side effects, and transitions to new states? Of course, but that’s what we were already doing; the problem is, the code is scattered everywhere. XState gives us the ability to properly model our state in this way.

Setting expectations

Don’t expect XState to magically make all of our complexity vanish. We still need to coordinate our springs, adjust the spring’s config based on opening and closing states, handle resizes, etc. What XState gives us is the ability to centralize this state management code in a way that’s easy to reason about, and adjust. In fact, our overall line count will increase a bit, as a result of our state machine setup. Let’s take a look.

Your first state machine

Let’s jump right in, and see what a bare bones state machine looks like. I’m using XState’s FSM package, which is a minimal, pared down version of XState, with a tiny 1KB bundle size, perfect for libraries (like an autosuggest widget). It doesn’t have a lot of advanced features like the full XState package, but we wouldn’t need them for our use case, and we wouldn’t want them for an introductory post like this.

The code for our state machine is below, and the interactive demo is over at Code Sandbox. There’s a lot, but we’ll go over it shortly. And to be clear, it doesn’t work yet.

import { createMachine, assign, interpret } from "@xstate/fsm";

const stateMachine = createMachine(
  {
    initial: "initial",
    context: {
      open: false,
      node: null
    },
    states: {
      initial: {
        on: { OPEN: "open" }
      },
      open: {
        on: {
          RENDERED: { actions: "rendered" },
          RESIZE: { actions: "resize" },
          CLOSE: "closing"
        },
        entry: "opened"
      },
      closing: {
        on: {
          OPEN: { target: "open", actions: ["resize"] },
          CLOSED: "closed"
        },
        entry: "close"
      },
      closed: {
        on: {
          OPEN: "open"
        },
        entry: "closed"
      }
    }
  },
  {
    actions: {
      opened: assign(context => {
        return { ...context, open: true };
      }),
      rendered: assign((context, evt) => {
        const { node } = evt;
        return { ...context, node };
      }),
      close() {},
      resize(context) {},
      closed: assign(() => {
        return { open: false, node: null };
      })
    }
  }
);

Let’s go from top to bottom. The initial property controls what the initial state is, which I’ve called “initial.” context is the data associated with our state machine. I’m storing a boolean for whether the results list is currently open, as well as a node object for that same results list. Next we see our states. Each state is a key in the states property. For most states, you can see we have an on property, and an entry property.

on configures events. For each event, we can transition to a new state; we can run side effects, called actions; or both. For example, when the OPEN event happens inside of the initial state, we move into the open state. When the RENDERED event happens in the open state, we run the rendered action. And when the OPEN event happens inside the closing state, we transition into the open state, and also run the resize action. The entry field you see on most states configures an action to run automatically whenever a state is entered. There are also exit actions, although we don’t need them here.

We still have a few more things to cover. Let’s look at how our state machine’s data, or context, can change. When we want an action to modify context, we wrap it in assign and return the new context from our action; if we don’t need any processing, we can just pass the new state directly to assign. If our action does not update context, i.e., it’s just for side effects, then we don’t wrap our action function in assign, and just perform whatever side effects we need.
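To make that distinction concrete, here’s a small sketch of both forms with the FSM package’s assign (the count context and action names are hypothetical):

import { assign } from "@xstate/fsm";

// Callback form: compute the next context from the current one.
const increment = assign(context => ({ ...context, count: context.count + 1 }));

// Object form: when no computation is needed, pass the new values directly.
const reset = assign({ count: 0 });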

Affecting change in our state machine

We have a cool model for our state machine, but how do we run it? We use the interpret function.

const stateMachineService = interpret(stateMachine).start();

Now stateMachineService is our running state machine, on which we can invoke events to force our transitions and actions. To fire an event, we call send, passing the event name, and then, optionally, the event object. For example, in our Svelte action that runs when the results list first mounts in the DOM, we have this:

stateMachineService.send({ type: "RENDERED", node });

That’s how the rendered action gets the node for the results list. If you look around the rest of the AutoComplete.svelte file, you’ll see all the ad hoc state management code replaced with single line event dispatches. In the event handler for our input click/focus, we run the OPEN event. Our ResizeObserver fires the RESIZE event. And so on.

Let’s pause for a moment and appreciate the things XState gives us for free here. Let’s look at the handler that runs when our input is clicked or focused before we added XState.

function inputEngaged(evt) {
  if (closing) {
    setSpringDimensions();
  }
  open = true;
  resultsListVisible = true;
} 

Before, we were checking to see if we were closing, and if so, forcing a re-calculation of our sliding spring. Otherwise we opened our widget. But what happened if we clicked on the input when it was already open? The same code re-ran. Fortunately that didn’t really matter. Svelte doesn’t care if we re-set open and resultsListVisible to the values they already held. But those concerns disappear with XState. The new version looks like this:


function inputEngaged(evt) {
  stateMachineService.send("OPEN");
}

If our state machine is already in the open state, and we fire the OPEN event, then nothing happens, since there’s no OPEN event configured for that state. And that special handling for when the input is clicked when the results are closing? That’s also handled right in the state machine config — notice how the OPEN event tacks on the resize action when it’s run from the closing state.

And, of course, we’ve fixed the ESC key bug from before. Now, pressing the key simply fires the CLOSE event, and that’s that.
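In the component, that handler can be as simple as something like this (a hypothetical keydown handler; the real widget wires it up to the input):

function keyDown(evt) {
  if (evt.key === "Escape") {
    stateMachineService.send("CLOSE");
  }
}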

Finishing up

The ending is almost anti-climactic. We need to take all of the work we were doing before, and simply move it to the right place among our actions. XState does not remove the need for us to write code; it only provides a structured, clear place to put it.

{
  actions: {
    opened: assign({ open: true }),
    rendered: assign((context, evt) => {
      const { node } = evt;
      const dimensions = getResultsListDimensions(node);
      itemsHeightObserver.observe(node);
      opacitySpring.set(1, { hard: true });
      Object.assign(slideInSpring, SLIDE_OPEN);
      slideInSpring.update(prev => ({ ...prev, width: dimensions.width }), {
        hard: true
      });
      slideInSpring.set(dimensions, { hard: false });
      return { ...context, node };
    }),
    close() {
      opacitySpring.set(0);
      Object.assign(slideInSpring, SLIDE_CLOSE);
      slideInSpring
        .update(prev => ({ ...prev, height: 0 }))
        .then(() => {
          stateMachineService.send("CLOSED");
        });
    },
    resize(context) {
      opacitySpring.set(1);
      slideInSpring.set(getResultsListDimensions(context.node));
    },
    closed: assign(() => {
      itemsHeightObserver.unobserve(resultsList);
      return { open: false, node: null };
    })
  }
}

Odds and ends

Our animation state is in our state machine, but how do we get it out? We need the open state to control our results list rendering, and, while not used in this demo, the real version of this autosuggest widget needs the results list DOM node for things like scrolling the currently highlighted item into view.

It turns out our stateMachineService has a subscribe method that fires whenever there’s a state change. The callback you pass is invoked with the current state machine state, which includes a context object. But Svelte has a special trick up its sleeve: its reactive syntax of $: doesn’t only work with component variables and Svelte stores; it also works with any object with a subscribe method. That means we can sync with our state machine with something as simple as this:

$: ({ open, node: resultsList } = $stateMachineService.context);

Just a regular destructuring, with some parens to help things get parsed correctly.

One quick note here, as an area for improvement. Right now, we have some actions that both perform a side effect and update state. Ideally, we should probably split these up into two actions: one just for the side effect, and the other using assign for the new state. But I decided to keep things as simple as possible for this article to help ease the introduction of XState, even if a few things wound up not being quite ideal.

Here’s the demo

Parting thoughts

I hope this post has sparked some interest in XState. I’ve found it to be an incredibly useful, easy to use tool for managing complex state. Please know that we’ve only scratched the surface. We focused on the minimal fsm package, but the entire XState library is capable of a lot more than what we covered here, from nested states, to first-class support for Promises, and it even has a state visualization tool! I urge you to check it out.

Happy coding!


The post Coordinating Svelte Animations With XState appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Svelte and Spring Animations

Spring animations are a wonderful way to make UI interactions come to life. Rather than merely changing a property at a constant rate over a period of time, springs allow us to move things using spring physics, which gives the impression of a real thing moving, and can appear more natural to users.

I’ve written about spring animations previously. That post was based on React, using react-spring for the animations. This post will explore similar ideas in Svelte.

CSS devs! It’s common to think of easing when it comes to controlling the feel of animations. You could think of “spring” animations as a subcategory of easing that are based on real-world physics.

Svelte actually has springs built into the framework, without needing any external libraries. We’ll rehash what was covered in the first half of my previous post on react-spring. But after that, we’ll take a deep-dive into all the ways these springs can be used with Svelte, and leave the real world implementation for a future post. While that may seem disappointing, Svelte has a number of wonderful, unique features with no counterpart in React, which can be effectively integrated with these animation primitives. We’re going to spend some time talking about them.

One other note: some of the demos sprinkled throughout may look odd because I configured the springs to be extra “bouncy” to create a more obvious effect. If you use the code for any of them, be sure to find a spring configuration that works for you.

Here’s a wonderful REPL Rich Harris made to show all the various spring configurations, and how they behave.

A quick primer on Svelte Stores

Before we start, let’s take a very, very quick tour of Svelte stores. While Svelte’s components are more than capable of storing and updating state, Svelte also has the concept of a store, which allows you to store state outside of a component. Since Svelte’s Spring API uses Stores, we’ll quickly introduce the salient parts here.

To create an instance of a store, we can import the writable type, and create it like so:

import { writable } from "svelte/store";
const clicks = writable(0);

The clicks variable is a store that has a value of 0. There are two ways to set a new value of a store: the set and update methods. The former receives the value to which you’re setting the store, while the latter receives a callback, accepting the current value, and returning the new value.

function increment() {
  clicks.update(val => val + 1);
}
function setTo5() {
  clicks.set(5);
}

State is useless if you can’t actually consume it. For this, stores offer a subscribe method, which allows you to be notified of new values — but when using it inside of a component, you can prefix the store’s name with the $ character, which tells Svelte to not only display the current value of the store, but to update when it changes. For example:

<h1>Value {$clicks}</h1>
<button on:click={increment}>Increment</button>
<button on:click={setTo5}>Set to 5</button>

Here’s a full, working example of this code. Stores offer a number of other features, such as derived stores, which allow you to chain stores together, readable stores, and even the ability to be notified when a store is first observed, and when it no longer has observers. But for the purposes of this post, the code shown above is all we need to worry about. Consult the Svelte docs or interactive tutorial for more info.

A crash course on springs

Let’s walk through a quick introduction of springs, and what they accomplish. We’ll take a look at a simple UI that changes a presentational aspect of some elements — opacity and transform — and then look at animating that change.

This is a minimal Svelte component that toggles the opacity of one <div>, and toggles the x-axis transform of another (without any animation).

<script>
  let shown = true;
  let moved = 0;

  const toggleShow = () => (shown = !shown);
  const toggleMove = () => (moved = moved ? 0 : 500);
</script>

<div style="opacity: {shown ? 1 : 0}">Content to toggle</div>
<br />
<button on:click={toggleShow}>Toggle</button>
<hr />
<div class="box" style="transform: translateX({moved}px)">I'm a box.</div>
<br />
<button on:click={toggleMove}>Move it!</button>

These changes are applied instantly, so let’s look at animating them. This is where springs come in. In Svelte, a spring is a store that we set the desired value on, but instead of instantly changing, the store internally uses spring physics to gradually change the value. We can then bind our UI to this changing value, to get a nice animation. Let’s see it in action.

<script>
  import { spring } from "svelte/motion";

  const fadeSpring = spring(1, { stiffness: 0.1, damping: 0.5 });
  const transformSpring = spring(0, { stiffness: 0.2, damping: 0.1 });

  const toggleFade = () => fadeSpring.update(val => (val ? 0 : 1));
  const toggleTransform = () => transformSpring.update(val => (val ? 0 : 500));
  const snapTransform = () => transformSpring.update(val => val, { hard: true });
</script>

<div style="opacity: {$fadeSpring}">Content to fade</div>
<br />
<button on:click={toggleFade}>Fade Toggle</button>

<hr />

<div class="box" style="transform: translateX({$transformSpring}px)">I'm a box.</div>
<br />
<button on:click={toggleTransform}>Move it!</button>
<button on:click={snapTransform}>Snap into place</button>

We get our spring function from Svelte, and set up different spring instances for our opacity, and transform animations. The transform spring config is purposefully set up to be extra springy, to help show how we can temporarily turn off spring animations and instantly apply desired changes (which will come in handy later). At the end of the script block are our click handlers for setting the desired properties. Then, in the HTML, we bind our changing values directly to our elements… and that’s it! That’s all there is to basic spring animations in Svelte.

The only remaining item is the snapTransform function, where we set our transform spring to its current value, but also pass an object as the second argument, with hard: true. This has the effect of immediately applying the desired value with no animation at all.

This demo, as well as the rest of the basic examples we’ll look at in this post, is here:

Animating height

Animating height is trickier than other CSS properties, since we have to know the actual height to which we’re animating. Sadly, we can’t animate to a value of auto. That wouldn’t make sense for a spring, since the spring needs a real number so it can interpolate the correct values via spring physics. And as it happens, you can’t even animate auto height with regular CSS transitions. Fortunately, the web platform gives us a handy tool for getting the height of an element: a ResizeObserver, which enjoys pretty good support among browsers.

Let’s start with a raw height animation of an element, producing a “slide down” effect that we gradually refine in other examples. We’ll be using ResizeObserver to bind to an element’s height. I should note that Svelte does have an offsetHeight binding that can be used to more directly bind an element’s height, but it’s implemented with some <iframe> hacks that cause it to only work on elements that can receive children. This would probably be good enough for most use cases, but I’ll use a ResizeObserver because it allows some nice abstractions in the end.

First, we need a function that syncs an element’s height. It’ll receive the element and return a writable store that initializes a ResizeObserver, which updates the height value on change. Here’s what that looks like:

import { writable } from "svelte/store";

export default function syncHeight(el) {
  return writable(null, (set) => {
    if (!el) {
      return;
    }
    let ro = new ResizeObserver(() => el && set(el.offsetHeight));
    ro.observe(el);
    return () => ro.disconnect();
  });
}

We’re starting the store with a value of null, which we’ll interpret as “haven’t measured yet.” The second argument to writable is called by Svelte when the store becomes active, which it will be as soon as it’s used in a component. This is when we fire up the ResizeObserver and start observing the element. Then, we return a cleanup function, which Svelte calls for us when the store is no longer being used anywhere.

Let’s see this in action:

<script>
  import syncHeight from "../syncHeight";
  import { spring } from "svelte/motion";

  let el;
  let shown = false;
  let open = false;
  let secondParagraph = false;

  const heightSpring = spring(0, { stiffness: 0.1, damping: 0.3 });
  $: heightStore = syncHeight(el);
  $: heightSpring.set(open ? $heightStore || 0 : 0);

  const toggleOpen = () => (open = !open);
  const toggleSecondParagraph = () => (secondParagraph = !secondParagraph);
</script>

<button on:click={ toggleOpen }>Toggle</button>
<button on:click={ toggleSecondParagraph }>Toggle More</button>
<div style="overflow: hidden; height: { $heightSpring }px">
  <div bind:this={el}>
    <div>...</div>
    <br />
    {#if secondParagraph}
    <div>...</div>
    {/if}
  </div>
</div>

Our el variable holds the element we’re animating. We tell Svelte to set it to the DOM element via bind:this={el}. heightSpring is our spring that holds the height value of the element when it’s open, and zero when it’s closed. Our heightStore is what keeps it up to date with the element’s current height. el is initially undefined, and syncHeight returns a junk writable store that basically does nothing. As soon as el is assigned to the <div> node, that line will re-fire — thanks to the $: syntax — and get our writable store with the ResizeObserver listening.

Then, this line:

$: heightSpring.set(open ? $heightStore || 0 : 0);

…listens for changes to the open value, and also changes to the height value. In either case, it updates our spring store. We bind the height in HTML, and we’re done!

Be sure to remember to set overflow to hidden on this outer element so the contents are properly clipped as the element toggles between its opened and closed states. Note that changes to the element’s height also animate into place, which you can see with the “Toggle More” button. You can run this in the embedded demo in the previous section.

Note that this line above:

$: heightStore = syncHeight(el);

…currently causes an error when using server-side rendering (SSR), as explained in this bug. If you’re not using SSR you don’t need to worry about it, and of course by the time you read this that bug may have been fixed. But the workaround is to merely do this:

let heightStore;
$: heightStore = syncHeight(el);

…which works but is hardly ideal.

We probably don’t want the <div> to spring open on first render. Also, the opening spring effect is nice, but when closing, the effect is janky due to some content flickering. We can fix that. To prevent our initial render from animating, we can use the { hard: true } option we saw earlier. Let’s change our call to heightSpring.set to this:

$: heightSpring.set(open ? $heightStore || 0 : 0, getConfig($heightStore));

…and then see about writing a getConfig function that returns an object with the hard property set to true for the first render. Here’s what I came up with:

let shown = false;

const getConfig = val => {
  let active = typeof val === "number";
  let immediate = !shown && active;
  //once we've had a proper height registered, we can animate in the future
  shown = shown || active;
  return immediate ? { hard: true } : {};
};

Remember, our height store initially holds null and only gets a number when the ResizeObserver starts running. We capitalize on this by checking for an actual number. If we have a number, and we haven’t yet shown anything, then we know to show our content immediately, and we do that by setting the immediate value. That value ultimately triggers the hard config value in the spring, which we saw before.

Now let’s tweak the animation to be a bit less, well, springy when we close our content. That way, things won’t flicker when they close. When we initially created our spring, we specified stiffness and damping, like so:

const heightSpring = spring(0, { stiffness: 0.1, damping: 0.3 });

It turns out the spring object itself maintains those properties, which can be set anytime. Let’s update this line:

$: heightSpring.set(open ? $heightStore || 0 : 0, getConfig($heightStore));

That detects changes to the open value (and the heightStore itself) to update the spring. Let’s also update the spring’s settings based on whether we’re opening or closing. Here’s what it looks like:

$: {
  heightSpring.set(open ? $heightStore || 0 : 0, getConfig($heightStore));
  Object.assign(
    heightSpring,
    open ? { stiffness: 0.1, damping: 0.3 } : { stiffness: 0.1, damping: 0.5 }
  );
}

Now when we get a new open or height value, we call heightSpring.set just like before, but we also set stiffness and damping values on the spring that are applied based on whether the element is open. If it’s closed, we set damping up to 0.5, which reduces the springiness. Of course, you’re welcome to tweak all these values and configure them as you’d like! You can see this in the “Animate Height Different Springs” section of the demo.

You might notice our code is starting to grow pretty quickly. We’ve added a lot of boilerplate to cover some of these use cases, so let’s clean things up. Specifically, we’ll make a function that creates our spring and that also exports a sync function to handle our spring config, initial render, etc.

import { spring } from "svelte/motion";

const OPEN_SPRING = { stiffness: 0.1, damping: 0.3 };
const CLOSE_SPRING = { stiffness: 0.1, damping: 0.5 };

export default function getHeightSpring() {
  const heightSpring = spring(0);
  let shown = false;

  const getConfig = (open, val) => {
    let active = typeof val === "number";
    let immediate = open && !shown && active;
    // once we've had a proper height registered, we can animate in the future
    shown = shown || active;
    return immediate ? { hard: true } : {};
  };

  const sync = (open, height) => {
    heightSpring.set(open ? height || 0 : 0, getConfig(open, height));
    Object.assign(heightSpring, open ? OPEN_SPRING : CLOSE_SPRING);
  };

  return { sync, heightSpring };
}

There’s a lot of code here, but it’s all the code we’ve been writing so far, just packaged into a single function. Now our code to use this animation is simplified to just this:

const { heightSpring, sync } = getHeightSpring();
$: heightStore = syncHeight(el);
$: sync(open, $heightStore);

You can see this in the “Animate Height Cleanup” section of the demo.

Some Svelte-specific tricks

Let’s pause for a moment and consider some ways Svelte differs from React, and how we might leverage that to improve what we have even further.

First, the stores we’ve been using to hold springs and changing height values are, unlike React’s hooks, not tied to component rendering. They’re plain JavaScript objects that can be consumed anywhere. And, as alluded to above, we can imperatively subscribe to them to manually observe changing values.

Svelte also has something called actions. These are functions that can be added to a DOM element. When the element is created, Svelte calls the function and passes the element as the first argument. We can also specify additional arguments for Svelte to pass, and provide an update function for Svelte to re-run when those values change. Another thing we can do is provide a cleanup function for Svelte to call when it destroys the element.

Let’s put these tools together in a single action that we can simply drop onto an element to handle all the animation we’ve been writing so far:

export default function slideAnimate(el, open) {
  el.parentNode.style.overflow = "hidden";

  const { heightSpring, sync } = getHeightSpring();
  const doUpdate = () => sync(open, el.offsetHeight);
  const ro = new ResizeObserver(doUpdate);

  const springCleanup = heightSpring.subscribe((height) => {
    el.parentNode.style.height = `${ height }px`;
  });

  ro.observe(el);

  return {
    update(isOpen) {
      open = isOpen;
      doUpdate();
    },
    destroy() {
      ro.disconnect();
      springCleanup();
    }
  };
}

Our function is called with the element we want to animate, as well as the open value. We’ll set the element’s parent to have overflow: hidden. Then we use the same getHeightSpring function from before, set up our ResizeObserver, etc. The real magic is here.

const springCleanup = heightSpring.subscribe((height) => {
  el.parentNode.style.height = `${height}px`;
});

Instead of binding our heightSpring to the DOM, we manually subscribe to changes, then set the height ourselves, manually. We wouldn’t normally do manual DOM updates when using a JavaScript framework like Svelte but, in this case, it’s for a helper library, which is just fine in my opinion.

In the object we’re returning, we define an update function, which Svelte will call when the open value changes. We update the original open argument, which the function closes over (i.e., creates a closure around), and then call doUpdate to sync everything. Svelte calls the destroy function when our DOM node is destroyed.

Best of all, using this action is a snap:

<div use:slideAnimate={open}>

That’s it. When open changes, Svelte calls our update function.

Before we move on, let’s make one other tweak. Notice how we remove the springiness by changing the spring config when we collapse the pane with the “Toggle” button; however, when we make the element smaller by clicking the “Toggle More” button, it shrinks with the usual springiness. I dislike that, and would prefer that shrinking sizes move with the same physics we’re using for collapsing.

Let’s start by removing this line in the getHeightSpring function:

Object.assign(heightSpring, open ? OPEN_SPRING : CLOSE_SPRING);

That line is inside the sync function that getHeightSpring created, which updates our spring settings on every change, based on the open value. With it gone, we can start our spring with the “open” spring config:

const heightSpring = spring(0, OPEN_SPRING);

Now let’s change our spring settings when either the height of our content changes, or when the open value changes. We already have the ability to observe both of those things changing — our ResizeObserver callback fires when the size of the content changes, and the update function of our action fires whenever open changes.

Our ResizeObserver callback can be changed, like this:

let currentHeight = null;
const ro = new ResizeObserver(() => {
  const newHeight = el.offsetHeight;
  const bigger = newHeight > currentHeight;

  if (typeof currentHeight === "number") {
    Object.assign(heightSpring, bigger ? OPEN_SPRING : CLOSE_SPRING);
  }
  currentHeight = newHeight;
  doUpdate();
});

currentHeight holds the current value, and we check it on size changes to see which direction we’re moving. Next up is the update function. Here’s what it looks like after our change:

update(isOpen) {
  open = isOpen;
  Object.assign(heightSpring, open ? OPEN_SPRING : CLOSE_SPRING);
  doUpdate();
},

Same idea, but now we’re only checking whether open is true or false. You can see these iterations in the “Slide Animate” and “Slide Animate 2” sections of the demo.

Transitions

We’ve talked about animating items already on the page so far, but what about animating an object when it first renders? And when it un-mounts? That’s called a transition, and it’s built into Svelte. The docs do a superb job covering the common use cases, but there’s one thing that’s not yet (directly) supported: spring-based transitions.

Note that what Svelte calls a “transition” and what CSS calls a “transition” are very different things. CSS means transitioning one value to another. Svelte is referring to elements as they “transition” into and out of the DOM entirely (something that CSS doesn’t help with much at all).

To be clear, the work we’re doing here is made for adding spring-based animations into Svelte’s transitions. This is not currently supported, so it requires some tricks and workarounds that we’ll get into. If you don’t care about using springs, then Svelte’s built-in transitions can be used, which are significantly simpler. Again, check the docs for more info.

The way transitions work in Svelte is that we provide a duration in milliseconds (ms) along with an optional easing function, and Svelte repeatedly calls a css callback we provide with a value running from 0 to 1, representing how far along the transition is; we turn that value into whatever CSS we want. For example:

const animateIn = () => {
  return {
    duration: 2000,
    css: t => `transform: translateY(${t * 50 - 50}px)`
  };
};

…is used like this:

<div in:animateIn out:animateOut class="box">
  Hello World!
</div>

When that <div> first mounts, Svelte:

  • calls our animateIn function,
  • rapidly calls the CSS function on our resulting object ahead of time with values from 0 to 1,
  • collects our changing CSS result, then
  • compiles those results into a CSS keyframes animation, which it then applies to the incoming <div>.

This means that our animation will run as a CSS animation — not as JavaScript on the main thread — offering a nice performance boost for free.

The variable t starts at 0, which results in a translation of -50px. As t gets closer to 1, the translation approaches 0, its final value. The out transition is about the same, but in reverse, with the added feature of detecting the box’s current translation value and starting from there. So, if we add it then quickly remove it, the box will start to leave from its current position rather than jumping ahead. However, if we then re-add it while it’s leaving, it will jump, something we’ll talk about in just a moment.

You can run this in the “Basic Transition” section of the demo.

Transitions, but with springs

While there’s a number of easing functions that alter the flow of an animation, there’s no ability to directly use springs. But what we could do is find some way to run a spring ahead of time, collect the resulting values, and then, when our css function is called with a t value running from 0 to 1, look up the right spring value. So, if t is 0, we obviously need the first value from the spring. When t is 0.5, we want the value right in the middle, and so on. We also need a duration, which is number_of_spring_values * 1000 / 60, since there are 60 frames per second.

We won’t write that code here. Instead, we’ll use the solution that already exists in the svelte-helpers library, a project I started. I grabbed one small function from the Svelte codebase, spring_tick, then wrote a separate function to repeatedly call it until it’s finished, collecting the values along the way. That, along with a translation from t to the correct element in that array (or a weighted average if there’s not a direct match), is all we need. Rich Harris gave a helping hand on the latter, for which I’m grateful.
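
That said, just to make the t-to-value idea concrete, here’s a rough, purely illustrative sketch of the lookup concept. This is not svelte-helpers’ actual code (for one thing, the real helper interpolates between neighboring values rather than rounding):

// Assume `frames` holds one spring value per animation frame, collected by
// running the spring ahead of time.
function buildSpringLookup(frames) {
  const duration = (frames.length * 1000) / 60; // one value per frame, at 60 frames per second
  const tickToValue = t => {
    // map t (0..1) onto an index into the pre-computed frames
    const index = Math.round(t * (frames.length - 1));
    return frames[index];
  };
  return { duration, tickToValue };
}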

Animate in

Let’s pretend a big red <div> is a modal that we want to animate in, and out. Here’s what an animateIn function looks like:

import { springIn, springOut } from "svelte-helpers/animation";
const SPRING_IN = { stiffness: 0.1, damping: 0.1 };

const animateIn = node => {
  const { duration, tickToValue } = springIn(-80, 0, SPRING_IN);
  return {
    duration,
    css: t => `transform: translateY(${ tickToValue(t) }px)`
  };
};

We feed the values we want to spring from and to, as well as our spring config, to the springIn function. That gives us a duration, and a tickToValue function for translating the current tick into the value to apply in the CSS. That’s it!

Animate out

Closing the modal is the same thing, with one small tweak:

const SPRING_OUT = { stiffness: 0.1, damping: 0.5, precision: 3 };

const animateOut = node => {
  const current = currentYTranslation(node);
  const { duration, tickToValue } = springOut(current ? current : 0, 80, SPRING_OUT);
  return {
    duration: duration,
    css: t => `transform: translateY(${ tickToValue(t) }px)`
  };
};

Here, we check the modal’s current translation position, then use that as the starting point for the animation. This way, if the user opens and then quickly closes the modal, it’ll exit from its current position rather than teleporting to 0 and then leaving. This works because the animateOut function is called when the element un-mounts, at which point we generate the object with the duration property and css function so the animation can be computed.
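
The currentYTranslation helper isn’t shown in this post. A minimal sketch of what such a helper might look like, assuming it simply reads the element’s computed transform matrix, is:

function currentYTranslation(node) {
  const { transform } = getComputedStyle(node);
  if (!transform || transform === "none") {
    return 0;
  }
  // m42 is the Y translation component of the computed matrix
  return new DOMMatrix(transform).m42;
}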

Sadly, it seems re-mounting the element while it’s in the process of leaving does not work, at least not well. The animateIn function is not called de novo; rather, the original animation is re-used, which means it’ll always start at -80. Fortunately, this almost certainly would not matter for a typical modal component, since a modal is usually removed by clicking on something, like the background overlay, meaning we’re unable to re-show it until that overlay has finished animating out. Besides, repeatedly adding and removing an element with bidirectional transitions might make for a fun demo, but it’s not really common in practice, at least in my experience.

One last quick note on the outgoing spring config: You may have noticed that I set the precision ridiculously high (3 when the default is 0.01). This tells Svelte how close to get to the target value before deciding it is “done.” If you leave the default at 0.01, the modal will (almost) hit its destination, then spend quite a few milliseconds imperceptibly getting closer and closer before deciding it’s done, then remove itself from the DOM. This gives the impression that the modal is stuck, or otherwise delayed. Moving the precision to a value of 3 fixes this. Now the modal animates to where it should go (or close enough), then quickly goes away.

More animation

Let’s add one final tweak to our modal example. Let’s have it fade in and out while animating. We can’t use springs for this, since, again, we need to have one canonical duration for the transition, and our motion spring is already providing that. But spring animations usually make sense for items actually moving, and not much else. So let’s use an easing function to create a fade animation.

If you need help picking the right easing function, be sure to check out this handy visualization from the Svelte docs. I’ll be using the quintOut and quadIn functions.

import { quintOut, quadIn } from "svelte/easing";

Our new animateIn function looks pretty similar. Our css function does what it did before, but also runs the t value through the quintOut easing function to get our opacity value. Since t runs from 0 to 1 during an in transition, and 1 to 0 during an out transition, we don’t have to do anything further to it before applying it to opacity.

const SPRING_IN = { stiffness: 0.1, damping: 0.1 };
const animateIn = node => {
  const { duration, tickToValue } = springIn(-80, 0, SPRING_IN);
  return {
    duration,
    css: t => {
      const transform = tickToValue(t);
      const opacity = quintOut(t);
      return `transform: translateY(${ transform }px); opacity: ${ opacity };`;
    }
  };
};

Our animateOut function is similar, except we want to grab the element’s current opacity value, and force the animation to start there. So, if the element is in the process of fading in, with an opacity of, say, 0.3, we don’t want to reset it to 1, and then fade it out. Instead, we want to fade it out from 0.3.

Multiplying that starting opacity by whatever value the easing function returns accomplishes this. If our t value starts at 1, then 1 * 0.3 is 0.3. If t is 0.95, we do 0.95 * 0.3 to get a value, which is a little less than 0.3, and so on.

Here’s the function:

const animateOut = node => {
  const currentT = currentYTranslation(node);
  const startOpacity = +getComputedStyle(node).opacity;
  const { duration, tickToValue } = springOut(
    currentT ? currentT : 0,
    80,
    SPRING_OUT
  );
  return {
    duration,
    css: t => {
      const transform = tickToValue(t);
      const opacity = quadIn(t);
      return `transform: translateY(${ transform }px); opacity: ${ startOpacity * opacity }`;
    }
  };
};

You can run this example in the demo with the “Spring Transition With Fade” component.

Parting thoughts

Svelte is a lot of fun! In my (admittedly limited) experience, it tends to provide extremely simple primitives, and then leaves you to code up whatever you need. I hope this post has helped explain how the spring animations can be put to good use in your web applications.

And, hey, just a quick reminder to consider accessibility when working with springs, just as you would do with any other animation. Pairing these techniques with something like prefers-reduced-motion can ensure that only folks who prefer animations are the ones who get them.



Integrating TypeScript with Svelte

Svelte is one of the newer JavaScript frameworks and it’s rapidly rising in popularity. It’s a template-based framework, but one which allows for arbitrary JavaScript inside the template bindings; it has a superb reactivity story that’s simple, flexible and effective; and, as an ahead-of-time (AOT) compiled framework, it has incredibly impressive performance and bundle sizes. This post will focus on configuring TypeScript inside of Svelte templates. If you’re new to Svelte, I’d urge you to check out the introductory tutorial and docs.

If you’d like to follow along with the code (or you want to debug what you might be missing in your own project) you can clone the repo. I have branches set up to demonstrate the various pieces I’ll be going over.

Basic TypeScript and Svelte setup

Let’s look at a baseline setup. If you go to the initial-setup branch in the repo, there’s a bare Svelte project set up, with TypeScript. To be clear, TypeScript is only working in stand-alone .ts files. It’s not in any way integrated into Svelte. Accomplishing the TypeScript integration is the purpose of this post.

I’ll go over a few pieces that make Svelte and TypeScript work, mainly since I’ll be changing them in a bit, to add TypeScript support to Svelte templates.

First, I have a tsconfig.json file:

{
  "compilerOptions": {
    "module": "esNext",
    "target": "esnext",
    "moduleResolution": "node"
  },
  "exclude": ["./node_modules"]
}

This file tells TypeScript that I want to use modern JavaScript, use Node module resolution, and exclude node_modules from compilation.

Then, in typings/index.d.ts I have this:

declare module "*.svelte" {
  const value: any;
  export default value;
}

This allows TypeScript to co-exist with Svelte. Without this, TypeScript would issue errors any time a Svelte file is loaded with an import statement. Lastly, we need to tell webpack to process our Svelte files, which we do with this rule in webpack.config.js:

{
  test: /\.(html|svelte)$/,
  use: [
    { loader: "babel-loader" },
    {
      loader: "svelte-loader",
      options: {
        emitCss: true,
      },
    },
  ],
}

All of that is the basic setup for a project using Svelte components and TypeScript files. To confirm everything builds, open up a couple of terminals and run npm start in one, which will start a webpack watch, and npm run tscw in the other, to start a TypeScript watch task. Hopefully both will run without error. To really verify the TypeScript checking is running, you can change:

let x: number = 12;

…in index.ts to:

let x: number = "12";

…and see the error come up in the TypeScript watch. If you want to actually run this, you can run node server in a third terminal (I recommend iTerm2, which allows you to run these terminals inside tabs in the same window) and then hit localhost:3001.

Adding TypeScript to Svelte

Let’s add TypeScript directly to our Svelte component, then see what configuration changes we need to make it work. First go to Helper.svelte, and add lang="ts" to the script tag. That tells Svelte there’s TypeScript inside the script. Now let’s actually add some TypeScript. Let’s change the val prop to be checked as a number, via export let val: number;. The whole component now looks like this:

<script lang="ts">
  export let val: number;
</script>

<h1>Value is: {val}</h1>

Our webpack window should now have an error, but that’s expected.

Showing the terminal with an error output.

We need to tell the Svelte loader how to handle TypeScript. Let’s install the following:

npm i svelte-preprocess svelte-check --save

Now, let’s go to our webpack config file and grab svelte-preprocess:

const sveltePreprocess = require("svelte-preprocess");

…and add it to our svelte-loader:

{
  test: /\.(html|svelte)$/,
  use: [
    { loader: "babel-loader" },
    {
      loader: "svelte-loader",
      options: {
        emitCss: true,
        preprocess: sveltePreprocess({})
      },
    },
  ],
}

OK, let’s restart the webpack process, and it should build.

Add checking

So far, what we have builds, but it doesn’t check. If we have invalid code in a Svelte component, we want that to generate an error. So, let’s go to App.svelte, add the same lang="ts" to the script tag, and then pass an invalid value for the val prop, like this:

<Helper val={"3"} />

If we look in our TypeScript window, there are no errors, but there should be. It turns out we don’t type check our Svelte template with the normal tsc compiler, but with the svelte-check utility we installed earlier. Let’s stop our TypeScript watch and, in that terminal, run npm run svelte-check. That’ll start the svelte-check process in watch mode, and we should see the error we were expecting.

Showing the terminal with a caught error.

Now, remove the quotes around the 3, and the error should go away:

Showing the same terminal window, but no errors.

Neat!

In practice, we’d want both svelte-check and tsc running at the same time so we catch errors in both our TypeScript files and our Svelte templates. There’s a bunch of utilities on npm that can run multiple scripts like this at once, or we can use iTerm2, which is able to split multiple terminals in the same window. I’m using it here to run the server, the webpack build, the tsc build, and the svelte-check build.

Showing iTerm2 window with four open terminals in a two-by-two grid.

This setup is in the basic-checking branch of the repo.

Catching missing props

There’s still one problem we need to solve. If we omit a required prop, like the val prop we just looked at, we still won’t get an error, but we should, since we didn’t assign it a default value in Helper.svelte, and it’s therefore required.

<Helper /> // missing `val` prop

To tell TypeScript to report this as an error, let’s go back to our tsconfig and add two new values:

"strict": true,
"noImplicitAny": false 

The first enables a bunch of TypeScript checks that are disabled by default. The second, noImplicitAny, turns off one of those strict checks. Without that second line, any variable lacking a type—which is implicitly typed as any—is now reported as an error (no implicit any, get it?)
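
To illustrate what noImplicitAny governs, here’s a made-up function (not from the repo):

// With "strict": true and noImplicitAny left on, this is an error:
// "Parameter 'name' implicitly has an 'any' type."
function greet(name) {
  return `Hello, ${name}`;
}

// Setting "noImplicitAny": false silences that particular complaint; adding an
// explicit type satisfies the check either way.
function greetTyped(name: string) {
  return `Hello, ${name}`;
}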

Opinions differ widely on whether noImplicitAny should be set to true. I happen to think it’s too strict, but plenty of people disagree. Experiment and come to your own conclusion.

Anyway, with that new configuration in place, we should be able to restart our svelte-check task and see the error we were expecting.

Showing terminal with a caught error.

This setup is in the better-checking branch of the repo.

Odds and ends

One thing to be aware of is that TypeScript’s mechanism for catching incorrect properties is immediately, and irreversibly switched off for a component if that component ever references $$props or $$restProps. For example, if you were to pass an undeclared prop of, say, junk into the Helper component, you’d get an error, as expected, since that component has no junk property. But this error would immediately go away if the Helper component referenced $$props or $$restProps. The former allows you to dynamically access any prop without having an explicit declaration for it, while $$restProps is for dynamically accessing undeclared props.

This makes sense when you think about it. The purpose of these constructs is to dynamically access a property on the fly, usually for some sort of meta-programming, or to arbitrarily pass attributes on to an html element, which is common in UI libraries. The existence of either of them implies arbitrary access to a component that may not have been declared.

There’s one other common use of $$props, and that’s to access a prop whose name is a reserved word. class is a common example of this:

const className = $$props.class;

…since:

export let class = "";

…is not valid. class is a reserved word in JavaScript but there’s a workaround in this specific case. The following is also a valid way to declare that same prop—thanks to Rich Harris for helping with this.

let className;
export { className as class };

If your only use of $$props is to access a prop whose name is reserved, you can use this alternative, and maintain better type checking for your component.

Parting thoughts

Svelte is one of the most promising, productive, and frankly fun JavaScript frameworks I’ve worked with. The relative ease with which TypeScript can be added is like a cherry on top. Having TypeScript catch errors early for you can be a real productivity boost. Hopefully this post was of some help achieving that.



How to Use CSS Grid for Sticky Headers and Footers

CSS Grid is a collection of properties designed to make layout easier than it’s ever been. Like anything, there’s a bit of a learning curve, but Grid is honestly fun to work with once you get the hang of it. One area where it shines is dealing with headers and footers. With a little adjustment in our thinking, we can pull off headers and footers that behave like they are fixed, or have that “sticky” treatment (not position: sticky, but the kind of footer that hugs the bottom of the screen even if there isn’t enough content to push it there, and is pushed away with more content). 

Hopefully this sparks further interest in modern layouts, and if it does, I can’t recommend Rachel Andrew’s book The New CSS Layout strongly enough: it covers both of the major modern layout techniques, grid and flexbox.

What we’re making

Let’s implement a fairly classic HTML layout that consists of a header, main content and footer.

We’ll make a truly fixed footer, one that stays at the bottom of the viewport where the main content scrolls within itself, as needed, then later update the footer to be a more traditional sticky footer that starts at the bottom of the viewport, even if the main content is small, but gets pushed down as needed. Further, to broaden our exposure to grid, let’s design our main content holder so that it can either span the whole width of the viewport, or take up a nicely centered strip down the middle.

A fixed footer is slightly unusual. Footers are commonly designed to start at the bottom of the viewport, and get pushed down by main content as needed. But a persistent footer isn’t unheard of. Charles Schwab does it on their homepage. Either way, it’ll be fun to implement!

But before we move on, feel free to actually peek at the fixed footer implemented on the Charles Schwab site. Unsurprisingly, it uses fixed positioning, which means it has a hard-coded size. In fact, if we crack open DevTools, we see that right off the bat:

body #qq0 {
  border-top: 4px solid #133568;
  background-color: #eee;
  left: 0;
  right: 0;
  bottom: 0;
  height: 40px!important;
}

Not only that, but there’s the balance of making sure the main content doesn’t get hidden behind that fixed footer, which it does by setting hard-coded paddings (including 15px on the bottom of the <footer> element), margins (including 20px on <ul> in the footer), and even line breaks.

Let’s try to pull this off without any of these restrictions.  

Our baseline styles

Let’s sketch out a bare minimum UI to get us started, then enhance our grid to match our goals. There’s a CodeSandbox below, plus additional ones for the subsequent steps that get us to the end result.

First, let’s do some prep work. We’ll make sure we’re using the whole height of the viewport, so when we add our grid, it’ll be easy to put the footer at the bottom (and keep it there). There’s only going to be one element inside the document’s <body> with an ID of #app, which will hold the <header>, <main> and <footer> elements.

body {
  margin: 0; /* prevents scrollbars */
}


#app {
  height: 100vh;
}

Next, let’s set up our header, main, and footer sections, as well as the grid they’ll all sit in. To be clear, this will not work the way we want right out of the gate. It’s just to get us started, with a base to build from.

body {
  margin: 0;
}


#app {
  height: 100vh;
  
  /* grid container settings */
  display: grid;
  grid-template-columns: 1fr;
  grid-template-rows: auto 1fr auto;
  grid-template-areas: 
    'header'
    'main'
    'footer';
}


#app > header {
  grid-area: header;
}


#app > main {
  grid-area: main;
  padding: 15px 5px 10px 5px;
}


#app > footer {
  grid-area: footer;
}

We’ve created a simple one-column layout, with a width of 1fr. If that 1fr is new to you, it essentially means “take the remaining space” which, in this case, is the entire width of the grid container, #app.

We’ve also defined three rows:

#app {
  /* etc. */
  grid-template-rows: auto 1fr auto;
  /* etc. */
}

The first and third rows, which will be our header and footer, respectively, are sized with auto, which means they’ll take up as much space as needed. In other words: no need for hard-coded sizes! This is a super important detail and a perfect example of how we benefit from using CSS Grid.

The middle row is where we’ll put our content. We’ve assigned it a size of 1fr which, again, just means it takes up all of the remaining space that’s left over from the other two rows. If you’re wondering why we aren’t making it auto as well, it’s because the entire grid spans the viewport’s whole height, so we need one section to grow and fill up any unused space. Note that we do not have, nor will we ever need at any point, any fixed heights, margins, paddings — or even line breaks! — to push things into place. Such is the good life when working with grid!

Shall we try some content?

You’ll notice in the Sandbox that I used React to build this demo, but since this isn’t a post about React, I won’t belabor those details; React has absolutely nothing to do with any of the CSS Grid work in this post. I’m only using it as an easy way to navigate between different chunks of markup. If you hate React, that’s fine: hopefully you can ignore it in this post.

We have Header, Main and Footer components that render the expected <header>, <main> and <footer> elements, respectively. And, of course, this all sits inside our #app container. Yes, in theory, #app should be an <article> element, semantically speaking, but that’s always looked weird to me. I just wanted to convey these details so we’re all on the same page as we plow ahead.

For the actual content, I have Billing and Settings sections that you can navigate between in the header. They both render fake, static content, and are only meant to show our layout in action. The Settings section will be the content that we put in a centered strip on our page, Billing will be the one that spans our whole page.

Here’s the Sandbox with what we have so far.

The Billing section looks good, but the Settings section pushes our footer off screen. Not only that, but if we scroll, the entire page scrolls, causing us to lose our header. That may be desirable in some cases, but we want both the header and footer to stay in view, so let’s fix that.

Fixed header, fixed footer

When we initially set up our grid, we gave it a height of 100vh, which is the entire height of the viewport. We then assigned the rows for the header and footer an auto height, and the main row a height of 1fr to take up the remaining space. Unfortunately, when content exceeds the space available, it expands beyond the viewport bounds, pushing our footer down and out of view.

The fix here is trivial: adding overflow: auto will cause our <main> element to scroll, while keeping our <header> and <footer> elements in place.

#app > main {
  grid-area: main;
  overflow: auto;
  padding: 15px 5px 10px 5px;
}

Here’s the updated demo that puts this to use.

Adjustable width main section

We want our <main> element to either span the whole width of the viewport, or be centered in a 600px space. You might think we could simply make <main> a fixed 600px width, with auto margins on either side. But since this is a post about grid, let’s use moar grid. (Plus, as we’ll see later, a fixed width won’t work anyway).

To achieve our centered 600px element, we’ll actually make the <main> element a grid container. That’s right, a grid within a grid! Nesting grids is a totally legit approach, and will even get easier in the future when subgrid is officially supported across browsers. In this scenario, we’ll make <main> a grid with three column tracks of 1fr 600px 1fr or, stated simply, 600px in the middle, with the remaining space equally divided on the sides.

#app > main {
  display: grid;
  grid-template-rows: 1fr;
  grid-template-columns: 1fr 600px 1fr;
}

Now let’s position our content in the grid. Our different modules all render in a <section> child. Let’s say that, by default, content will occupy the middle section, unless it has a .full class, in which case it will span the entire grid width. We won’t use named areas here, and instead specify precise grid coordinates of the form [row-start] / [col-start] / [row-end] / [col-end]:

#app > section {
  grid-area: 1 / 2 / 1 / 3;
}


#app > section.full {
  grid-area: 1 / 1 / 1 / 4
}

You might be surprised to see a col-end value of 4, given that there’s only three columns. This is because the column and row values are column and row grid lines. It takes four grid lines to draw three grid columns. 

Our <section> will always be in the first row, which is the only row. By default it’ll span column lines 2 through 3, which is the middle column, unless the section has a full class on it, in which case it’ll span column lines 1 through 4, which is all three columns.

Here’s an updated demo with this code. It’ll probably look good, depending on your CodeSandbox layout, but there’s still a problem. If you shrink the display to smaller than 600px, the content is abruptly truncated. We don’t really want a fixed 600px width in the middle. We want a width of up to 600px. It turns out grid has just the tool for us: the minmax() function. We specify a minimum width and a maximum width, and the grid will compute a value that falls in that range. That’s how we prevent the content from blowing out of the grid.

All we need to do is swap out that 600px value with minmax(0, 600px):

main {
  display: grid;
  grid-template-rows: 1fr;
  grid-template-columns: 1fr minmax(0, 600px) 1fr;
}

Here’s the demo for the finished code.

One more approach: The traditional fixed footer

Earlier, we decided to prevent the footer from being pushed off the screen and did that by setting the <main> element’s overflow property to auto.

But, as we briefly called out, that might be a desirable effect. In fact, it’s more of a classic “sticky” footer that solves that annoying issue, and places the footer on the bottom edge of the viewport when the content is super short.

Hey, get back to the bottom!

How could we keep all of our existing work, but allow the footer to get pushed down, instead of fixing itself to the bottom in persistent view?

Right now our content is in a grid with this HTML structure:

<div id="app">
  <header />
  <main>
    <section />
  </main>
  <footer />
</div>

…where <main> is a grid container nested within the #app grid container, that contains one row and three columns that we use to position our module’s contents, which go in the <section> tag.

 Let’s change it to this:

<div id="app">
  <header />
  <main>
    <section />
    <footer />
  </main>
</div>

…and incorporate <footer> into the <main> element’s grid. We’ll start by updating our parent #app grid so that it now consists of two rows instead of three:

#app {
  /* same as before */


  grid-template-columns: 1fr;
  grid-template-rows: auto 1fr;
  grid-template-areas: 
    'header'
    'main';
}

Just two rows, one for the header, and the other for everything else. Now let’s update the grid inside our <main> element:

#app > main {
  display: grid;
  grid-template-rows: 1fr auto;
  grid-template-columns: 1fr minmax(0, 600px) 1fr;
}

We’ve introduced a new auto-sized row. That means we now have a 1fr row for our content, that holds our <section>, and an auto row for the footer.

Now we position our <footer> inside this grid, instead of directly in #app:

#app > main > footer {
  grid-area: 2 / 1 / 2 / 4;
}

Since <main> is the element that has scrolling, and since this element now has our footer, we’ve achieved the sticky footer we want! This way, if <main> has content that exceeds the viewport, the whole thing will scroll, and that scrolling content will now include our footer, which sits at the very bottom of the screen as we’d expect.

Here’s an updated demo. Note that the footer will be at the bottom of the screen if possible; otherwise it’ll scroll as needed. 

I made a few other small changes, like minor adjustments to paddings here and there; we can’t have any left or right paddings on <main>, because the <footer> would no longer go edge-to-edge.

I also made a last-minute adjustment during final edits to the <section> element—the one we enabled adjustable width content on. Specifically, I set its display to flex, its width to 100%, and its immediate descendant to overflow: auto. I did this so the <section> element’s content can scroll horizontally, within itself, if it exceeds our grid column boundary, but without allowing any vertical scrolling.

Without this change, the work we did would amount to the fixed footer approach we covered earlier. Making <section> a flex container forces its immediate child — the <div> that contains the content — to take up all of the available vertical space. And, of course, setting that child div to overflow: auto enables scrolling. If you’re wondering why I didn’t just set the section’s overflow-x to auto and overflow-y to visible, well, it turns out that’s not possible.
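
In CSS terms, that last-minute adjustment amounts to something like the following sketch (the exact selectors depend on your markup):

#app main > section {
  display: flex;
  width: 100%;
}

#app main > section > div {
  /* horizontal overflow scrolls inside the section, not the page */
  overflow: auto;
}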

Parting thoughts 

We haven’t done anything revolutionary in this post, and certainly nothing that couldn’t be accomplished before CSS Grid. Our fixed width <main> container could have been a block element with a max-width value of 600px, and auto margins on the left and right. Our fixed footer could have been made with position: fixed (just make sure the main content doesn’t overlap with it). And, of course, there are various ways to get a more traditional “sticky footer.”

But CSS Grid provides a single, uniform layout mechanism to accomplish all of this, and it’s fun to work with — honestly fun. In fact, the idea of moving the footer from fixed to sticky wasn’t even something I planned at first. I threw it in at the last minute because I thought the post was a bit too light without it. It was trivial to accomplish, basically moving grid rows around, not unlike putting lego blocks together. And again, these UIs were trivial. Imagine how brightly grid will shine with more ambitious designs!



Making Sense of react-spring

Animation is one of the trickier things to get right with React. In this post, I’ll try to provide the introduction to react-spring I wish I had when I first started out, then dive into some interesting use cases. While react-spring isn’t the only animation library for React, it’s one of the more popular (and better) ones.

I’ll be using the newest version 9 which, as of this writing, is in release candidate status. If it’s not fully released by the time you read this, be sure to install it with react-spring@next. From what I’ve seen, and from what the lead maintainer has told me, the code is incredibly stable. The only issue I’ve seen is a slight bug when used with concurrent mode, which can be tracked in the GitHub repo.

react-spring redux

Before we get to some interesting application use cases, let’s take a whirlwind intro. We’ll cover springs, height animation, and then transitions. I’ll have a working demo at the end of this section, so don’t worry if things get a little confusing along the way. 

springs

Let’s consider the canonical “Hello world” of animation: fading content in and out. Let’s stop for a moment and consider how we’d switch opacity on and off without any kind of animation. It’d look something like this:

import { useState } from "react";

export default function App() {
  const [showing, setShowing] = useState(false);
  return (
    <div>
      <div style={{ opacity: showing ? 1 : 0 }}>
        This content will fade in and fade out
      </div>
      <button onClick={() => setShowing(val => !val)}>Toggle</button>
      <hr />
    </div>
  );
}

Easy, but boring. How do we animate the changing opacity? Wouldn’t it be nice if we could declaratively set the opacity we want based on state, like we do above, except have those values animate smoothly? That’s what react-spring does. Think of react-spring as our middle-man that launders our changing style values, so it can produce the smooth transition between animating values we want. Like this:

const [showA, setShowA] = useState(false);


const fadeStyles = useSpring({
  config: { ...config.stiff },
  from: { opacity: 0 },
  to: {
    opacity: showA ? 1 : 0
  }
});

We specify our initial style values with from, and we specify the current value in the to section, based on our current state. The return value, fadeStyles, contains the actual style values we apply to our content. There’s just one last thing we need…

You might think you could just do this:

<div style={fadeStyles}>

…and be done. But instead of using a regular div, we need to use a react-spring div that’s created from the animated export. That may sound confusing, but all it really means is this:

<animated.div style={fadeStyles}>

And that’s it. 

animating height 

Depending on what we’re animating, we may want the content to slide up and down, from a zero height to its full size, so the surrounding contents adjust and flow smoothly into place. You might wish that we could just copy the above, with height going from zero to auto, but alas, you can’t animate to auto height. That neither works with vanilla CSS, nor with react-spring. Instead, we need to know the actual height of our contents, and specify that in the to section of our spring.

We need to have the height of any arbitrary content on the fly so we can pass that value to react-spring. It turns out the web platform has something specifically designed for that: a ResizeObserver. And the support is actually pretty good! Since we’re using React, we’ll of course wrap that usage in a hook. Here’s what mine looks like:

import { useRef, useState, useLayoutEffect } from "react";

export function useHeight({ on = true /* no value means on */ } = {} as any) {
  const ref = useRef<any>();
  const [height, set] = useState(0);
  const heightRef = useRef(height);
  const [ro] = useState(
    () =>
      new ResizeObserver(packet => {
        if (ref.current && heightRef.current !== ref.current.offsetHeight) {
          heightRef.current = ref.current.offsetHeight;
          set(ref.current.offsetHeight);
        }
      })
  );
  useLayoutEffect(() => {
    if (on && ref.current) {
      set(ref.current.offsetHeight);
      ro.observe(ref.current, {});
    }
    return () => ro.disconnect();
  }, [on, ref.current]);
  return [ref, height as any];
}

We can optionally provide an on value that switches measuring on and off (which will come in handy later). When on is true, we tell our ResizeObserver to observe our content. We return a ref that needs to be applied to whatever content we want measured, as well as the current height.

Let’s see it in action.

const [heightRef, height] = useHeight();
const slideInStyles = useSpring({
  config: { ...config.stiff },
  from: { opacity: 0, height: 0 },
  to: {
    opacity: showB ? 1 : 0,
    height: showB ? height : 0
  }
});

useHeight gives us a ref and a height value of the content we’re measuring, which we pass along to our spring. Then we apply the ref and apply the height styles.

<animated.div style={{ ...slideInStyles, overflow: "hidden" }}>
  <div ref={heightRef}>
    This content will fade in and fade out with sliding
  </div>
</animated.div>

Oh, and don’t forget to add overflow: hidden to the container. That allows us to properly contain our adjusting height values.

animating transitions

Lastly, let’s look at adding and removing animating items to and from the DOM. We already know how to animate changing values of an item that exists and is staying in the DOM, but to animate the adding, or removing of items, we need a new hook: useTransition.

If you’ve used react-spring before, this is one of the few places where version 9 has some big changes to its API. Let’s take a look.

In order to animate a list of items, like this:

const [list, setList] = useState([]);

…we’ll declare our transition function like this:

const listTransitions = useTransition(list, {
  config: config.gentle,
  from: { opacity: 0, transform: "translate3d(-25%, 0px, 0px)" },
  enter: { opacity: 1, transform: "translate3d(0%, 0px, 0px)" },
  leave: { opacity: 0, height: 0, transform: "translate3d(25%, 0px, 0px)" },
  keys: list.map((item, index) => index)
});

As I alluded earlier, the return value, listTransitions, is a function. react-spring is tracking the list array, keeping tabs on the items which are added and removed. We call the listTransitions function, provide a callback accepting a single styles object and a single item, and react-spring will call it for each item in the list with the right styles, based on whether it’s newly added, newly removed, or just sitting in the list.
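
The rendering side of that isn’t shown above; in the JSX it looks roughly like this (the item markup here is just a placeholder):

{listTransitions((styles, item) => (
  <animated.div style={styles}>{item}</animated.div>
))}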

Note the keys section: This allows us to tell react-spring how to identify objects in the list. In this case, I decided to tell react-spring that an item’s index in the array uniquely defines that item. Normally this would be a terrible idea, but for now, it lets us see the feature in action. In the demo below, the “Add item” button adds an item to the end of the list when clicked, and the “Remove last item” button removes the most recently added item from the list. So, if you type in the input box then quickly hit the add button then the remove button, you’ll see the same item smoothly begin to enter, and then immediately, from whatever stage in the animation it’s at, begin to leave. Conversely, if you add an item, then quickly hit the remove button and the add button, the same item will begin to slide off, then abruptly stop in place, and slide right back to where it was.

Here’s that demo

Whew, that was a ton of words! Here’s a working demo, showing everything we just covered in action.

Odds and Ends

Did you notice how, when you slide down the content in the demo, it sort of bounces into place, like… a spring? That’s where the name comes from: react-spring interpolates our changing values using spring physics. It doesn’t simply chop the value changing into N equal deltas that it applies over N equal delays. Instead, it uses a more sophisticated algorithm that produces that spring-like effect, which will appear more natural.

The spring algorithm is fully configurable, and it comes with a number of presets you can take off the shelf — the demo above uses the stiff and gentle presets. See the docs for more info.

Note also how I’m animating values inside of translate3d values. As you can see, the syntax isn’t the most terse, and so react-spring provides some shortcuts. There’s documentation on this, but for the remainder of this post I’ll continue to use the full, non-shortcut syntax for the sake of keeping things as clear as possible. 
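
If you’re curious, those shortcuts let animated components accept transform values like x and y directly in the style object, something like this sketch (showing is a stand-in for whatever state drives the animation; check the docs for the exact API):

const styles = useSpring({
  x: showing ? 0 : -25, // translated into a transform under the hood
  opacity: showing ? 1 : 0
});

return <animated.div style={styles}>Slides and fades</animated.div>;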

I’ll close this section by calling attention to the fact that, when you slide the content up in the demo above, you’ll likely see the content underneath it get a little jumpy at the very end. This is a result of that same bouncing effect. It looks sharp when the content bounces down and into position, but less so when we’re sliding the content up. Stay tuned to see how we can switch it off. (Spoiler, it’s the clamp property).
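
As a preview, switching that bounce off amounts to adding clamp to the spring config, something like this sketch:

const slideInStyles = useSpring({
  // clamp stops the spring as soon as it reaches its target, preventing overshoot
  config: { ...config.stiff, clamp: true },
  from: { opacity: 0, height: 0 },
  to: { opacity: showB ? 1 : 0, height: showB ? height : 0 }
});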

A few things to consider with these sandboxes

Code Sandbox uses hot reloading. As you change the code, changes are usually reflected immediately. This is cool, but can wreak havoc on animations. If you start tinkering and then see weird, ostensibly incorrect behavior, try refreshing the sandbox.

The other sandboxes in this post will make use of a modal. For reasons I haven’t quite been able to figure out, when the modal is open, you won’t be able to modify any code — the modal refuses to give up focus. So, be sure to close the modal before attempting any changes. 

Now let’s build something real

Those are the basic building blocks of react-spring. Let’s use them to build something more interesting. You might think, given everything above, that react-spring is pretty simple to use. Unfortunately, in practice, it can be tricky to figure out some subtle things you need to get right. The rest of this post will dive into many of these details. 

Prior blog posts I’ve written have been in some way related to my booklist side project. This one will be no different — it’s not an obsession; it’s just that that project happens to have a publicly-available GraphQL endpoint, and plenty of existing code that can be leveraged, making it an obvious target.

Let’s build a UI that allows you to open a modal and search for books. When the results come in, you can add them to a running list of selected books that display beneath the modal. When you’re done, you can close the modal and click a button to find books similar to the selection.

We’ll start with a functioning UI then animate the pieces step by step, including interactive demos along the way.

If you’re really eager to see what the final result will look like, or you’re already familiar with react-spring and want to see if I’m covering anything you don’t already know, here it is (it won’t win any design awards, I’m well aware). The rest of this post will cover the journey getting to that end state, step by step.

Animating our modal

Let’s start with our modal. Before we start adding any kind of data, let’s get our modal animating nicely.  Here’s what a basic, un-animated modal looks like. I’m using Ryan Florence’s Reach UI (specifically the modal component), but the idea will be the same no matter what you use to build your modal. We’d like to get our backdrop to fade in, and also transition our modal content.

Since a modal is conditionally rendered based on some sort of “open” property, we’ll use the useTransition hook. I was already wrapping the Reach UI modal with my own modal component, and rendering either nothing, or the actual modal based on the isOpen property. We just need to go through the transition hook to get it animating.

Here’s what the transition hook looks like:

const modalTransition = useTransition(!!isOpen, {
  config: isOpen ? { ...config.stiff } : { duration: 150 },
  from: { opacity: 0, transform: `translate3d(0px, -10px, 0px)` },
  enter: { opacity: 1, transform: `translate3d(0px, 0px, 0px)` },
  leave: { opacity: 0, transform: `translate3d(0px, 10px, 0px)` }
});

There’s not too many surprises here. We want to fade things in and provide a slight vertical transition based on whether the modal is active or not. The odd piece is this:

config: isOpen ? { ...config.stiff } : { duration: 150 },

I want to only use spring physics if the modal is opening. The reason for this — at least in my experience — is when you close the modal, the backdrop takes too long to completely vanish, which leaves the underlying UI un-interactive for too long. So, when the modal opens, it’ll nicely bounce into place with spring physics, and when closed, it’ll quickly vanish in 150ms.

And, of course, we’ll render our content via the transition function our hook returns. Notice that I’m plucking the opacity style off of the styles object to apply to the backdrop, and then applying all the animating styles to the actual modal content.

return modalTransition(
  (styles, isOpen) =>
    isOpen && (
      <AnimatedDialogOverlay
        allowPinchZoom={true}
        initialFocusRef={focusRef}
        onDismiss={onHide}
        isOpen={isOpen}
        style={{ opacity: styles.opacity }}
      >
        <AnimatedDialogContent
          style={{
            border: "4px solid hsla(0, 0%, 0%, 0.5)",
            borderRadius: 10,
            maxWidth: "400px",
            ...styles
          }}
        >
          <div>
            <div>
              <StandardModalHeader caption={headerCaption} onHide={onHide} />
              {children}
            </div>
          </div>
        </AnimatedDialogContent>
      </AnimatedDialogOverlay>
    )
);
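
A quick aside on those component names: AnimatedDialogOverlay and AnimatedDialogContent aren’t Reach UI exports; the usual approach is to wrap Reach UI’s dialog pieces with react-spring’s animated helper, along these lines (a sketch, the actual wrapping may differ slightly):

import { DialogOverlay, DialogContent } from "@reach/dialog";
import { animated } from "react-spring";

// Wrap Reach UI's dialog primitives so react-spring can drive their styles
const AnimatedDialogOverlay = animated(DialogOverlay);
const AnimatedDialogContent = animated(DialogContent);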

Base setup

Let’s start with the use case I described above. If you’re following along with the demos, here’s a full demo of everything working, but with zero animation. Open the modal, and search for anything (feel free to just hit Enter in the empty textbox). You should hit my GraphQL endpoint, and get back search results from my own personal library.

The rest of this post will focus on adding animations to the UI, which will give us a chance to see a before and after, and (hopefully) observe how much nicer some subtle, well-placed animations can make a UI. 

Animating the modal size

Let’s start with the modal itself. Open it and search for, say, “jefferson.” Notice how the modal abruptly becomes larger to accommodate the new content. Can we have the modal animate to larger (and smaller) sizes? Of course. Let’s dig out our trusty useHeight hook, and see what we can do.

Unfortunately, we can’t simply slap the height ref on a wrapper in our content, and then stick the height in a spring. If we did this, we’d see the modal slide into its initial size. We don’t want that; we want our fully formed modal to appear in the right size, and re-size from there.

What we want to do is wait for our modal content to be rendered in the DOM, then set our height ref, and switch on our useHeight hook to start measuring. Oh, and we want our initial height to be set immediately, and not animate into place. It sounds like a lot, but it’s not as bad as it seems.

Let’s start with this:

const [heightOn, setHeightOn] = useState(false);
const [sizingRef, contentHeight] = useHeight({ on: heightOn });
const uiReady = useRef(false);

We have some state for whether we’re measuring our modal’s height. This will be set to true when the modal is in the DOM. Then we call our useHeight hook with the on property, which tells it whether it should be actively measuring. Lastly, there’s a ref to track whether our UI is ready, so we know when we can begin animating.

First things first: how do we even know when our modal is actually rendered in the DOM? It turns out we can use a ref that lets us know. We’re used to doing <div ref={someRef}> in React, but you can actually pass a function, which React will call with the DOM node after it’s rendered. Let’s define that function now.

const activateRef = ref => {
  sizingRef.current = ref;
  if (!heightOn) {
    setHeightOn(true);
  }
};

That sets our height ref and switches on our useHeight hook. We’re almost done!

Now how do we make that initial height change immediate, rather than animated? The useSpring hook has two new properties we’ll look at now. It has an immediate property which tells it to make state changes immediate instead of animating them. It also has an onRest callback which fires when a state change finishes.

Let’s leverage both of them. Here’s what the final hook looks like:

const heightStyles = useSpring({
  immediate: !uiReady.current,
  config: { ...config.stiff },
  from: { height: 0 },
  to: { height: contentHeight },
  onRest: () => (uiReady.current = true)
});

Once any height change is completed, we set the uiReady ref to true. So long as it’s false, we tell react-spring to make immediate changes. So, when our modal first mounts, contentHeight is zero (useHeight will return zero if there’s nothing to measure) and the spring is just chilling, doing nothing. When the modal switches to open and actual content is rendered, our activateRef ref is called, our useHeight will switch on, we’ll get an actual height value for our contents, our spring will set it “immediately,” and, finally, the onRest callback will trigger, and future changes will be animated. Phew!

I should point out that if, in some alternate use case, we did immediately have a correct height upon the first render, we’d be able to simplify the above hook to just this:

const heightStyles = useSpring({
  to: {
    height: contentHeight
  },
  config: config.stiff,
})

…which can actually be further simplified to this:

const heightStyles = useSpring({
  height: contentHeight,
  config: config.stiff,
})

Our hook would render initially with the correct height, and any changes to that value would be animated. But since our modal renders before it’s actually shown, we can’t avail ourselves of this simplification.

Keen readers might wonder what happens when you close the modal. Well, the content will un-render and the height hook will just stick with the last reported height, though still “observing” a DOM node that’s no longer in the DOM. If you’re worried about that, feel free to clean things up better than I have here, perhaps with something like this:

useLayoutEffect(() => {
  if (!isOpen) {
    setHeightOn(false);
  }
}, [isOpen]);

That’ll cancel the ResizeObserver for that DOM node and fix the memory leak.
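
If you don’t already have a useHeight hook handy, a minimal sketch built on ResizeObserver, matching the { on } option and the [ref, height] return shape used here, might look like this (the real hook may differ in its details):

import { useState, useRef, useLayoutEffect } from "react";

export function useHeight({ on = true } = {}) {
  const ref = useRef();
  const [height, setHeight] = useState(0);
  const observerRef = useRef(null);

  useLayoutEffect(() => {
    if (!on || !ref.current) {
      return;
    }
    if (!observerRef.current) {
      // Report the new height whenever the observed node resizes
      observerRef.current = new ResizeObserver(entries => {
        setHeight(entries[0].contentRect.height);
      });
    }
    const observer = observerRef.current;
    const node = ref.current;
    observer.observe(node);
    return () => observer.unobserve(node);
  }, [on]);

  return [ref, height];
}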

Animating the results

Next, let’s look at animating the changing of the results within the modal. If you run a few searches, you should see the results immediately swap in and out.

Take a look at the SearchBooksContent component in the searchBooks.js file. Right now, we have const booksObj = data?.allBooks; which plucks the appropriate result set off of the GraphQL response, and then later renders them.

{booksObj.Books.map(book => (
  <SearchResult
    key={book._id}
    book={book}
    selected={selectedBooksMap[book._id]}
    selectBook={selectBook}
    dispatch={props.dispatch}
  />
))}

As fresh results come back from our GraphQL endpoint, this object will change, so let’s take advantage of that fact: pass it to the useTransition hook from before, and define some transition animations.

const resultsTransition = useTransition(booksObj, {
  config: { ...config.default },
  from: {
    opacity: 0,
    position: "static",
    transform: "translate3d(0%, 0px, 0px)"
  },
  enter: {
    opacity: 1,
    position: "static",
    transform: "translate3d(0%, 0px, 0px)"
  },
  leave: {
    opacity: 0,
    position: "absolute",
    transform: "translate3d(90%, 0px, 0px)"
  }
});

Note the change from position: static to position: absolute. An outgoing result set with absolute positioning has no effect on its parent’s height, which is what we want. Our parent will size to the new contents and, of course, our modal will nicely animate to the new size based on the work we did above.

As before, we’ll use our transition function to render our content:

<div className="overlay-holder">
  {resultsTransition((styles, booksObj) =>
    booksObj?.Books?.length ? (
      <animated.div style={styles}>
        {booksObj.Books.map(book => (
          <SearchResult
            key={book._id}
            book={book}
            selected={selectedBooksMap[book._id]}
            selectBook={selectBook}
            dispatch={props.dispatch}
          />
        ))}
      </animated.div>
    ) : null
  )}
</div>

Now new result sets will fade in, while outgoing sets of results will fade (and slightly slide) out to give the user an extra cue that things have changed.

Of course, we also want to animate any messaging, such as when there’s no results, or when the user has selected everything in the result set. The code for that is pretty repetitive with everything else in here, and since this post is already getting long, I’ll leave the code in the demo.

Animating selected books (out)

Right now, selecting a book instantly and abruptly removes it from the list. Let’s apply our usual fade out while sliding it out to the right. And as the item is sliding out to the right (via transform), we probably want its height to animate to zero so the list can smoothly adjust to the exiting item, rather than have it slide out, leaving behind an empty box, which then immediately disappears.

By now, you probably think this is easy. You’re expecting something like this:

const SearchResult = props => {
  let { book, selectBook, selected } = props;


  const initiallySelected = useRef(selected);
  const [sizingRef, currentHeight] = useHeight();


  const heightStyles = useSpring({
    config: { ...config.stiff, clamp: true },
    from: {
      opacity: initiallySelected.current ? 0 : 1,
      height: initiallySelected.current ? 0 : currentHeight,
      transform: "translate3d(0%, 0px, 0px)"
    },
    to: {
      opacity: selected ? 0 : 1,
      height: selected ? 0 : currentHeight,
      transform: `translate3d(${selected ? "25%" : "0%"},0px,0px)`
    }
  }); 

This uses our trusted useHeight hook to measure our content, using the selected value to animate the item that’s leaving. We’re tracking the selected prop and animating to, or starting with, a height of 0 if it’s already selected, rather than simply removing the item and using a transition. This allows different result sets that have the same book to correctly decline to display it, if it’s selected.

This code does work. Give it a try in this demo.

But there’s a rub. If you select most of the books in a result set, there will be a kind of bouncy animation chain as you continue selecting. The book starts animating out of the list, and then the modal’s height itself starts trailing behind.

This looks goofy in my opinion, so let’s see what we can do about it. 

We’ve already seen how we can use the immediate property to turn off all spring animations. We’ve also seen the onRest callback fire when an animation finishes, and I’m sure you won’t be surprised to learn there’s an onStart callback which does what you’d expect. Let’s use those pieces to allow the content inside our modal to “turn off” the modal’s height animation when the content itself is animating heights.

First, we’ll add a ref to our modal that switches animation on and off, along with a memoized packet of functions for toggling it.

const animatModalSizing = useRef(true);
const modalSizingPacket = useMemo(() => {
  return {
    disable() {
      animatModalSizing.current = false;
    },
    enable() {
      animatModalSizing.current = true;
    }
  };
}, []);

Now, let’s tie it into our height spring from before.

const heightStyles = useSpring({
  immediate: !uiReady.current || !animatModalSizing.current,
  config: { ...config.stiff },
  from: { height: 0 },
  to: { height: contentHeight },
  onRest: () => (uiReady.current = true)
});

Great. Now how do we get that modalSizingPacket down to our content, so whatever we’re rendering can actually switch off the modal’s animation, when needed? Context of course! Let’s create a piece of context.

export const ModalSizingContext = createContext(null);

Then, we’ll wrap all of our modal’s content with it:

<ModalSizingContext.Provider value={modalSizingPacket}>
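
In context, that wrapping sits around the modal markup (shown again in full a bit later), something like this sketch:

<ModalSizingContext.Provider value={modalSizingPacket}>
  <animated.div style={{ overflow: "hidden", ...heightStyles }}>
    <div style={{ padding: "10px" }} ref={activateRef}>
      <StandardModalHeader caption={headerCaption} onHide={onHide} />
      {children}
    </div>
  </animated.div>
</ModalSizingContext.Provider>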

Now our SearchResult component can grab it:

const { enable: enableModalSizing, disable: disableModalSizing } = useContext(
  ModalSizingContext
);

…and tie it right into its spring:

const heightStyles = useSpring({
  config: { ...config.stiff, clamp: true },
  from: {
    opacity: initiallySelected.current ? 0 : 1,
    height: initiallySelected.current ? 0 : currentHeight,
    transform: "translate3d(0%, 0px, 0px)"
  },
  to: {
    opacity: selected ? 0 : 1,
    height: selected ? 0 : currentHeight,
    transform: `translate3d(${selected ? "25%" : "0%"},0px,0px)`
  },
  onStart() {
    if (uiReady.current) {
      disableModalSizing();
    }
  },
  onRest() {
    uiReady.current = true;
    setTimeout(() => {
      enableModalSizing();
    });
  }
});

Note the setTimeout at the very end. I’ve found it necessary to make sure the modal’s animation is truly shut off until everything’s settled.

I know that was a lot of code. If I moved too fast, be sure to check out the demo to see all this in action.

Animating selected books (in)

Let’s wrap this blog post up by animating the selected books that appear on the main screen, beneath the modal. Let’s have newly selected books fade in while sliding in from the left, then slide out to the right while their height shrinks to zero when they’re removed.

We’ll use a transition, but there’s a wrinkle: each of the selected books needs its own, individual height. Previously, when we reached for useTransition, we had a single from and to object that was applied to entering and exiting items.

Here, we’ll use an alternate form instead, allowing us to provide a function for the to object. It’s invoked with the actual animating item — a book object in this case — and we return the to object that contains the animating values. Additionally, we’ll keep track of a simple lookup object which maps each book’s ID to its height, and then tie that into our transition.

First, let’s create our map of height values:

const [displaySizes, setDisplaySizes] = useState({});
const setDisplaySize = useCallback(
  (_id, height) => {
    setDisplaySizes(displaySizes => ({ ...displaySizes, [_id]: height }));
  },
  [setDisplaySizes]
);

We’ll pass the setDisplaySizes update function to the SelectedBook component, and use it with useHeight to report back the actual height of each book.

const SelectedBook = props => {
  let { book, removeBook, styles, setDisplaySize } = props;
  const [ref, height] = useHeight();
  useLayoutEffect(() => {
    height && setDisplaySize(book._id, height);
  }, [height]);

Note how we check that the height value has been updated with an actual value before calling setDisplaySize. That’s so we don’t prematurely set the value to zero before the correct height is known, which would cause our content to animate down, rather than sliding in fully-formed. Instead, no height will initially be set, so our content will default to height: auto. When our hook fires, the actual height will be set. When an item is removed, the height will animate down to zero, as it fades and slides out.

Here’s the transition hook:

const selectedBookTransitions = useTransition(selectedBooks, {
  config: book => ({
    ...config.stiff,
    clamp: !selectedBooksMap[book._id]
  }),
  from: { opacity: 0, transform: "translate3d(-25%, 0px, 0px)" },
  enter: book => ({
    opacity: 1,
    height: displaySizes[book._id],
    transform: "translate3d(0%, 0px, 0px)"
  }),
  update: book => ({ height: displaySizes[book._id] }),
  leave: { opacity: 0, height: 0, transform: "translate3d(25%, 0px, 0px)" }
});

Notice the update callback. It will adjust our content if any of the heights change. (You can force this in the demo by resizing the results pane after selecting a bunch of books.)

For a little icing on top of our cake, note how we’re conditionally setting the clamp property of our hook’s config. As content is animating in, we have clamp off, which produces a nice (at least in my opinion) bouncing effect. But when leaving, it animates down, but stays gone, without any of the jitteriness we saw before with clamping turned off.

Bonus: Simplifying the modal height animation while fixing a bug in the process

After finishing this post, I found a bug in the modal implementation where, if the modal height changes while it’s not shown, you’ll see the old, now incorrect height the next time you open the modal, followed by the modal animating to the correct height. To see what I mean, have a look at this update to the demo. You’ll notice new buttons to clear, or force results into the modal when it’s not visible. Open the modal, then close it, click the button to add results, and re-open it — you should see it awkwardly animate to the new, correct height.

Fixing this also allows us to simplify the code for the height animation from before. The problem is that our modal currently continues to render in the React component tree, even when not shown. The height hook is still “running,” only to be updated the next time the modal is shown, rendering the children. What if we moved the modal’s children into their own, dedicated component, and brought the height hook with it? That way, the hook and animation spring will only render when the modal is shown, and can start with correct values. It’s less complicated than it seems. Right now our modal component has this:

<animated.div style={{ overflow: "hidden", ...heightStyles }}>
  <div style={{ padding: "10px" }} ref={activateRef}>
    <StandardModalHeader
      caption={headerCaption}
      onHide={onHide}
    />
    {children}
  </div>
</animated.div>

Let’s make a new component that renders this markup, including the needed hooks and refs:

const ModalContents = ({ header, contents, onHide, animatModalSizing }) => {
  const [sizingRef, contentHeight] = useHeight();
  const uiReady = useRef(false);

  const heightStyles = useSpring({
    immediate: !uiReady.current || !animatModalSizing.current,
    config: { ...config.stiff },
    from: { height: 0 },
    to: { height: contentHeight },
    onRest: () => (uiReady.current = true)
  });

  return (
    <animated.div style={{ overflow: "hidden", ...heightStyles }}>
      <div style={{ padding: "10px" }} ref={sizingRef}>
        <StandardModalHeader caption={header} onHide={onHide} />
        {contents}
      </div>
    </animated.div>
  );
};

This is a significant reduction in complexity compared to what we had before. We no longer have that activateRef function, and we no longer have the heightOn state that was set in activateRef. This component is only rendered by the modal if it’s being shown, which means we’re guaranteed to have content, so we can just add a regular ref to our div. Unfortunately, we do still need our uiReady ref, since even now we don’t have our height on the first render; that’s not available until the useHeight layout effect fires immediately after the first render finishes.

And, of course, this solves the bug from before. No matter what happens when the modal is closed, when it re-opens, this component will render anew, and our spring will start with a fresh value for uiReady.

Parting thoughts

If you’ve stuck with me all this way, thank you! I know this post was long, but I hope you found some value in it.

react-spring is an incredible tool for creating robust animations with React. It can be low-level at times, which can make it hard to figure out for non-trivial use cases. But it’s this low-level nature that makes it so flexible.



Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces, it can be daunting to figure out how they all fit together into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new, or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
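
For reference, the generated handler looks roughly like this (the exact boilerplate varies between Serverless versions):

'use strict';

module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: "Go Serverless v1.0! Your function executed successfully!",
        input: event
      },
      null,
      2
    )
  };
};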

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
#   The following are a few example events you can configure
#   NOTE: Please make sure to change your handler code to work with those events
#   Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Below the method entry from the configuration above, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s all there is to it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';
const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});


module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the API Gateway that Serverless created for us, then find the Invoke URL. It would look something like this.

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")
  .then(resp => resp.json())
  .then(resp => {
    console.log(resp);
  });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

// Same as before
  events:
    - http:
        path: upload
        method: post
        cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
 - Effect: "Allow"
   Action:
     - "s3:*"
   Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.
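
Put together, the functions block in serverless.yaml ends up looking something like this (the function’s key name is yours to choose; I’m calling it upload here):

functions:
  upload:
    handler: handler.upload
    events:
      - http:
          path: upload
          method: post
          cors: true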

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names for the resized images, before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const  params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };


  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({ 
        success: true, 
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}` 
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the uploaded files, resizes them with Jimp (if needed), and uploads the result to S3. The final result is below.

'use strict';
const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");


const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});


const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  var params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({ 
        success: true, 
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}` 
      }));
    });
  });
};


module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
        image.getBuffer(image.getMIME(), (err, body) => {
          if (err) {
            return res(CorsResponse({ error: true, message: err }));
          }
          return res(uploadToS3(newName, body));
        });
      } else {
        image.getBuffer(image.getMIME(), (err, body) => {
          if (err) {
            return res(CorsResponse({ error: true, message: err }));
          }
          return res(uploadToS3(newName, body));
        });
      }
    });
  });
};

I’m sorry to dump so much code on you but — this being a post about AWS Lambda and Serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);


  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
  .then(resp => resp.json())
  .then(res => {
    if (res.error) {
      // handle errors
    } else {
      // success - woo hoo - update state as needed
    }
  });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire services folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time, data usage, cold start performance, etc. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

// Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");
module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] is to tell webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
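
For example, picking that (made-up) xyz field off inside the Lambda would look like this:

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  // Non-file form fields come through as plain properties on the parsed payload
  const xyz = formPayload.xyz; // "Hi there"
  // ...same resizing and uploading as before
};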

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
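
As a quick sketch (the names here are made up), that configuration might look like:

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name

…and then the Lambda reads it with process.env.BUCKET_NAME.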

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.


React Suspense in Practice

This post is about understanding how Suspense works, what it does, and seeing how it can integrate into a real web app. We'll look at how to integrate routing and data loading with Suspense in React. For routing, I'll be using vanilla JavaScript, and I'll be using my own micro-graphql-react GraphQL library for data.

If you're wondering about React Router, it seems great, but I've never had the chance to use it. My own side project has a simple enough routing story that I always just did it by hand. Besides, using vanilla JavaScript will give us a better look at how Suspense works.

A little background

Let’s talk about Suspense itself. Kingsley Silas provides a thorough overview of it, but the first thing to note is that it's still an experimental API. That means — and React’s docs say the same — not to lean on it yet for production-ready work. There’s always a chance it will change between now and when it’s fully complete, so please bear that in mind.

That said, Suspense is all about maintaining a consistent UI in the face of asynchronous dependencies, such as lazily loaded React components, GraphQL data, etc. Suspense provides low-level APIs that allow you to easily maintain your UI while your app is managing these things.

But what does "consistent" mean in this case? It means not rendering a UI that's partially complete. It means that, if there are three data sources on the page and one of them has completed, we don't want to render that updated piece of state with a spinner next to the two other, now-outdated pieces of state.

What we do want to do is indicate to the user that data are loading, while continuing to show either the old UI, or an alternative UI which indicates we're waiting on data; Suspense supports either, which I'll get into.

What exactly Suspense does

This is all less complicated than it may seem. Traditionally in React, you'd set state, and your UI would update. Life was simple. But it also led to the sorts of inconsistencies described above. What Suspense adds is the ability to have a component notify React at render time that it's waiting for asynchronous data; this is called suspending, and it can happen anywhere in a component's tree, as many times as needed, until the tree is ready. When a component suspends, React will decline to render the pending state update until all suspended dependencies have been satisfied.

So what happens when a component suspends? React will look up the tree, find the first <Suspense> component, and render its fallback. I'll be providing plenty of examples, but for now, know that you can provide this:

<Suspense fallback={<Loading />}>

…and the <Loading /> component will render if any child components of <Suspense> are suspended.

But what if we already have a valid, consistent UI, and the user loads new data, causing a component to suspend? This would cause the entire existing UI to un-render, and the fallback to show. That'd still be consistent, but hardly a good UX. We'd prefer the old UI stay on the screen while the new data are loading.

To support this, React provides a second API, useTransition, which effectively makes a state change in memory. In other words, it allows you to set state in memory while keeping your existing UI on screen; React will literally keep a second copy of your component tree rendered in memory, and set state on that tree. Components may suspend, but only in memory, so your existing UI will continue to show on the screen. When the state change is complete, and all suspensions have resolved, the in-memory state change will render onto the screen. Obviously you want to provide feedback to your user while this is happening, so useTransition provides a pending boolean, which you can use to display some sort of inline "loading" notification while suspensions are being resolved in memory.

When you think about it, you probably don't want your existing UI to show indefinitely while the load is pending. If the user tries to do something, and a long period of time elapses before it's finished, you should probably consider the existing UI outdated and invalid. At this point, you probably will want your component tree to suspend, and your <Suspense> fallback to display.

To accomplish this, useTransition takes a timeoutMs value. This indicates the amount of time you're willing to let the in-memory state change run, before you suspend.

const Component = props => {
  const [startTransition, isPending] = useTransition({ timeoutMs: 3000 });
  // .....
};

Here, startTransition is a function. When you want to run a state change "in memory," you call startTransition, and pass a lambda expression that does your state change.

startTransition(() => {
  dispatch({ type: LOAD_DATA_OR_SOMETHING, value: 42 });
})

You can call startTransition wherever you want. You can pass it to child components, etc. When you call it, any state change you perform will happen in memory. If a suspension happens, isPending will become true, which you can use to display some sort of inline loading indicator.

That's it. That's what Suspense does.

The rest of this post will get into some actual code to leverage these features.

Example: Navigation

To tie navigation into Suspense, you'll be happy to know that React provides a primitive to do this: React.lazy. It's a function that takes a lambda expression that returns a Promise, which resolves to a React component. The result of this function call becomes your lazily loaded component. It sounds complicated, but it looks like this:

const SettingsComponent = lazy(() => import("./modules/settings/settings"));

SettingsComponent is now a React component that, when rendered (but not before), will call the function we passed in, which will call import() and load the JavaScript module located at ./modules/settings/settings.

The key piece is this: while that import() is in flight, the component rendering SettingsComponent will suspend. It seems we have all the pieces in hand, so let's put them together and build some Suspense-based navigation.

Navigation helpers

But first, for context, I'll briefly cover how navigation state is managed in this app, so the Suspense code will make more sense.

I'll be using my booklist app. It's just a side project of mine I mainly keep around to mess around with bleeding-edge web technology. It was written by me alone, so expect parts of it to be a bit unrefined (especially the design).

The app is small, with about eight different modules a user can browse to, without any deeper navigation. Any search state a module might use is stored in the URL’s query string. With this in mind, there are a few methods which scrape the current module name, and search state from the URL. This code uses the query-string and history packages from npm, and looks somewhat like this (some details have been removed for simplicity, like authentication).

import createHistory from "history/createBrowserHistory";
import queryString from "query-string";
export const history = createHistory();
export function getCurrentUrlState() {
  let location = history.location;
  let parsed = queryString.parse(location.search);
  return {
    pathname: location.pathname,
    searchState: parsed
  };
}
export function getCurrentModuleFromUrl() {
  let location = history.location;
  return location.pathname.replace(/\//g, "").toLowerCase();
}

I have an appSettings reducer that holds the current module and searchState values for the app, and uses these methods to sync with the URL when needed.
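
As a rough sketch (the real reducer holds more than this), the URL_SYNC handling looks conceptually like:

const URL_SYNC = "URL_SYNC";

function appSettingsReducer(state, action) {
  switch (action.type) {
    case URL_SYNC:
      // Re-read the current module and search state from the URL helpers above.
      // Reading the URL inside the reducer is a simplification for this sketch.
      return {
        ...state,
        module: getCurrentModuleFromUrl(),
        searchState: getCurrentUrlState().searchState
      };
    default:
      return state;
  }
}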

The pieces of a Suspense-based navigation

Let's get started with some Suspense work. First, let's create the lazy-loaded components for our modules.

const ActivateComponent = lazy(() => import("./modules/activate/activate"));
const AuthenticateComponent = lazy(() =>
  import("./modules/authenticate/authenticate")
);
const BooksComponent = lazy(() => import("./modules/books/books"));
const HomeComponent = lazy(() => import("./modules/home/home"));
const ScanComponent = lazy(() => import("./modules/scan/scan"));
const SubjectsComponent = lazy(() => import("./modules/subjects/subjects"));
const SettingsComponent = lazy(() => import("./modules/settings/settings"));
const AdminComponent = lazy(() => import("./modules/admin/admin"));

Now we need a method that chooses the right component based on the current module. If we were using React Router, we'd have some nice <Route /> components. Since we're rolling this manually, a switch will do.

export const getModuleComponent = moduleToLoad => {
  if (moduleToLoad == null) {
    return null;
  }
  switch (moduleToLoad.toLowerCase()) {
    case "activate":
      return ActivateComponent;
    case "authenticate":
      return AuthenticateComponent;
    case "books":
      return BooksComponent;
    case "home":
      return HomeComponent;
    case "scan":
      return ScanComponent;
    case "subjects":
      return SubjectsComponent;
    case "settings":
      return SettingsComponent;
    case "admin":
      return AdminComponent;
  }
  
  return HomeComponent;
};

The whole thing put together

With all the boring setup out of the way, let's see what the entire app root looks like. There's a lot of code here, but I promise, relatively few of these lines pertain to Suspense, and I'll cover all of it.

const App = () => {
  const [startTransitionNewModule, isNewModulePending] = useTransition({
    timeoutMs: 3000
  });
  const [startTransitionModuleUpdate, moduleUpdatePending] = useTransition({
    timeoutMs: 3000
  });
  let appStatePacket = useAppState();
  let [appState, _, dispatch] = appStatePacket;
  let Component = getModuleComponent(appState.module);
  useEffect(() => {
    startTransitionNewModule(() => {
      dispatch({ type: URL_SYNC });
    });
  }, []);
  useEffect(() => {
    return history.listen(location => {
      if (appState.module != getCurrentModuleFromUrl()) {
        startTransitionNewModule(() => {
          dispatch({ type: URL_SYNC });
        });
      } else {
        startTransitionModuleUpdate(() => {
          dispatch({ type: URL_SYNC });
        });
      }
    });
  }, [appState.module]);
  return (
    <AppContext.Provider value={appStatePacket}>
      <ModuleUpdateContext.Provider value={moduleUpdatePending}>
        <div>
          <MainNavigationBar />
          {isNewModulePending ? <Loading /> : null}
          <Suspense fallback={<LongLoading />}>
            <div id="main-content" style={{ flex: 1, overflowY: "auto" }}>
              {Component ? <Component updating={moduleUpdatePending} /> : null}
            </div>
          </Suspense>
        </div>
      </ModuleUpdateContext.Provider>
    </AppContext.Provider>
  );
};

First, we have two different calls to useTransition. We'll use one for routing to a new module, and the other for updating search state for the current module. Why the difference? Well, when a module's search state is updating, that module will likely want to display an inline loading indicator. That updating state is held by the moduleUpdatePending variable, which you'll see I put on context for the active module to grab, and use as needed:

<div>
  <MainNavigationBar />
  {isNewModulePending ? <Loading /> : null}
  <Suspense fallback={<LongLoading />}>
    <div id="main-content" style={{ flex: 1, overflowY: "auto" }}>
      {Component ? <Component updating={moduleUpdatePending} /> : null} // highlight
    </div>
  </Suspense>
</div>

The appStatePacket is the result of the app state reducer I discussed above (but did not show). It contains various pieces of application state which rarely change (color theme, offline status, current module, etc).

let appStatePacket = useAppState();

A little later, I grab whichever component happens to be active, based on the current module name. Initially this will be null.

let Component = getModuleComponent(appState.module);

The first call to useEffect will tell our appSettings reducer to sync with the URL at startup.

useEffect(() => {
  startTransitionNewModule(() => {
    dispatch({ type: URL_SYNC });
  });
}, []);

Since this is the initial module the web app navigates to, I wrap it in startTransitionNewModule to indicate that a fresh module is loading. While it might be tempting to have the appSettings reducer have the initial module name as its initial state, doing this prevents us from calling our startTransitionNewModule callback, which means our Suspense boundary would render the fallback immediately, instead of after the timeout.

The next call to useEffect sets up a history subscription. No matter what, when the url changes we tell our app settings to sync against the URL. The only difference is which startTransition that same call is wrapped in.

useEffect(() => {
  return history.listen(location => {
    if (appState.module != getCurrentModuleFromUrl()) {
      startTransitionNewModule(() => {
        dispatch({ type: URL_SYNC });
      });
    } else {
      startTransitionModuleUpdate(() => {
        dispatch({ type: URL_SYNC });
      });
    }
  });
}, [appState.module]);

If we're browsing to a new module, we call startTransitionNewModule. If we're loading a component that hasn't been loaded already, React.lazy will suspend, and the pending indicator visible only to the app's root will be set, which will show a loading spinner at the top of the app while the lazy component is fetched and loaded. Because of how useTransition works, the current screen will continue to show for three seconds. If that time expires and the component is still not ready, our UI will suspend, and the fallback will render, which will show the <LongLoading /> component:

{isNewModulePending ? <Loading /> : null}
<Suspense fallback={<LongLoading />}>
  <div id="main-content" style={{ flex: 1, overflowY: "auto" }}>
    {Component ? <Component updating={moduleUpdatePending} /> : null}
  </div>
</Suspense>

If we're not changing modules, we call startTransitionModuleUpdate:

startTransitionModuleUpdate(() => {
  dispatch({ type: URL_SYNC });
});

If the update causes a suspension, the pending indicator we're putting on context will be triggered. The active component can detect that and show whatever inline loading indicator it wants. As before, if the suspension takes longer than three seconds, the same Suspense boundary from before will be triggered... unless, as we'll see later, there's a Suspense boundary lower in the tree.

One important thing to note is that these three-second timeouts apply not only to the component loading, but also to it being ready to display. If the component loads in two seconds, and, when rendering in memory (since we're inside of a startTransition call), suspends, the useTransition will continue to wait for up to one more second before suspending.

In writing this blog post, I used Chrome's slow network modes to help force loading to be slow, to test my Suspense boundaries. The settings are in the Network tab of Chrome's dev tools.

Let's open our app to the settings module. This will be called:

dispatch({ type: URL_SYNC });

Our appSettings reducer will sync with the URL, then set module to "settings." This will happen inside of startTransitionNewModule so that, when the lazy-loaded component attempts to render, it'll suspend. Since we're inside startTransitionNewModule, the isNewModulePending will switch over to true, and the <Loading /> component will render.

If the component is still not ready to render within three seconds, the in-memory version of our component tree will switch over, suspend, and our Suspense boundary will render the <LongLoading /> component.
When it’s done, the settings module will show.

So what happens when we browse somewhere new? Basically the same thing as before, except this call:

dispatch({ type: URL_SYNC });

…will come from the second instance of useEffect. Let's browse to the books module and see what happens. First, the inline spinner shows as expected:

If the three-second timeout elapses, our Suspense boundary will render its fallback:
And, eventually, our books module loads:

Searching and updating

Let's stay within the books module, and update the URL search string to kick off a new search. Recall from before that we were detecting the same module in that second useEffect call and using a dedicated useTransition call for it. From there, we were putting the pending indicator on context for whichever module was active for us to grab and use.

Let's see some code to actually use that. There's not really much Suspense-related code here. I’m grabbing the value from context, and if true, rendering an inline spinner on top of my existing results. Recall that this happens when a useTransition call has begun, and the app is suspended in memory. While that’s happening, we continue to show the existing UI, but with this loading indicator.

const BookResults: SFC<{ books: any; uiView: any }> = ({ books, uiView }) => {
  const isUpdating = useContext(ModuleUpdateContext);
  return (
    <>
      {!books.length ? (
        <div
          className="alert alert-warning"
          style={{ marginTop: "20px", marginRight: "5px" }}
        >
          No books found
        </div>
      ) : null}
      {isUpdating ? <Loading /> : null}
      {uiView.isGridView ? (
        <GridView books={books} />
      ) : uiView.isBasicList ? (
        <BasicListView books={books} />
      ) : uiView.isCoversList ? (
        <CoversView books={books} />
      ) : null}
    </>
  );
};

Let's set a search term and see what happens. First, the inline spinner displays.

Then, if the useTransition timeout expires, we'll get the Suspense boundary's fallback. The books module defines its own Suspense boundary in order to provide a more fine-tuned loading indicator, which looks like this:

This is a key point. When making Suspense boundary fallbacks, try not to throw up any sort of spinner and "loading" message. That made sense for our top-level navigation because there's not much else to do. But when you're in a specific part of your application, try to make your fallback re-use many of the same components with some sort of loading indicator where the data would be — but with everything else disabled.

This is what the relevant components look like for my books module:

const RenderModule: SFC<{}> = ({}) => {
  const uiView = useBookSearchUiView();
  const [lastBookResults, setLastBookResults] = useState({
    totalPages: 0,
    resultsCount: 0
  });
  return (
    <div className="standard-module-container margin-bottom-lg">
      <Suspense fallback={<Fallback uiView={uiView} {...lastBookResults} />}>
        <MainContent uiView={uiView} setLastBookResults={setLastBookResults} />
      </Suspense>
    </div>
  );
};
const Fallback: SFC<{
  uiView: BookSearchUiView;
  totalPages: number;
  resultsCount: number;
}> = ({ uiView, totalPages, resultsCount }) => {
  return (
    <>
      <BooksMenuBarDisabled
        totalPages={totalPages}
        resultsCount={resultsCount}
      />
      {uiView.isGridView ? (
        <GridViewShell />
      ) : (
        <h1>
          Books are loading <i className="fas fa-cog fa-spin"></i>
        </h1>
      )}
    </>
  );
};

A quick note on consistency

Before we move on, I'd like to point out one thing from the earlier screenshots. Look at the inline spinner that displays while the search is pending, then look at the screen when that search suspended, and next, the finished results:

Notice how there's a "C++" label to the right of the search pane, with an option to remove it from the search query? Or rather, notice how that label is only on the second two screenshots? The moment the URL updates, the application state governing that label is updated; however, that state does not initially display. Initially, the state update suspends in memory (since we used useTransition), and the prior UI continues to show.

Then the fallback renders. The fallback renders a disabled version of that same search bar, which does show the current search state (by choice). We've now removed our prior UI (since by now it’s quite old, and stale) and are waiting on the search shown in the disabled menu bar.

This is the sort of consistency Suspense gives you, for free.

You can spend your time crafting nice application states, and React does the leg work of surmising whether things are ready, without you needing to juggle promises.

Nested Suspense boundaries

Let's suppose our top-level navigation takes a while to load our books component to the extent that our “Still loading, sorry” spinner from the Suspense boundary renders. From there, the books component loads and the new Suspense boundary inside the books component renders. But, then, as rendering continues, our book search query fires, and suspends. What will happen? Will the top-level Suspense boundary continue to show, until everything is ready, or will the lower-down Suspense boundary in books take over?

The answer is the latter. As new Suspense boundaries render lower in the tree, their fallbacks will replace whatever ancestor Suspense fallback was already showing. There's currently an unstable API to override this, but if you're doing a good job of crafting your fallbacks, this is probably the behavior you want. You don't want “Still loading, sorry” to just keep showing. Rather, as soon as the books component is ready, you absolutely want to display that shell with the more targeted waiting message.

Now, what if our books module loads and starts to render while the startTransition spinner is still showing and then suspends? In other words, imagine that our startTransition has a timeout of three seconds, the books component renders, the nested Suspense boundary is in the component tree after one second, and the search query suspends. Will the remaining two seconds elapse before that new nested Suspense boundary renders the fallback, or will the fallback show immediately? The answer, perhaps surprisingly, is that the new Suspense fallback will show immediately by default. That’s because it's best to show a new, valid UI as quickly as possible, so the user can see that things are happening, and progressing. 

How data fits in

Navigation is fine, but how does data loading fit into all of this?

It fits in completely and transparently. Data loading triggers suspensions just like navigation with React.lazy, and it hooks into all the same useTransition and Suspense boundaries. This is what's so amazing about Suspense: all your async dependencies seamlessly work in this same system. Managing these various async requests manually to ensure consistency was a nightmare before Suspense, which is precisely why nobody did it. Web apps were notorious for cascading spinners that stopped at unpredictable times, producing inconsistent UIs that were only partially finished.

OK, but how do we actually tie data loading into this? Data loading in Suspense is paradoxically both more complex, and also simple.

I'll explain.

If you're waiting on data, you'll throw a promise in the component that reads (or attempts to read) the data. The promise should be consistent based on the data request. So, four repeated requests for that same "C++" search query should throw the same, identical promise. This implies some sort of caching layer to manage all this. You'll likely not write this yourself. Instead, you'll just hope, and wait for the data library you use to update itself to support Suspense.
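
To make that contract concrete, here's a minimal sketch of the idea (not how micro-graphql-react actually implements it): a tiny cache keyed by the request, so repeated reads for the same request throw the same promise.

const cache = new Map();

export function fetchJsonSuspense(url) {
  if (!cache.has(url)) {
    const entry = { status: "pending", result: null, promise: null };
    entry.promise = fetch(url)
      .then(resp => resp.json())
      .then(data => {
        entry.status = "done";
        entry.result = data;
      })
      .catch(err => {
        entry.status = "error";
        entry.result = err;
      });
    cache.set(url, entry);
  }

  const entry = cache.get(url);
  if (entry.status === "pending") {
    // Suspense catches this thrown promise and re-renders once it settles
    throw entry.promise;
  }
  if (entry.status === "error") {
    throw entry.result;
  }
  return entry.result;
}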

This is already done in my micro-graphql-react library. Instead of using the useQuery hook, you’ll use the useSuspenseQuery hook, which has an identical API, but throws a consistent promise when you're waiting on data.

Wait, what about preloading?!

Has your brain turned to mush reading other things on Suspense that talked about waterfalls, fetch-on-render, preloading, etc? Don't worry about it. Here's what it all means.

Let's say you lazy load the books component, which renders and then requests some data, which causes a new Suspense. The network request for the component and the network request for the data will happen one after the other—in a waterfall fashion.

But here's the key part: the application state that led to whatever initial query ran when the component loaded was already available when you started loading the component (which, in this case, is the URL). So why not "start" the query as soon as you know you'll need it? As soon as you browse to /books, why not fire off the current search query right then and there, so it's already in flight when the component loads?

The micro-graphql-react module does indeed have a preload method, and I urge you to use it. Preloading data is a nice performance optimization, but it has nothing to do with Suspense. Classic React apps could (and should) preload data as soon as they know they'll need it. Vue apps should preload data as soon as they know they'll need it. Svelte apps should... you get the point.

Preloading data is orthogonal to Suspense, which is something you can do with literally any framework. It’s also something we all should have been doing already, even though nobody else was.

But seriously, how do you preload?

That's up to you. At the very least, the logic to run the current search absolutely needs to be completely separated into its own, standalone module. You should literally make sure this preload function is in a file by itself. Don't rely on webpack to treeshake; you'll likely face abject sadness the next time you audit your bundles.
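
A sketch of what that standalone module could look like (the runBooksSearchQuery helper here is hypothetical; in the real app, micro-graphql-react's preloading does this work):

// booksPreload.js - deliberately free of any component imports,
// so it lands in its own small bundle
import { getCurrentUrlState } from "./urlHelpers"; // wherever the URL helpers from earlier live
import { runBooksSearchQuery } from "./booksSearchCache"; // hypothetical: kicks off and caches the GraphQL search

export default function booksPreload() {
  const { searchState } = getCurrentUrlState();
  // Fire the search now, so the request is already in flight
  // while the books component's JavaScript is still downloading
  runBooksSearchQuery(searchState);
}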

You have a preload() method in its own bundle, so call it. Call it when you know you're about to navigate to that module. I assume React Router has some sort of API to run code on a navigation change. For the vanilla routing code above, I call the method in that routing switch from before. I had omitted it for brevity, but the books entry actually looks like this:

switch (moduleToLoad.toLowerCase()) {
  case "activate":
    return ActivateComponent;
  case "authenticate":
    return AuthenticateComponent;
  case "books":
    // preload!!!
    booksPreload();
    return BooksComponent;

That's it. Here's a live demo to play around with:

To modify the Suspense timeout value, which defaults to 3000ms, navigate to Settings, and check out the misc tab. Just be sure to refresh the page after modifying it.

Wrapping up

I've seldom been as excited for anything in the web dev ecosystem as I am for Suspense. It's an incredibly ambitious system for managing one of the trickiest problems in web development: asynchrony.
