Comparing Node JavaScript to JavaScript in the Browser

Being able to understand Node continues to be an important skill if you’re a front-end developer. Deno has arrived as another way to run JavaScript outside the browser, but the huge ecosystem of tools and software built with Node means it’s not going anywhere anytime soon.

If you’ve mainly written JavaScript that runs in the browser and you’re looking to get more of an understanding of the server side, many articles will tell you that Node JavaScript is a great way to write server-side code and capitalize on your JavaScript experience.

I agree, but there are a lot of challenges jumping into Node.js, even if you’re experienced at authoring client-side JavaScript. This article assumes you’ve got Node installed, and you’ve used it to build front-end apps, but want to write your own APIs and tools using Node.

For a beginner’s explanation of Node and npm, check out Jamie Corkhill’s “Getting Started With Node” on Smashing Magazine.

Asynchronous JavaScript

We don’t need to write a whole lot of asynchronous code in the browser. The most common use of asynchronous code in the browser is fetching data from an API using fetch (or XMLHttpRequest if you’re old-school). Other uses of async code might include using setInterval, setTimeout, or responding to user input events, but we can get pretty far writing JavaScript UI without being asynchronous JavaScript geniuses.
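For example, the most common pattern is a one-off request with a couple of chained callbacks (the URL here is just a placeholder):

// Typical browser async code: fire a request, handle the response when it arrives
fetch("https://example.com/api/quizzes") // placeholder URL
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error(error);
  });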

If you’re using Node, you will nearly always be writing asynchronous code. From the beginning, Node has been built to leverage a single-threaded event loop using asynchronous callbacks. The Node team blogged in 2011 about how “Node.js promotes an asynchronous coding style from the ground up.” In Ryan Dahl’s talk announcing Node.js in 2009, he talks about the performance benefits of doubling down on asynchronous JavaScript.

The asynchronous-first style is part of the reason Node gained popularity over other attempts at server-side JavaScript implementations such as Netscape’s application servers or Narwhal. However, being forced to write asynchronous JavaScript might cause friction if you aren’t ready for it.

Setting up an example

Let’s say we’re writing a quiz app. We’re going to allow users to build quizzes out of multiple-choice questions to test their friends’ knowledge. You can find a more complete version of what we’ll build at this GitHub repo. You could also clone the entire front-end and back-end to see how it all fits together, or you can take a look at this CodeSandbox (run npm run start to fire it up) and get an idea of what we’re making from there.

[Image: a quiz editor written in Node JavaScript, containing four inputs, two checkboxes, and four buttons.]

The quizzes in our app will consist of a bunch of questions, and each of these questions will have a number of answers to choose from, with only one answer being correct.

We can hold this data in an SQLite database. Our database will contain:

  • A table for quizzes with two columns:
    • an integer ID
    • a text title
  • A table for questions with three columns:
    • an integer ID
    • body text
    • an integer reference matching the ID of the quiz each question belongs to
  • A table for answers with four columns:
    • an integer ID
    • body text
    • whether the answer is correct or not
    • an integer reference matching the ID of the question each answer belongs to

SQLite doesn’t have a boolean data type, so we can hold whether an answer is correct in an integer where 0 is false and 1 is true.

First, we’ll need to initialize npm and install the sqlite3 npm package from the command line:

npm init -y
npm install sqlite3

This will create a package.json file. Let’s edit it and add:

"type":"module"

to the top-level JSON object. This will allow us to use modern ES module syntax.
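For reference, the edited package.json ends up looking something like this (the name and version numbers will vary):

{
  "name": "quiz-app",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "sqlite3": "^5.0.0"
  }
}

Now we can create a Node script to set up our tables. Let’s call our script migrate.js.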

// migrate.js

import sqlite3 from "sqlite3"; 

let db = new sqlite3.Database("quiz.db");
db.serialize(function () {
  // Setting up our tables:
  db.run("CREATE TABLE quiz (quizid INTEGER PRIMARY KEY, title TEXT)");
  db.run("CREATE TABLE question (questionid INTEGER PRIMARY KEY, body TEXT, questionquiz INTEGER, FOREIGN KEY(questionquiz) REFERENCES quiz(quizid))");
  db.run("CREATE TABLE answer (answerid INTEGER PRIMARY KEY, body TEXT, iscorrect INTEGER, answerquestion INTEGER, FOREIGN KEY(answerquestion) REFERENCES question(questionid))");
  // Create a quiz with an id of 0 and a title "my quiz"
  db.run("INSERT INTO quiz VALUES(0,\"my quiz\")");
  // Create a question with an id of 0, a question body,
  // and a link to the quiz using the id 0
  db.run("INSERT INTO question VALUES(0,\"What is the capital of France?\", 0)");
  // Create four answers with unique ids, answer bodies, an integer for whether
  // they're correct or not, and a link to the first question using the id 0
  db.run("INSERT INTO answer VALUES(0,\"Madrid\",0, 0)");
  db.run("INSERT INTO answer VALUES(1,\"Paris\",1, 0)");
  db.run("INSERT INTO answer VALUES(2,\"London\",0, 0)");
  db.run("INSERT INTO answer VALUES(3,\"Amsterdam\",0, 0)");
});
db.close();

I’m not going to explain this code in detail, but it creates the tables we need to hold our data. It will also create a quiz, a question, and four answers, and store all of this in a file called quiz.db. After saving this file, we can run our script from the command line using this command:

node migrate.js

If you like, you can open the database file using a tool like DB Browser for SQLite to double check that the data has been created.

Changing the way you write JavaScript

Let’s write some code to query the data we’ve created.

Create a new file and call it index.js. To access our database, we can import sqlite3, create a new sqlite3.Database, and pass the database file path as an argument. On this database object, we can call the get function, passing in an SQL string to select our quiz and a callback that will log the result:

// index.js
import sqlite3 from "sqlite3";

let db = new sqlite3.Database("quiz.db");

db.get(`SELECT * FROM quiz WHERE quizid  = 0`, (err, row) => {
  if (err) {
    console.error(err.message);
  }
  console.log(row);
  db.close();
});

Running this should print { quizid: 0, title: 'my quiz' } in the console.

How not to use callbacks

Now let’s wrap this code in a function where we can pass the ID in as an argument; we want to access any quiz by its ID. This function will return the database row object we get from the database.

Here’s where we start running into trouble. We can’t simply return the object inside of the callback we pass to db and walk away. This won’t change what our outer function returns. Instead, you might think we can create a variable (let’s call it result) in the outer function and reassign this variable in the callback. Here is how we might attempt this:

// index.js
// Be warned! This code contains BUGS
import sqlite3 from "sqlite3";

function getQuiz(id) {
  let db = new sqlite3.Database("quiz.db");
  let result;
  db.get(`SELECT * FROM quiz WHERE quizid  = ?`, [id], (err, row) => {
    if (err) {
      return console.error(err.message);
    }
    db.close();
    result = row;
  });
  return result;
}
console.log(getQuiz(0));

If you run this code, the console log will print out undefined! What happened?

We’ve run into a disconnect between how we expect JavaScript to run (top to bottom), and how asynchronous callbacks run. The getQuiz function in the above example runs like this:

  1. We declare the result variable with let result;. We haven’t assigned anything to this variable so its value is undefined.
  2. We call the db.get() function. We pass it an SQL string, the ID, and a callback. But our callback won’t run yet! Instead, the SQLite package starts a task in the background to read from the quiz.db file. Reading from the file system takes a relatively long time, so this API lets our user code move to the next line while Node.js reads from the disk in the background.
  3. Our function returns result. As our callback hasn’t run yet, result still holds a value of undefined.
  4. SQLite finishes reading from the file system and runs the callback we passed, closing the database and assigning the row to the result variable. Assigning this variable makes no difference as the function has already returned its result.

Passing in callbacks

How do we fix this? Before 2015, the way to fix this would be to use callbacks. Instead of only passing the quiz ID to our function, we pass the quiz ID and a callback which will receive the row object as an argument.

Here’s how this looks:

// index.js
import sqlite3 from "sqlite3";
function getQuiz(id, callback) {
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM quiz WHERE quizid  = ?`, [id], (err, row) => {
    if (err) {
       console.error(err.message);
    }
    else {
       callback(row);
    }
    db.close();
  });
}
getQuiz(0,(quiz)=>{
  console.log(quiz);
});

That does it. It’s a subtle difference, and one that forces you to change the way your user code looks, but it means now our console.log runs after the query is complete.

Callback hell

But what if we need to do multiple consecutive asynchronous calls? For instance, what if we were trying to find out which quiz an answer belonged to, and we only had the ID of the answer?

First, I’m going to refactor getQuiz to a more general get function, so we can pass in the table and column to query, as well as the ID:

Unfortunately, we are unable to use the (more secure) SQL parameters for parameterizing the table name, so we’re going to switch to using a template string instead. In production code you would need to scrub this string to prevent SQL injection.

function get(params, callback) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
    callback(err, row);
    db.close();
  });
}

Another issue is that there might be an error reading from the database. Our user code will need to know whether each database query has had an error; otherwise it shouldn’t continue querying the data. We’ll use the Node.js convention of passing an error object as the first argument of our callback. Then we can check if there’s an error before moving forward.

Let’s take our answer with an id of 2 and check which quiz it belongs to. Here’s how we can do this with callbacks:

// index.js
import sqlite3 from "sqlite3";

function get(params, callback) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
    callback(err, row);
    db.close();
  });
}

get({ table: "answer", column: "answerid", value: 2 }, (err, answer) => {
  if (err) {
    console.log(err);
  } else {
    get(
      { table: "question", column: "questionid", value: answer.answerquestion },
      (err, question) => {
        if (err) {
          console.log(err);
        } else {
          get(
            { table: "quiz", column: "quizid", value: question.questionquiz },
            (err, quiz) => {
              if (err) {
                console.log(err);
              } else {
                // This is the quiz our answer belongs to
                console.log(quiz);
              }
            }
          );
        }
      }
    );
  }
});

Woah, that’s a lot of nesting! Every time we get an answer back from the database, we have to add two layers of nesting — one to check for an error, and one for the next callback. As we chain more and more asynchronous calls our code gets deeper and deeper.

We could partially prevent this by using named functions instead of anonymous functions, which would keep the nesting shallower but make our code less concise. We’d also have to think of names for all of these intermediate functions. Thankfully, promises arrived in Node back in 2015 to help with chained asynchronous calls like this.

Promises

Wrapping asynchronous tasks with promises allows you to prevent a lot of the nesting in the previous example. Rather than having deeper and deeper nested callbacks, we can pass a callback to a Promise’s then function.

First, let’s change our get function so it wraps the database query with a Promise:

// index.js
import sqlite3 from "sqlite3";
function get(params) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");

  return new Promise(function (resolve, reject) {
    db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
      if (err) {
        return reject(err);
      }
      db.close();
      resolve(row);
    });
  });
}

Now our code to search for which quiz an answer is a part of can look like this:

get({ table: "answer", column: "answerid", value: 2 })
  .then((answer) => {
    return get({
      table: "question",
      column: "questionid",
      value: answer.answerquestion,
    });
  })
  .then((question) => {
    return get({
      table: "quiz",
      column: "quizid",
      value: question.questionquiz,
    });
  })
  .then((quiz) => {
    console.log(quiz);
  })
  .catch((error) => {
    console.log(error);
  });

That’s a much nicer way to handle our asynchronous code. And we no longer have to individually handle errors for each call, but can use the catch function to handle any errors that happen in our chain of functions.

We still need to write a lot of callbacks to get this working. Thankfully, there’s a newer API to help! When Node 7.6.0 was released, it updated its JavaScript engine to V8 5.5, which included support for writing ES2017 async/await functions.

Async/Await

With async/await we can write our asynchronous code almost the same way we write synchronous code. Sarah Drasner has a great post explaining async/await.

When you have a function that returns a Promise, you can use the await keyword before calling it, and it will prevent your code from moving to the next line until the Promise is resolved. As we’ve already refactored the get() function to return a promise, we only need to change our user-code:

async function printQuizFromAnswer() {
  const answer = await get({ table: "answer", column: "answerid", value: 2 });
  const question = await get({
    table: "question",
    column: "questionid",
    value: answer.answerquestion,
  });
  const quiz = await get({
    table: "quiz",
    column: "quizid",
    value: question.questionquiz,
  });
  console.log(quiz);
}

printQuizFromAnswer();

This looks much more familiar to code that we’re used to reading. Just this year, Node released top-level await. This means we can make this example even more concise by removing the printQuizFromAnswer() function wrapping our get() function calls.
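With top-level await (available in ES modules in recent versions of Node) and the get() function unchanged, the user code could be as short as this:

// index.js: using top-level await in an ES module
const answer = await get({ table: "answer", column: "answerid", value: 2 });
const question = await get({
  table: "question",
  column: "questionid",
  value: answer.answerquestion,
});
const quiz = await get({
  table: "quiz",
  column: "quizid",
  value: question.questionquiz,
});
console.log(quiz);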

Now we have concise code that will sequentially perform each of these asynchronous tasks. We would also be able to simultaneously fire off other asynchronous functions (like reading from files, or responding to HTTP requests) while we’re waiting for this code to run. This is the benefit of the asynchronous-first style.

Because so many tasks in Node are asynchronous, such as reading from the network or accessing a database or filesystem, it’s especially important to understand these concepts, even though they come with a bit of a learning curve.

Using SQL to its full potential

There’s an even better way! Instead of having to worry about these asynchronous calls to get each piece of data, we could use SQL to grab all the data we need in one big query. We can do this with an SQL JOIN query:

// index.js
import sqlite3 from "sqlite3";

function quizFromAnswer(answerid, callback) {
  let db = new sqlite3.Database("quiz.db");
  db.get(
    `SELECT *,a.body AS answerbody, ques.body AS questionbody FROM answer a 
    INNER JOIN question ques ON a.answerquestion=ques.questionid 
    INNER JOIN quiz quiz ON ques.questionquiz = quiz.quizid 
    WHERE a.answerid = ?;`,
    [answerid],
    (err, row) => {
      if (err) {
        console.log(err);
      }
      callback(err, row);
      db.close();
    }
  );
}
quizFromAnswer(2, (e, r) => {
  console.log(r);
});

This will return us all the data we need about our answer, question, and quiz in one big object. We’ve also renamed each body column for answers and questions to answerbody and questionbody to differentiate them. As you can see, dropping more logic into the database layer can simplify your JavaScript (as well as possibly improve performance).

If you’re using a relational database like SQLite, then you have a whole other language to learn, with a whole lot of different features that could save time and effort and increase performance. This adds more to the pile of things to learn for writing Node.

Node APIs and conventions

There are a lot of new Node APIs to learn when switching from browser code to Node.js.

Any database connections and/or reads of the filesystem use APIs that we don’t have in the browser (yet). We also have new APIs to set up HTTP servers. We can make checks on the operating system using the OS module, and we can encrypt data with the Crypto module. Also, to make an HTTP request from Node (something we do in the browser all the time), we don’t have a fetch or XMLHttpRequest function. Instead, we need to import the https module. However, a recent pull request in the Node.js repository shows that fetch in Node appears to be on the way! There are still many mismatches between browser and Node APIs. This is one of the problems that Deno has set out to solve.
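As a rough sketch, making a GET request with the built-in https module looks something like this (the URL is just a placeholder):

// Sketch: a GET request using the built-in https module
import https from "https";

https
  .get("https://example.com/api/quizzes", (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(JSON.parse(body)));
  })
  .on("error", (err) => console.error(err));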

We also need to know about Node conventions, including the package.json file. Most front-end developers will be pretty familiar with this if they’ve used build tools. If you’re looking to publish a library, the part you might not be used to is the main property in the package.json file. This property contains a path that will point to the entry-point of the library.
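For example, a library’s package.json might point at its entry file like this (the name and path are just an illustration):

{
  "name": "my-quiz-library",
  "version": "1.0.0",
  "main": "./lib/index.js"
}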

There are also conventions like error-first callbacks: where a Node API will take a callback which takes an error as the first argument and the result as the second argument. You could see this earlier in our database code and below using the readFile function.

import fs from 'fs';

fs.readFile('myfile.txt', 'utf8' , (err, data) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(data)
})

Different types of modules

Earlier on, I casually instructed you to throw "type":"module" in your package.json file to get the code samples working. When Node was created in 2009, the creators needed a module system, but none existed in the JavaScript specification. They came up with CommonJS modules to solve this problem. In 2015, a module spec was introduced to JavaScript, causing Node.js to have a module system that was different from native JavaScript modules. After a herculean effort from the Node team we are now able to use these native JavaScript modules in Node.
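For comparison, here is the same sqlite3 import written in each system (they are alternatives, not something you would normally mix in one file):

// CommonJS, the module system Node shipped with:
// const sqlite3 = require("sqlite3");
// const db = new sqlite3.Database("quiz.db");

// Native ES modules, enabled by "type":"module", as used in this article:
import sqlite3 from "sqlite3";
const db = new sqlite3.Database("quiz.db");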

Unfortunately, this means a lot of blog posts and resources will be written using the older module system. It also means that many npm packages won’t use native JavaScript modules, and sometimes there will be libraries that use native JavaScript modules in incompatible ways!

Other concerns

There are a few other concerns we need to think about when writing Node. If you’re running a Node server and there is a fatal exception, the server will terminate and will stop responding to any requests. This means if you make a mistake that’s bad enough on a Node server, your app is broken for everyone. This is different from client-side JavaScript where an edge-case that causes a fatal bug is experienced by one user at a time, and that user has the option of refreshing the page.
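A minimal sketch of why this matters: one request handler that throws takes the whole process down, not just the request that triggered it.

// Sketch: an uncaught error in one handler crashes the server for everyone
import http from "http";

const server = http.createServer((req, res) => {
  if (req.url === "/boom") {
    // Nothing catches this, so the whole Node process exits
    throw new Error("Something went wrong");
  }
  res.end("ok");
});

server.listen(3000);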

Security is something we should already be worried about in the front end with cross-site scripting and cross-site request forgery. But a back-end server has a wider surface area for attacks with vulnerabilities including brute force attacks and SQL injection. If you’re storing and accessing people’s information with Node you’ve got a big responsibility to keep their data safe.

Conclusion

Node is a great way to use your JavaScript skills to build servers and command line tools. JavaScript is a user-friendly language we’re used to writing. And Node’s async-first nature means you can smash through concurrent tasks quickly. But there are a lot of new things to learn when getting started. Here are the resources I wish I’d seen before jumping in:

And if you are planning to hold data in an SQL database, read up on SQL Basics.



CSS in TypeScript with vanilla-extract

vanilla-extract is a new framework-agnostic CSS-in-TypeScript library. It’s a lightweight, robust, and intuitive way to write your styles. vanilla-extract isn’t a prescriptive CSS framework, but a flexible piece of developer tooling. CSS tooling has been a relatively stable space over the last few years, with PostCSS, Sass, CSS Modules, and styled-components all coming out before 2017 (some long before that), and they remain popular today. Tailwind is one of a few tools that has shaken things up in CSS tooling over the last few years.

vanilla-extract aims to shake things up again. It was released this year and has the benefit of being able to leverage some recent trends, including:

  • JavaScript developers switching to TypeScript
  • Browser support for CSS custom properties
  • Utility-first styling

There are a whole bunch of clever innovations in vanilla-extract that I think make it a big deal.

Zero runtime

CSS-in-JS libraries usually inject styles into the document at runtime. This has benefits, including critical CSS extraction and dynamic styling.

But as a general rule of thumb, a separate CSS file is going to be more performant. That’s because JavaScript code needs to go through more expensive parsing/compilation, whereas a separate CSS file can be cached while the HTTP/2 protocol lowers the cost of the extra request. Also, custom properties can now provide a lot of dynamic styling for free.

So, instead of injecting styles at runtime, vanilla-extract takes after Linaria and astroturf. These libraries let you author styles using JavaScript functions that get ripped out at build time and used to construct a CSS file. Although you write vanilla-extract in TypeScript, it doesn’t affect the overall size of your production JavaScript bundle.

TypeScript

A big vanilla-extract value proposition is that you get typing. If it’s important enough to keep the rest of your codebase type-safe, then why not do the same with your styles?

TypeScript provides a number of benefits. First, there’s autocomplete. If you type “fo” then, in a TypeScript-friendly editor, you get a list of font options in a drop-down — fontFamily, fontKerning, fontWeight, or whatever else matches — to choose from. This makes CSS properties discoverable from the comfort of your editor. If you can’t remember the name of fontVariant but know it’s going to start with the word “font”, you can type it and scroll through the options. In VS Code, you don’t need to download any additional tooling to get this to happen.

This really speeds up the authoring of styles.

It also means your editor is watching over your shoulder to make sure you aren’t making any spelling mistakes that could cause frustrating bugs.

vanilla-extract types also provide an explanation of the syntax in their type definition and a link to the MDN documentation for the CSS property you’re editing. This removes a step of frantically Googling when styles are behaving unexpectedly.

[Image: VS Code with the cursor hovering over the fontKerning property; a popup describes what the property does and links to the Mozilla documentation for it.]

Writing in TypeScript means you’re using camel-case names for CSS properties, like backgroundColor. This might be a bit of a change for developers who are used to regular CSS syntax, like background-color.

Integrations

vanilla-extract provides first-class integrations for all the newest bundlers. Here’s a full list of integrations it currently supports:

  • webpack
  • esbuild
  • Vite
  • Snowpack
  • NextJS
  • Gatsby

It’s also completely framework-agnostic. All you need to do is import class names from vanilla-extract, which get converted into strings at build time.

Usage

To use vanilla-extract, you write up a .css.ts file that your components can import. Calls to these functions get converted to hashed and scoped class name strings in the build step. This might sound similar to CSS Modules, and that isn’t a coincidence: the creator of vanilla-extract, Mark Dalgleish, is also co-creator of CSS Modules.

style()

You can create an automatically scoped CSS class using the style() function. You pass in the element’s styles, then export the returned value. Import this value somewhere in your user code, and it’s converted into a scoped class name.

// title.css.ts
import {style} from "@vanilla-extract/css";

export const titleStyle = style({
  backgroundColor: "hsl(210deg,30%,90%)",
  fontFamily: "helvetica, Sans-Serif",
  color: "hsl(210deg,60%,25%)",
  padding: 30,
  borderRadius: 20,
});
// title.ts
import {titleStyle} from "./title.css";

document.getElementById("root").innerHTML = `<h1 class="${titleStyle}">Vanilla Extract</h1>`;

Media queries and pseudo selectors can be included inside style declarations, too:

// title.css.ts
export const titleStyle = style({
  backgroundColor: "hsl(210deg,30%,90%)",
  fontFamily: "helvetica, Sans-Serif",
  color: "hsl(210deg,60%,25%)",
  padding: 30,
  borderRadius: 20,
  "@media": {
    "screen and (max-width: 700px)": {
      padding: 10
    }
  },
  ":hover": {
    backgroundColor: "hsl(210deg,70%,80%)"
  }
});

These style function calls are a thin abstraction over CSS — all of the property names and values map to the CSS properties and values you’re familiar with. One change to get used to is that values can sometimes be declared as a number (e.g. padding: 30) which defaults to a pixel unit value, while some values need to be declared as a string (e.g. padding: "10px 20px 15px 15px").

The properties that go inside the style function can only affect a single HTML node. This means you can’t use nesting to declare styles for the children of an element — something you might be used to in Sass or PostCSS. Instead, you need to style children separately. If a child element needs different styles based on the parent, you can use the selectors property to add styles that are dependent on the parent:

// title.css.ts
export const innerSpan = style({
  selectors: {
    [`${titleStyle} &`]: {
      color: "hsl(190deg,90%,25%)",
      fontStyle: "italic",
      textDecoration: "underline"
    }
  }
});
// title.ts
import {titleStyle,innerSpan} from "./title.css";
document.getElementById("root").innerHTML = 
`<h1 class="${titleStyle}">Vanilla <span class="${innerSpan}">Extract</span></h1>
<span class="${innerSpan}">Unstyled</span>`;

Or you can use the Theming API (which we’ll get to next) to create custom properties in the parent element that are consumed by the child nodes. This might sound restrictive, but it’s intentionally been left this way to increase maintainability in larger codebases. It means that you’ll know exactly where the styles have been declared for each element in your project.

Theming

You can use the createTheme function to build out variables in a TypeScript object:

// title.css.ts
import {style,createTheme } from "@vanilla-extract/css";

// Creating the theme
export const [mainTheme,vars] = createTheme({
  color:{
    text: "hsl(210deg,60%,25%)",
    background: "hsl(210deg,30%,90%)"
  },
  lengths:{
    mediumGap: "30px"
  }
})

// Using the theme
export const titleStyle = style({
  backgroundColor:vars.color.background,
  color: vars.color.text,
  fontFamily: "helvetica, Sans-Serif",
  padding: vars.lengths.mediumGap,
  borderRadius: 20,
});

Then vanilla-extract allows you to make a variant of your theme. TypeScript helps ensure that your variant uses all the same property names, so you get a warning if you forget to add the background property to the theme.

[Image: VS Code showing a theme variant being declared without the background property, with red squiggly underlines warning that the property has been forgotten.]

This is how you might create a regular theme and a dark mode:

// title.css.ts
import {style,createTheme } from "@vanilla-extract/css";

export const [mainTheme,vars] = createTheme({
  color:{
    text: "hsl(210deg,60%,25%)",
    background: "hsl(210deg,30%,90%)"
  },
  lengths:{
    mediumGap: "30px"
  }
})
// Theme variant - note this part does not use the array syntax
export const darkMode = createTheme(vars,{
  color:{
    text:"hsl(210deg,60%,80%)",
    background: "hsl(210deg,30%,7%)",
  },
  lengths:{
    mediumGap: "30px"
  }
})
// Consuming the theme 
export const titleStyle = style({
  backgroundColor: vars.color.background,
  color: vars.color.text,
  fontFamily: "helvetica, Sans-Serif",
  padding: vars.lengths.mediumGap,
  borderRadius: 20,
});

Then, using JavaScript, you can dynamically apply the class names returned by vanilla-extract to switch themes:

// title.ts
import {titleStyle,mainTheme,darkMode} from "./title.css";

document.getElementById("root").innerHTML = 
`<div class="${mainTheme}" id="wrapper">
  <h1 class="${titleStyle}">Vanilla Extract</h1>
  <button onClick="document.getElementById('wrapper').className='${darkMode}'">Dark mode</button>
</div>`

How does this work under the hood? The objects you declare in the createTheme function are turned into CSS custom properties attached to the element’s class. These custom properties are hashed to prevent conflicts. The output CSS for our mainTheme example looks like this:

.src__ohrzop0 {
  --color-text__ohrzop2: hsl(210deg,60%,25%);
  --color-background__ohrzop3: hsl(210deg,30%,90%);
  --lengths-mediumGap__ohrzop4: 30px;
}

And the CSS output of our darkMode theme looks like this:

.src__ohrzop5 {
  --color-text__ohrzop2: hsl(210deg,60%,80%);
  --color-background__ohrzop3: hsl(210deg,30%,7%);
  --lengths-mediumGap__ohrzop4: 30px;
}

So, all we need to change in our user code is the class name. Apply the darkMode class name to the parent element, and the mainTheme custom properties get swapped out for the darkMode ones.

Recipes API

The style and createTheme functions provide enough power to style a website on their own, but vanilla-extract provides a few extra APIs to promote reusability. The Recipes API allows you to create a bunch of variants for an element, which you can choose from in your markup or user code.

First, it needs to be separately installed:

npm install @vanilla-extract/recipes

Here’s how it works. You import the recipe function and pass in an object with the properties base and variants:

// button.css.ts
import { recipe } from '@vanilla-extract/recipes';

export const buttonStyles = recipe({
  base:{
    // Styles that get applied to ALL buttons go in here
  },
  variants:{
    // Styles that we choose from go in here
  }
});

Inside base, you can declare the styles that will be applied to all variants. Inside variants, you can provide different ways to customize the element:

// button.css.ts
import { recipe } from '@vanilla-extract/recipes';
export const buttonStyles = recipe({
  base: {
    fontWeight: "bold",
  },
  variants: {
    color: {
      normal: {
        backgroundColor: "hsl(210deg,30%,90%)",
      },
      callToAction: {
        backgroundColor: "hsl(210deg,80%,65%)",
      },
    },
    size: {
      large: {
        padding: 30,
      },
      medium: {
        padding: 15,
      },
    },
  },
});

Then you can declare which variant you want to use in the markup:

// button.ts
import { buttonStyles } from "./button.css";

document.getElementById("root").innerHTML = `<button class="${buttonStyles({ color: "normal", size: "medium" })}">Click me</button>`;

And vanilla-extract leverages TypeScript to give you autocomplete for your own variant names!

You can name your variants whatever you like, and put whatever properties you want in them, like so:

// button.css.ts
export const buttonStyles = recipe({
  variants: {
    animal: {
      dog: {
        backgroundImage: 'url("./dog.png")',
      },
      cat: {
        backgroundImage: 'url("./cat.png")',
      },
      rabbit: {
        backgroundImage: 'url("./rabbit.png")',
      },
    },
  },
});

You can see how this would be incredibly useful for building a design system, as you can create reusable components and control the ways they vary. These variations become easily discoverable with TypeScript — all you need to type is CMD/CTRL + Space (on most editors) and you get a dropdown list of the different ways to customize your component.

Utility-first with Sprinkles

Sprinkles is a utility-first framework built on top of vanilla-extract. This is how the vanilla-extract docs describe it:

Basically, it’s like building your own zero-runtime, type-safe version of Tailwind, Styled System, etc.

So if you’re not a fan of naming things (we all have nightmares of creating an outer-wrapper div, then realising we need to wrap it with an… outer-outer-wrapper), Sprinkles might be your preferred way to use vanilla-extract.

The Sprinkles API also needs to be separately installed:

npm install @vanilla-extract/sprinkles

Now we can create some building blocks for our utility functions to use. Let’s create a list of colors and lengths by declaring a couple of objects. The JavaScript key names can be whatever we want. The values will need to be valid CSS values for the CSS properties we plan to use them for:

// sprinkles.css.ts
const colors = {
  blue100: "hsl(210deg,70%,15%)",
  blue200: "hsl(210deg,60%,25%)",
  blue300: "hsl(210deg,55%,35%)",
  blue400: "hsl(210deg,50%,45%)",
  blue500: "hsl(210deg,45%,55%)",
  blue600: "hsl(210deg,50%,65%)",
  blue700: "hsl(207deg,55%,75%)",
  blue800: "hsl(205deg,60%,80%)",
  blue900: "hsl(203deg,70%,85%)",
};

const lengths = {
  small: "4px",
  medium: "8px",
  large: "16px",
  humungous: "64px"
};

We can declare which CSS properties these values are going to apply to by using the defineProperties function:

  • Pass it an object with a properties property.
  • In properties, we declare an object where the keys are the CSS properties the user can set (these need to be valid CSS properties) and the values are the objects we created earlier (our lists of colors and lengths).
// sprinkles.css.ts
import { defineProperties } from "@vanilla-extract/sprinkles";

const colors = {
  blue100: "hsl(210deg,70%,15%)"
  // etc.
}

const lengths = {
  small: "4px",
  // etc.
}

const properties = defineProperties({
  properties: {
    // The keys of this object need to be valid CSS properties
    // The values are the options we provide the user
    color: colors,
    backgroundColor: colors,
    padding: lengths,
  },
});

Then the final step is to pass the return value of defineProperties to the createSprinkles function, and export the returned value:

// sprinkles.css.ts
import { defineProperties, createSprinkles } from "@vanilla-extract/sprinkles";

const colors = {
  blue100: "hsl(210deg,70%,15%)"
  // etc.
}

const lengths = {
  small: "4px",
  // etc. 
}

const properties = defineProperties({
  properties: {
    color: colors,
    // etc. 
  },
});
export const sprinkles = createSprinkles(properties);

Then we can style our elements inline by calling the sprinkles function in the class attribute and choosing which options we want for each element.

// index.ts
import { sprinkles } from "./sprinkles.css";
document.getElementById("root").innerHTML = `<button class="${sprinkles({
  color: "blue200",
  backgroundColor: "blue800",
  padding: "large",
Click me</button>">
})}">Click me</button>`;

The JavaScript output holds a class name string for each style property. Each of these class names matches a single rule in the output CSS file.

<button class="src_color_blue200__ohrzop1 src_backgroundColor_blue800__ohrzopg src_padding_large__ohrzopk">Click me</button>

As you can see, this API allows you to style elements inside your markup using a set of pre-defined constraints. You also avoid the difficult task of coming up with names of classes for every element. The result is something that feels a lot like Tailwind, but also benefits from all the infrastructure that has been built around TypeScript.

The Sprinkles API also allows you to write conditions and shorthands to create responsive styles using utility classes.
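As a rough sketch of how conditions work (reusing the imports and the lengths object from earlier; check the vanilla-extract docs for the exact options), you define named conditions alongside your properties, then pass per-condition values when calling sprinkles:

// sprinkles.css.ts: a sketch of responsive conditions
const responsiveProperties = defineProperties({
  conditions: {
    mobile: {},
    desktop: { "@media": "screen and (min-width: 1024px)" },
  },
  defaultCondition: "mobile",
  properties: {
    padding: lengths, // the lengths object from earlier
  },
});
export const sprinkles = createSprinkles(responsiveProperties);

// index.ts: different padding at different breakpoints
sprinkles({ padding: { mobile: "small", desktop: "humungous" } });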

Wrapping up

vanilla-extract feels like a big new step in CSS tooling. A lot of thought has been put into building it into an intuitive, robust solution for styling that utilizes all of the power that static typing provides.

Further reading



Three Buggy React Code Examples and How to Fix Them

There’s usually more than one way to code a thing in React. And while it’s possible to create the same thing different ways, there may be one or two approaches that technically work “better” than others. I actually run into plenty of examples where the code used to build a React component is technically “correct” but opens up issues that are totally avoidable.

So, let’s look at some of those examples. I’m going to provide three instances of “buggy” React code that technically gets the job done for a particular situation, and ways it can be improved to be more maintainable, resilient, and ultimately functional.

This article assumes some knowledge of React hooks. It isn’t an introduction to hooks—you can find a good introduction from Kingsley Silas on CSS Tricks, or take a look at the React docs to get acquainted with them. We also won’t be looking at any of that exciting new stuff coming up in React 18. Instead, we’re going to look at some subtle problems that won’t completely break your application, but might creep into your codebase and can cause strange or unexpected behavior if you’re not careful.

Buggy code #1: Mutating state and props

It’s a big anti-pattern to mutate state or props in React. Don’t do this!

This is not a revolutionary piece of advice—it’s usually one of the first things you learn if you’re getting started with React. But you might think you can get away with it (because it seems like you can in some cases).

I’m going to show you how bugs might creep into your code if you’re mutating props. Sometimes you’ll want a component that will show a transformed version of some data. Let’s create a parent component that holds a count in state and a button that will increment it. We’ll also make a child component that receives the count via props and shows what the count would look like with 5 added to it.

Here’s a Pen that demonstrates a naïve approach:
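A rough sketch of what that naïve approach looks like (the actual Pen may differ in its details):

import { useState } from "react";

function Parent() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
      <Child count={count} />
    </div>
  );
}

// The child mutates its prop to show the transformed value
function Child({ count }) {
  count += 5; // mutating the prop: this only appears safe because count is a primitive
  return <p>count + 5 = {count}</p>;
}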

This example works. It does what we want it to do: we click the increment button and it adds one to the count. Then the child component is re-rendered to show what the count would look like with 5 added on. We changed the props in the child here and it works fine! Why has everybody been telling us mutating props is so bad?

Well, what if later we refactor the code and need to hold the count in an object? This might happen if we need to store more properties in the same useState hook as our codebase grows larger.

Instead of incrementing the number held in state, we increment the count property of an object held in state. In our child component, we receive the object through props and add to the count property to show what the count would look like if we added 5.

Let’s see how this goes. Try incrementing the state a few times in this pen:
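Here is a sketch of the refactored (and now buggy) version, with the count held in an object:

import { useState } from "react";

function Parent() {
  const [state, setState] = useState({ count: 0 });
  return (
    <div>
      <p>Count: {state.count}</p>
      <button onClick={() => setState({ ...state, count: state.count + 1 })}>
        Increment
      </button>
      <Child state={state} />
    </div>
  );
}

function Child({ state }) {
  state.count += 5; // mutating the very object held in the parent's state!
  return <p>count + 5 = {state.count}</p>;
}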

Oh no! Now when we increment the count it seems to add 6 on every click! Why is this happening? The only thing that changed between these two examples is that we used an object instead of a number!

More experienced JavaScript programmers will know that the big difference here is that primitive types such as numbers, booleans and strings are immutable and passed by value, whereas objects are passed by reference.

This means that:

  • If you put a number in a variable, assign another variable to it, then change the second variable, the first variable will not be changed.
  • If you put an object in a variable, assign another variable to it, then change the second variable, the first variable will get changed.

When the child component changes a property of the state object, it’s adding 5 to the same object React uses when updating the state. This means that when our increment function fires after a click, React uses the same object after it has been manipulated by our child component, which shows as adding 6 on every click.
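In plain JavaScript terms:

// Primitives are copied by value
let a = 1;
let b = a;
b += 5;
console.log(a); // 1: a is untouched

// Objects are passed by reference
const first = { count: 1 };
const second = first;
second.count += 5;
console.log(first.count); // 6: both variables point at the same object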

The solution

There are multiple ways to avoid these problems. For a situation as simple as this, you could avoid any mutation and express the change in a render function:

function Child({state}){
  return <div><p>count + 5 = {state.count + 5} </p></div>
}

However, in a more complicated case, you might need to reuse state.count + 5 multiple times or pass the transformed data to multiple children.

One way to do this is to create a copy of the prop in the child, then transform the properties on the cloned data. There are a couple of different ways to clone objects in JavaScript with various tradeoffs. You can use object literal and spread syntax:

function Child({state}){
const copy = {...state};
  return <div><p>count + 5 = {copy.count + 5} </p></div>
}

But if there are nested objects, they will still reference the old version. Instead, you could convert the object to JSON then immediately parse it:

JSON.parse(JSON.stringify(myobject))

This will work for most simple object types. But if your data uses more exotic types, you might want to use a library. A popular method would be to use lodash’s cloneDeep. Here’s a Pen that shows a fixed version using object literal and spread syntax to clone the object:

One more option is to use a library like Immutable.js. If you have a rule to only use immutable data structures, you’ll be able to trust that your data won’t get unexpectedly mutated. Here’s one more example using the immutable Map class to represent the state of the counter app:
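A minimal sketch of that idea, assuming the Immutable.js Map API:

import { useState } from "react";
import { Map } from "immutable";

function Parent() {
  // The counter state lives in an immutable Map instead of a plain object
  const [state, setState] = useState(Map({ count: 0 }));
  const increment = () =>
    // update() returns a brand new Map, so the old one can never be mutated
    setState((current) => current.update("count", (c) => c + 1));
  return (
    <div>
      <p>Count: {state.get("count")}</p>
      <button onClick={increment}>Increment</button>
      <Child state={state} />
    </div>
  );
}

function Child({ state }) {
  // Reading is safe; there is no way to accidentally mutate the parent's Map
  return <p>count + 5 = {state.get("count") + 5}</p>;
}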

Buggy code #2: Derived state

Let’s say we have a parent and a child component. They both have useState hooks holding a count. And let’s say the parent passes its state down as a prop to the child, which the child uses to initialize its count.

function Parent(){
  const [parentCount,setParentCount] = useState(0);
  return <div>
    <p>Parent count: {parentCount}</p>
    <button onClick={()=>setParentCount(c=>c+1)}>Increment Parent</button>
    <Child parentCount={parentCount}/>
  </div>;
}

function Child({parentCount}){
 const [childCount,setChildCount] = useState(parentCount);
  return <div>
    <p>Child count: {childCount}</p>
    <button onClick={()=>setChildCount(c=>c+1)}>Increment Child</button>
  </div>;
}

What happens to the child’s state when the parent’s state changes, and the child is re-rendered with different props? Will the child state remain the same or will it change to reflect the new count that was passed to it?

We’re dealing with a function, so the child state should get blown away and replaced, right? Wrong! The child’s state trumps the new prop from the parent. After the child component’s state is initialized in the first render, it’s completely independent from any props it receives.

React stores component state for each component in the tree and the state only gets blown away when the component is removed. Otherwise, the state won’t be affected by new props.

Using props to initialize state is called “derived state” and it is a bit of an anti-pattern. It removes the benefit of a component having a single source of truth for its data.

Using the key prop

But what if we have a collection of items we want to edit using the same type of child component, and we want the child to hold a draft of the item we’re editing? We’d need to reset the state of the child component each time we switch items from the collection.

Here’s an example: Let’s write an app where we can write a daily list of five things we’re thankful for. We’ll use a parent with state initialized as an empty array which we’re going to fill up with five string statements.

Then we’ll have a child component with a text input to enter our statement.

We’re about to use a criminal level of over-engineering in our tiny app, but it’s to illustrate a pattern you might need in a more complicated project: We’re going to hold the draft state of the text input in the child component.

Lowering the state to the child component can be a performance optimization to prevent the parent re-rendering when the input state changes. Otherwise the parent component will re-render every time there is a change in the text input.

We’ll also pass down an example statement as a default value for each of the five notes we’ll write.

Here’s a buggy way to do this:

// These are going to be our default values for each of the five notes
// To give the user an idea of what they might write
const ideaList = ["I'm thankful for my friends",
                  "I'm thankful for my family",
                  "I'm thankful for my health",
                  "I'm thankful for my hobbies",
                  "I'm thankful for CSS Tricks Articles"]

const maxStatements = 5;

function Parent(){
  const [list,setList] = useState([]);
  
  // Handler function for when the statement is completed
  // Sets state providing a new array combining the current list and the new item 
  function onStatementComplete(payload){
    setList(list=>[...list,payload]);
  }
  // Function to reset the list back to an empty array
   function reset(){
    setList([]);
  }
  return <div>
    <h1>Your thankful list</h1>
    <p>A five point list of things you're thankful for:</p>

    {/* First we list the statements that have been completed*/}
    {list.map((item,index)=>{return <p>Item {index+1}: {item}</p>})}

    {/* If the length of the list is under our max statements length, we render 
    the statement form for the user to enter a new statement.
    We grab an example statement from the idealist and pass down the onStatementComplete function.
    Note: This implementation won't work as expected*/}
    {list.length<maxStatements ? 
      <StatementForm initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/>
      :<button onClick={reset}>Reset</button>
    }
  </div>;
}

// Our child StatementForm component. This accepts the example statement for its initial state and the onStatementComplete function
function StatementForm({initialStatement,onStatementComplete}){
   // We hold the current state of the input, and set the default using initialStatement prop
 const [statement,setStatement] = useState(initialStatement);

  return <div>
    {/*On submit we prevent default and fire the onStatementComplete function received via props*/}
    <form onSubmit={(e)=>{e.preventDefault(); onStatementComplete(statement)}}>
    <label htmlFor="statement-input">What are you thankful for today?</label><br/>
    {/* Our controlled input below*/}
    <input id="statement-input" onChange={(e)=>setStatement(e.target.value)} value={statement} type="text"/>
    <input type="submit"/>
      </form>
  </div>
}

There’s a problem with this: each time we submit a completed statement, the input incorrectly holds onto the submitted note in the textbox. We want to replace it with an example statement from our list.

Even though we’re passing down a different example string every time, the child remembers the old state and our newer prop is ignored. You could potentially check whether the props have changed on every render in a useEffect, and then reset the state if they have. But that can cause bugs when different parts of your data use the same values and you want to force the child state to reset even though the prop remains the same.

The solution

If you need a child component where the parent needs the ability to reset the child on demand, there is a way to do it: it’s by changing the key prop on the child.

You might have seen this special key prop from when you’re rendering elements based on an array and React throws a warning asking you to provide a key for each element. Changing the key of a child element ensures React creates a brand new version of the element. It’s a way of telling React that you are rendering a conceptually different item using the same component.

Let’s add a key prop to our child component. The value is the index we’re about to fill with our statement:

<StatementForm key={list.length} initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/>

Here’s what this looks like in our list app:

Note the only thing that changed here is that the child component now has a key prop based on the array index we’re about to fill. Yet, the behavior of the component has completely changed.

Now each time we submit and finish writing our statement, the old state in the child component gets thrown away and replaced with the example statement.

Buggy code #3: Stale closure bugs

This is a common issue with React hooks. There’s previously been a CSS-Tricks article about dealing with stale props and states in React’s functional components.

Let’s take a look at a few situations where you might run into trouble. The first crops up when using useEffect. If we’re doing anything asynchronous inside of useEffect we can get into trouble using old state or props.

Here’s an example. We need to increment a count every second. We set it up on the first render with a useEffect, providing a closure that increments the count as the first argument, and an empty array as the second argument. We’ll give it the empty array as we don’t want React to restart the interval on every render.

function Counter() { 
  let [count, setCount] = useState(0);

  useEffect(() => {
    let id = setInterval(() => {
      setCount(count + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

Oh no! The count gets incremented to 1 but never changes after that! Why is this happening?

It’s to do with two things: how closures work in JavaScript, and the empty dependency array we passed to useEffect.

Having a look at the MDN docs on closures, we can see:

A closure is the combination of a function and the lexical environment within which that function was declared. This environment consists of any local variables that were in-scope at the time the closure was created.

The “lexical environment” in which our useEffect closure is declared is inside our Counter React component. The local variable we’re interested in is count, which is zero at the time of the declaration (the first render).

The problem is, this closure is never declared again. If the count is zero at the time of declaration, it will always be zero. Each time the interval fires, it’s running a function that starts with a count of zero and increments it to 1.

So how might we get the function declared again? This is where the second argument of the useEffect call comes in. We thought we were extremely clever only starting off the interval once by using the empty array, but in doing so we shot ourselves in the foot. If we had left out this argument, the closure inside useEffect would get declared again with a new count every time.

The way I like to think about it is that the useEffect dependency array does two things:

  • It will fire the useEffect function when the dependency changes.
  • It will also redeclare the closure with the updated dependency, keeping the closure safe from stale state or props.

In fact, there’s even a lint rule to keep your useEffect instances safe from stale state and props by making sure you add the right dependencies to the second argument.
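For instance, including count in the dependency array keeps the closure fresh, but at the cost of clearing and recreating the interval every time count changes:

import { useState, useEffect } from "react";

function Counter() {
  let [count, setCount] = useState(0);

  useEffect(() => {
    let id = setInterval(() => {
      setCount(count + 1); // count is now always the latest value
    }, 1000);
    return () => clearInterval(id);
  }, [count]); // the effect re-runs (and the interval restarts) whenever count changes

  return <h1>{count}</h1>;
}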

But we don’t actually want to reset our interval every time the component gets rendered either. How do we solve this problem then?

The solution

Again, there are multiple solutions to our problem here. Let’s start with the easiest: not using the count state at all and instead passing a function into our setCount call:

function Counter() { 
  let [count, setCount] = useState(0);

  useEffect(() => {
    let id = setInterval(() => {
      setCount(prevCount => prevCount + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

That was easy. Another option is to use the useRef hook like this to keep a mutable reference of the count:

function Counter() {
  let [count, setCount] = useState(0);
  const countRef = useRef(count)
  
  function updateCount(newCount){
    setCount(newCount);
    countRef.current = newCount;
  }

  useEffect(() => {
    let id = setInterval(() => {
      updateCount(countRef.current + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

ReactDOM.render(<Counter/>,document.getElementById("root"))

To go more in depth on using intervals and hooks you can take a look at this article about creating a useInterval in React by Dan Abramov, who is one of the React core team members. He takes a different route where, instead of holding the count in a ref, he places the entire closure in a ref.

To go more in depth on useEffect you can have a look at his post on useEffect.

More stale closure bugs

But stale closures won’t just appear in useEffect. They can also turn up in event handlers and other closures inside your React components. Let’s have a look at a React component with a stale event handler; we’ll create a scroll progress bar that does the following:

  • increases its width along the screen as the user scrolls
  • starts transparent and becomes more and more opaque as the user scrolls
  • provides the user with a button that randomizes the color of the scroll bar

We’re going to leave the progress bar outside of the React tree and update it in the event handler. Here’s our buggy implementation:

<body>
<div id="root"></div>
<div id="progress"></div>
</body>
function Scroller(){

  // We'll hold the scroll position in one state
  const [scrollPosition, setScrollPosition] = useState(window.scrollY);
  // And the current color in another
  const [color,setColor] = useState({r:200,g:100,b:100});
  
  // We assign our scroll listener on the first render
  useEffect(()=>{
   document.addEventListener("scroll",handleScroll);
    return ()=>{document.removeEventListener("scroll",handleScroll);}
  },[]);
  
  // A function to generate a random color. To make sure the contrast is strong enough
  // each value has a minimum value of 100
  function onColorChange(){
    setColor({r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155});
  }
  
  // This function gets called on the scroll event
  function handleScroll(e){
    // First we get the value of how far down we've scrolled
    const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop;
    // Now we grab the height of the entire document
    const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight;
     // And use these two values to figure out how far down the document we are
    const percentAlong =  (scrollDistance / documentHeight);
    // Grab the progress bar element and set its width
    const progress = document.getElementById("progress");
    progress.style.width = `${percentAlong*100}%`;
    // Here's where our bug is. Resetting the color here will mean the color will always 
    // be using the original state and never get updated
    progress.style.backgroundColor = `rgba(${color.r},${color.g},${color.b},${percentAlong})`;
    setScrollPosition(percentAlong);
  }
  
  return <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}>
    <button onClick={onColorChange}>Change color</button>
    <span class="percent">{Math.round(scrollPosition* 100)}%</span>
  </div>
}

ReactDOM.render(<Scroller/>,document.getElementById("root"))

Our bar gets wider and increasingly more opaque as the page scrolls. But if you click the change color button, our randomized colors are not affecting the progress bar. We’re getting this bug because the closure is affected by component state, and this closure is never being re-declared so we only get the original value of the state and no updates.

You can see how setting up closures that call external APIs using React state, or component props might give you grief if you’re not careful.

The solution

Again, there are multiple ways to fix this problem. We could keep the color state in a mutable ref which we could later use in our event handler:

const [color,setColor] = useState({r:200,g:100,b:100});
// Keep a mutable copy of the color that the scroll handler can always read
const colorRef = useRef(color);

function onColorChange(){
  const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155};
  setColor(newColor);
  colorRef.current = newColor;
  // Update the progress bar right away so the new color shows without waiting for a scroll
  const progress = document.getElementById("progress");
  progress.style.backgroundColor = `rgba(${newColor.r},${newColor.g},${newColor.b},${scrollPosition})`;
}
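The scroll handler then reads from colorRef.current instead of the color state, so even though the listener was only attached once, it always sees the latest color. Here’s a minimal sketch of that handler under this approach:

function handleScroll(e){
  const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop;
  const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight;
  const percentAlong = scrollDistance / documentHeight;
  const progress = document.getElementById("progress");
  // colorRef.current is mutable, so even this stale closure gets the newest color
  const current = colorRef.current;
  progress.style.width = `${percentAlong*100}%`;
  progress.style.backgroundColor = `rgba(${current.r},${current.g},${current.b},${percentAlong})`;
  setScrollPosition(percentAlong);
}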

This works well enough but it doesn’t feel ideal. You may need to write code like this if you’re dealing with third-party libraries and you can’t find a way to pull their API into your React tree. But by keeping one of our elements out of the React tree and updating it inside of our event handler, we’re swimming against the tide.

This is a simple fix though, as we’re only dealing with the DOM API. An easy way to refactor this is to include the progress bar in our React tree and render it in JSX, allowing it to reference the component’s state. Now we can use the event handling function purely for updating state.

function Scroller(){
  const [scrollPosition, setScrollPosition] = useState(window.scrollY);
  const [color,setColor] = useState({r:200,g:100,b:100});  

  useEffect(()=>{
   document.addEventListener("scroll",handleScroll);
    return ()=>{document.removeEventListener("scroll",handleScroll);}
  },[]);
  
  function onColorChange(){
    const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155};
    setColor(newColor);
  }

  function handleScroll(e){
    const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop;
    const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight;
    const percentAlong = (scrollDistance / documentHeight);
    setScrollPosition(percentAlong);
  }
  return <>
    <div className="progress" id="progress"
      style={{backgroundColor:`rgba(${color.r},${color.g},${color.b},${scrollPosition})`,width: `${scrollPosition*100}%`}}></div>
    <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}>
      <button onClick={onColorChange}>Change color</button>
      <span className="percent">{Math.round(scrollPosition * 100)}%</span>
    </div>
  </>
}

That feels better. Not only have we removed the chance for our event handler to go stale, we’ve also converted our progress bar into a self-contained component that takes advantage of the declarative nature of React.

Also, for a scroll indicator like this, you might not even need JavaScript — take a look at the up-and-coming @scroll-timeline CSS feature or an approach using a gradient from Chris’ book on the greatest CSS tricks!

Wrapping up

We’ve had a look at three different ways you can create bugs in your React applications and some ways to fix them. It can be easy to look at simple examples that follow a happy path and never run into the subtleties of these APIs that can cause problems.

If you still find yourself needing to build a stronger mental model of what your React code is doing, here’s a list of resources which can help:



Comparing the New Generation of Build Tools

A bunch of new developer tools have landed in the past year and they are biting at the heels of the tools that have dominated front-end development over the last few years, including webpack, Babel, Rollup, Parcel, and create-react-app.

These new tools aren’t designed to perform the exact same function, and each has different things they’re trying to achieve and features to get there. Despite their differences, these tools do share a common goal: improve the developer experience.


Specifically, I’d like to evaluate each one, outlining what they do, why we need them, and their use cases. I realize that comparisons aren’t always fair; it’s not like any of the tools we’re looking at in this article are direct competitors. In fact, Snowpack and Vite actually use esbuild under the hood for certain tasks. Our goal is more to get a better view of the landscape of developer tools that run tasks to make our jobs easier. This way, we see what options are out there and how they stack up, so we can make the best choices when we need them.

Of course, all of this will be colored by my experience using React and Preact. I’m more familiar with these libraries, but we’ll look at their support for other front-end frameworks too.

There have been a whole lot of great articles, streams and podcasts about these new developer tools. There are a couple of ShopTalk Show episodes I’d recommend for more context: Episode 454 discusses Vite and Episode 448 features the creators of wmr and Snowpack. Something that stands out from these episodes is that a huge amount of work has gone into building these tools to modernize our developer environments.

Why are these tools all arriving now?

In part, I think these tools are arriving as a reaction to JavaScript tooling fatigue — something captured nicely in this article about learning JavaScript back in 2016. They also fill a missing middle ground between writing a single vanilla JavaScript file, and having to download 200 megabytes of tooling dependencies before you’ve written a line of your own code. They come batteries-included without the dependency list, and are part of a trend of collapsing layers in the JavaScript ecosystem.

Snowpack, Vite, and wmr have all been enabled by native JavaScript modules in the browser. Back in 2018, Firefox 60 was released with ECMAScript 2015 modules enabled by default. Since then, all major browser engines have supported native JavaScript modules. Node.js also shipped with native JavaScript modules in November 2019. We’re still finding out what possibilities native JavaScript modules unlock today in 2021.

How are these different from existing tools?

Whether we use webpack, Rollup, or Parcel for a development server, the tool bundles our entire codebase from our source code and a node_modules folder, runs these through build processes — like Babel, TypeScript, or PostCSS — then pushes the bundled code to our browser. This all takes work, and can slow development servers to a crawl in larger codebases, even after all the work that’s gone into caching and optimizing.

Snowpack, Vite, and wmr development servers don’t follow this model. Instead, they wait until the browser finds an import statement and makes an HTTP request for the module. Only after this request is made will the tool apply transforms to the requested module and any leaf nodes in the module’s import tree, then serve these to the browser. This speeds things up a lot as there’s less work in the process of pushing to a dev server.
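As a rough sketch of that flow (the file names here are just illustrative), the entry point the browser loads is an ordinary module script, and each import it discovers triggers another request that the dev server only transforms when asked:

<!-- index.html: served as-is by the dev server -->
<script type="module" src="/src/main.jsx"></script>

// /src/main.jsx: compiled on demand, the moment the browser requests it.
// Bare imports like this one get rewritten by the dev server into a browser-loadable URL
// (a pre-bundled file or a CDN module, depending on the tool).
import React from "react";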

You’ll notice esbuild missing from this picture. It’s a bundler first and foremost. It doesn’t side-step bundling the way the other tools do. Instead, esbuild processes code extremely fast by avoiding expensive transformations, leveraging parallelization and using the Go language.

The experiment

I took one of the example apps from the React docs and rebuilt it with each tool covered in this article. The project I went with was Snap Shot by Yogita Verma. Here’s a link to the original repo, and a link to my repo with the four versions of Snap Shot, each using a different build tool. We’ll compare the output of each build step later. Rebuilding this app allowed me to test out the developer experience of pulling some pretty standard React dependencies into the tools, including React Router and axios.

Comparable features

Before we get into the specifics of each individual tool, they all support the following features out of the box (to varying degrees):

  • First-class support for native JavaScript modules
  • TypeScript compilation (but not type checking)
  • JSX
  • Plugin API for extensibility
  • A built-in development server
  • CSS bundling and support for CSS-in-JS libraries

All of these tools can compile TypeScript into JavaScript, but will do so even if there are type errors. For proper type checking you would need to install TypeScript and run tsc --noEmit over your source files, or alternatively, use editor plugins to watch for type errors.
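For instance, assuming TypeScript is installed as a dev dependency, a separate type-checking script in package.json could be as small as this:

// package.json
"scripts": {
  "typecheck": "tsc --noEmit"
}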

OK, let’s take a look at each tool.

esbuild

esbuild was created by Evan Wallace (CTO of Figma). Its main feature is that it provides a build step 10×-100× faster than Node-based bundlers (by their own benchmarks). It doesn’t provide many of the developer conveniences you might find in something like create-react-app. But there are more and more esbuild starters popping up that fill those gaps, including create-react-app-esbuild, estrella and Snowpack, which uses esbuild for its build step.

esbuild is very new. It hasn’t yet reached a 1.0 version and isn’t quite ready for production use — but it’s not far off. It gives you intuitive JavaScript and command line APIs with smart defaults.

Use cases

esbuild is a complete game-changer in the bundler world. It’s going to be most useful in large codebases where the speed difference between esbuild and node bundlers gets multiplied. When esbuild hits 1.0 it’s going to be very useful in big production sites, and will save teams a whole lot of time waiting for builds to complete. Unfortunately, big production sites will have to wait until esbuild becomes stable. In the meantime it’ll just be good to add some speed to your bundling in side projects.

esbuild’s lightning-fast speed will be a bonus for any kind of work that you’re doing. Less time spent waiting for builds to run is always going to be good for developer experience! That said, if you’re prototyping quick applications you might want to start with something more high level than esbuild — otherwise, you’ll need to spend some time pulling in dependencies and configuring your environment before you get the conveniences we expect in the JavaScript ecosystem. Also, if you want to minimize the size of your bundle as much as possible, you may want to use Rollup and Terser, which will produce slightly smaller bundle sizes.

Setup

I decided to start a React project in esbuild in a naïve way: npm installing esbuild, React and ReactDOM. I created a src/app.jsx file and a dist/index.html file. Then, I used the following command to compile the app into a dist/bundle.js file:

./node_modules/.bin/esbuild src/app.jsx --bundle --platform=browser --outfile=dist/bundle.js

When I hosted and opened index.html in the browser, I was met with the “white screen of death” and an “Uncaught ReferenceError: process is not defined” console error. Both the docs and the CLI explain exactly what you need to do to prevent this but it might be a bit of a “gotcha” for beginners, because it requires an extra argument when bundling React:

--define:process.env.NODE_ENV=\"production\"

Or, if you’re calling esbuild from an npm script, it’s written like this to escape the quotes:

--define:process.env.NODE_ENV=\\\"production\\\"

This define argument is needed for any library bundled for the browser that expects node environment variables. Vue 2.0 also expects these. You won’t have the same problem with Preact because it doesn’t expect any environment variables and ships ready for the browser by default.
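Putting the pieces together, the full bundling command for the React app from earlier ends up looking something like this:

./node_modules/.bin/esbuild src/app.jsx --bundle --platform=browser \
  --define:process.env.NODE_ENV=\"production\" --outfile=dist/bundle.js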

After I ran the command with the define argument, my “Hello world” React app was working perfectly. JSX works out of the box with .jsx files. That said, React needs to be manually imported and then the JSX is converted into React.createElement calls. However, there are ways to add auto imports in JSX and/or configure the JSX for Preact.
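For example, if you wanted the JSX transform to target Preact instead, esbuild’s JSX flags let you point it at Preact’s h and Fragment (this assumes your source files import those from preact):

./node_modules/.bin/esbuild src/app.jsx --bundle --platform=browser \
  --jsx-factory=h --jsx-fragment=Fragment --outfile=dist/bundle.js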

Usage

esbuild provides a --serve option for a development server. This bypasses the filesystem and serves modules straight from memory, ensuring that the browser doesn’t pull older versions of modules. However, it doesn’t include live/hot reloading, so you will find yourself refreshing the browser after saving, which isn’t an ideal experience.

I decided to use the newly-released watch feature. This tells esbuild to recompile code every time a source file is saved. But we still need a server to see our saved changes. We can pull in a development server package, such as Luke Jackson’s servor:

npm install servor --save-dev

Then we can use the esbuild JavaScript API to start a server and run esbuild’s watch mode at the same time. Let’s create a file at the root of our project called watch.js:

// watch.js
const esbuild = require("esbuild");
const servor = require("servor");

esbuild.build({
  // pass any options to esbuild here...
  entryPoints: ["src/app.jsx"],
  outdir: "dist",
  define: { "process.env.NODE_ENV": '"production"' },
  watch: true,
});

async function serve(){
  console.log("running server from: http://localhost:8080/");
  await servor({
    // pass any options to servor here...
    browser:true,
    root: "dist",
    port: 8080,
  });
}

serve();

Now run node watch.js in the command line. This gives us a nice dev server, though again, it doesn’t give us hot module replacement or fast refresh (i.e., your client-side state won’t be preserved). But this was enough for my testing needs.

Even though we’re rebundling our entire application every time we save a file, we’d need to have a pretty massive application before esbuild slows down. After I set up this tooling, I was getting instant feedback from changes. My computer uses an Intel i7 from 2012, so it certainly isn’t a top-of-the-line machine.

If you need a preconfigured version of esbuild with live reload and some React defaults, you can clone this repo.

Supported files

esbuild can import CSS in JavaScript if that’s your style. It will compile CSS into an output file with the same name as your main output JavaScript file. It can also bundle CSS @import statements by default. There is no support for CSS Modules, but there are plans for it.

There is a growing community of plugins for esbuild. For example, there are plugins available for Vue single file components, and Svelte components.

esbuild works with JSON files and can bundle them into JavaScript modules without any configuration.

It can also import images in JavaScript, with the option to either convert them into data URLs or copy them into an output folder. This behavior isn’t enabled by default, but you can add the following to your esbuild config object to enable either option:

loader: { '.png': 'dataurl' } // Converts to data url in JS bundle
loader: { '.png': 'file' } // Copies to output folder
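In the JavaScript API, that loader option sits alongside the other build options. For example, building on the earlier watch.js setup:

esbuild.build({
  entryPoints: ["src/app.jsx"],
  outdir: "dist",
  // ".png" files get inlined as data URLs; swap "dataurl" for "file" to copy them to dist instead
  loader: { ".png": "dataurl" },
});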

Code splitting appears to be a work in progress, but is mostly there in the ESM output format, and it does look like it is a priority for the project. It’s also worth mentioning that tree-shaking is built into esbuild by default and can’t be turned off.

Production build

Using the “minify” and “bundle” options in your esbuild command won’t create a bundle quite as small as a Rollup/Terser pipeline. This is because esbuild sacrifices some bundle size optimization to get through your code in as few passes as possible. However, the difference may be pretty negligible, and worth it for the increase in bundling speed, depending on your project. In my clone of the Snap Shot application, esbuild created a bundle of 177 KB, which isn’t a lot more than the 165 KB produced by Vite, which uses Rollup and Terser.
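For reference, a production command along these lines simply combines the earlier flags with --minify:

./node_modules/.bin/esbuild src/app.jsx --bundle --minify --platform=browser \
  --define:process.env.NODE_ENV=\"production\" --outfile=dist/bundle.js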

Overall

esbuild
  • Templates for multiple front end frameworks: ❌
  • Hot module replacement development server: ❌
  • Streaming imports: ❌
  • Preconfigured production build: ❌
  • Automatic PostCSS and preprocessor conversion: ❌
  • HTM transform: ❌
  • Rollup plugin support: ❌
  • Size on disk (default install): 7.34 MB

esbuild is an extremely powerful tool. But it might be difficult if you’re used to zero-config setups. If you need more, then you might want to take a look at the next tool, Snowpack, which uses esbuild.

Snowpack

Snowpack is a build tool by the creators of Skypack and Pika. It provides an awesome development server and was created with an “unbundled development” philosophy. To quote the documentation: “You should be able to use a bundler because you want to, and not because you need to.”

By default, Snowpack’s build step doesn’t bundle files into a single package but provides unbundled esmodules that run in the browser. esbuild is actually included in there as a dependency, but the idea is to use JavaScript modules and only bundle with esbuild when it’s needed.

Snowpack has some pretty slick documentation, including a list of guides for using it with JavaScript frameworks, and a bunch of templates for them. Some of the guides are still a work in progress, but others like the one for React are nice and clear. It also looks like Snowpack treats Svelte as a first-class citizen. I actually first heard about Snowpack from Rich Harris’s “Futuristic Web Development” talk at Svelte Summit 2020. That said, the upcoming Svelte meta-framework SvelteKit was supposed to be powered by Snowpack but has since switched to Vite (which we’ll review next).

Use cases

Snowpack is a good choice if you want to double down on unbundled deployment. You may be writing source code with a small number of modules. This would mean you’re not creating a big request waterfall with an unbundled build. If you don’t need the added complexity and technical debt of bundling, then Snowpack is a great choice. A good use case would be if you’re incrementally adopting a front-end framework into a server-rendered or static application. You’d be pulling in as little tooling as possible from the node ecosystem but you’d still be getting the benefits of declarative frontend frameworks.

Secondly, I’d argue that Snowpack is a great wrapper around esbuild. If you want to try out esbuild but also want a development server and pre-written templates for front-end frameworks, then you can’t go wrong with Snowpack. Enable esbuild in the build step of your Snowpack config and you’re good to go.

As things currently stand, I’d argue that Snowpack wouldn’t be the best replacement for a zero configuration tool like create-react-app because you’ll need to pull in plugins and configure them yourself if you have a big application and need a super-fancy optimized production-ready build step.

Setup

Let’s start a project with Snowpack by jumping into the command line:

mkdir snowpackproject
cd snowpackproject
npm init #fill with defaults 
npm install snowpack

Now, let’s add the following to package.json:

// package.json
"scripts": {
  "start": "snowpack dev",
  "build": "snowpack build"
},

Next, we’ll create a configuration file:

// Mac or Linux
touch snowpack.config.js
// Windows
new-item snowpack.config.js

I think the most magical part of Snowpack comes when setting one innocent-looking key-value pair in the configuration file. Paste this into the configuration file, for example:

// snowpack.config.js
module.exports = {
  packageOptions: {
    "source": "remote",
  }
};

source: remote enables something called streaming imports. Streaming imports enable Snowpack to bypass npm installation by converting bare imports (e.g., import React from 'react';) into CDN imports from Skypack.

Moving ahead, let’s make an index.html file:

<!--index.html-->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Snowpack streaming imports</title>
</head>
<body>
  <div id="root"></div>
  <!-- Note the type="module". This is important for JavaScript module imports. -->
  <script type="module" src="app.js"></script>
</body>
</html>

And, finally, we’ll add an app.jsx file:

// app.jsx
import React from 'react'
import ReactDOM from 'react-dom'
const App = ()=>{
  return <h1>Welcome to Snowpack streaming imports!</h1>
}
ReactDOM.render(<App />,document.getElementById('root'));

Note that we didn’t npm install React or ReactDOM at any stage. But if we start up the Snowpack developer server like this:

./node_modules/.bin/snowpack dev

…our app still works!

Instead of pulling from a node_modules folder, Snowpack pulls the npm package down from Skypack, a CDN that hosts the npm registry pre-optimized to work in the browser. Snowpack then serves it from a ./_snowpack/pkg URL.

Usage

This is a big step away from the Node/npm-based workflow. What we’re actually looking at is a new CDN/JavaScript module-based workflow.

If, however, we leave our app as it is and run a production build, Snowpack throws an error. This is because it needs to know which versions of React and ReactDOM to use when building. You can fix this by writing a snowpack.deps.json, which can be created automatically by running the following:

./node_modules/.bin/snowpack add react
./node_modules/.bin/snowpack add react-dom

That won’t download the package from npm, but it will record the version of the packages used for Snowpack builds.

One caveat is that we miss out on developer error messages, as Skypack will ship the production version of packages.

Even if we aren’t using streaming imports, the Snowpack development server bundles each dependency from node_modules into a single JavaScript file per dependency, converts those files to native JavaScript modules, then serves them to the browser. This means the browser can cache these scripts and only re-request them if they’ve changed. The development server automatically refreshes on save, but doesn’t preserve the client-side state. All dependencies from node seemed to work out of the box regardless of whether they were using legacy module formats or node APIs (such as the infamous process.env we had trouble with in esbuild).

Preserving client-side state in React requires react-refresh, which requires a few Babel packages of its own as dependencies. These aren’t included by default, but are available using the more maximal React template. The template pulls in react-refresh, Prettier, Chai, and React Testing Library, for a total Node dependency package weighing in at 80 MB:

npx create-snowpack-app my-react-project --template @snowpack/app-template-react

Supported files

JSX is supported, but again, only with .jsx files by default. Snowpack automatically detects whether React or Preact is being used, and decides accordingly which render function to use for the JSX transform. However, if we want to customize the JSX further than this, we’d need to pull in Babel via their plugin. There is also a Snowpack plugin available for Vue single file components and, of course, for Svelte components. Further, Snowpack compiles TypeScript, but for type checking we need the TypeScript plugin.

CSS can be imported into JavaScript and is tossed into the document <head> at runtime. CSS modules are also supported out of the box for scoping, as long as they have the .module.css extension.

Imported JSON files will be cast into a JavaScript module with an object as a default export. Snowpack supports images and copies them into the production folder. Going along with its unbundled philosophy, Snowpack does not include images as data URLs in the bundle.

Production build

The default snowpack build command basically copies the exact source file structure into an output folder. For files that compile to JavaScript (e.g. TypeScript, JSX, JSON, .vue, .svelte), it transforms each individual file into a separate browser-friendly JavaScript module.

This works fine, but isn’t great for production, as it could cause a big waterfall of requests if the source code is split into a lot of files. In the Snap Shot application I ended up with 184 KB of source files, which would then request another 105 KB of dependencies from Skypack, making for a pretty huge waterfall.

However, Snowpack pulls in esbuild as a dependency, and we can enable esbuild to bundle, minify and compile our code by adding an “optimize” object to the Snowpack config:

// snowpack.config.js
module.exports = {
  optimize: {
    bundle: true,
    minify: true,
    target: 'es2018',
  },
};

This runs the code through the optimization features provided by esbuild, so just adding these options gets us roughly the same build we had earlier with esbuild on its own.

Since esbuild hasn’t reached 1.0 yet, Snowpack recommends using either the webpack or Rollup plugin for production builds, both of which need to be configured.
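For example, after npm installing @snowpack/plugin-webpack, wiring it up is a matter of listing it in the config (a minimal sketch; the plugin takes its own options object):

// snowpack.config.js
module.exports = {
  plugins: [
    ["@snowpack/plugin-webpack", {}]
  ],
};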

Overall

Snowpack provides a lightweight developer experience with a full-featured development server, detailed documentation, and easy-to-install templates. You are left to decide whether you want to bundle your application and how you want to do so. If you want a tool that provides both a dev server and a more opinionated build step, you might want to take a look at Vite, the next tool on our list.

Snowpack
  • Templates for multiple front end frameworks: ✅
  • Hot module replacement development server: ✅ (when using templates)
  • Streaming imports: ✅
  • Preconfigured production build: ❌
  • Automatic PostCSS and preprocessor conversion: ❌
  • HTM transform: ❌
  • Rollup plugin support: ✅ (when using snowpack-plugin-rollup-bundle for the build step)
  • Size on disk (default install): 16 MB

Vite

Vite is developed by Vue creator (and Hades speedrunner) Evan You. Where esbuild concentrates on the build step and Snowpack concentrates on the development server, Vite provides both: a full development server and an optimized build command using Rollup.

Use cases

If you want a serious create-react-app or Vue CLI competitor, Vite is the closest one in the bunch because it comes with batteries-included features. The lightning-fast development server and zero-config optimized production build mean you can get from zero to production without any configuration. Vite is a tool that could be used for either a tiny side project or a big production application. A good use case for Vite would be any sizeable single page app.

Why wouldn’t you use Vite? Vite is an opinionated tool and you might disagree with its opinions. You might not want to use Rollup for your build (we’ve been talking about how fast esbuild is), or you might want your tooling to give you the full power of Babel, ESLint and the ecosystem of webpack loaders out of the box.

Also, if you want zero-config server-side rendering meta-frameworks, you’d be better off staying with webpack-based frameworks, like Nuxt.js and Next.js, until the story for Vite server-side rendering is more complete.

Setup

Vite has more opinionated defaults than esbuild and Snowpack. Its documentation is clear and detailed. We get full support for Vue with Evan being the creator and all, so Vite is a definite happy path for Vue developers. That said, Vite can be used with any front-end framework and even provides a list of templates to get you started.
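At the time of writing, scaffolding one of those templates looks something like this (check the Vite docs for the current command; npm 7 needs an extra double-dash before the flags):

npm init @vitejs/app my-react-app --template react
cd my-react-app
npm install
npm run dev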

Usage

Vite’s development server is pretty powerful. Vite pre-bundles all of a project’s dependencies together into a single native JavaScript module with esbuild, then serves it up with a heavily cached HTTP header. This means no time is wasted on compiling, serving or requesting imported dependencies after the first page load. Vite also provides clear error messaging, printing the exact block of code and the line numbers to troubleshoot. Again with Vite, I didn’t have any issues pulling in dependencies that used node APIs or legacy formats. They all seemed to be shimmed into a browser-acceptable esmodule.

Vite’s React and Vue templates both pull in plugins that enable hot module replacement. The Vue template pulls in a Vue plugin for single file components, and a Vue plugin for JSX. The React template pulls in the react-refresh plugin. Either way, both will give you hot module replacement and client-side state preservation. Sure, they add a few more dependencies, including Babel packages, but Babel isn’t actually necessary when using JSX in Vite. By default, JSX works the same way as esbuild — it converts to React.createElement. It won’t automatically import React, but its behavior can be configured.
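For instance, the automatic React import can be switched on through Vite’s esbuild options (a small sketch assuming a React project):

// vite.config.js
export default {
  esbuild: {
    // inject the React import into every JSX/TSX file so you don't have to
    jsxInject: `import React from 'react'`,
  },
};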

And while we’re at it, Vite doesn’t support streaming imports like Snowpack and wmr do. That means npm-installing dependencies as usual.

One cool thing is that Vite includes experimental support for server-side rendering. Pick your framework of choice and generate static HTML that ships directly to the client. At the moment, it looks like we need to construct this architecture on our own, but still, this looks like a good opportunity for meta-frameworks to be built on top of Vite. Evan You already has a work in progress called VitePress, a replacement for VuePress with the benefits of using Vite. And SvelteKit has also added Vite to its dependency list. It looks like CSS code splitting was part of the reason SvelteKit made the switch to Vite.

Supported files

For CSS, Vite provides the most features out of all of the tools that we are looking at. It supports bundling CSS imports as well as CSS modules. But we can also npm install PostCSS plugins and create a postcss.config.js file, and Vite will automatically start applying these transforms to CSS.

We can install and use CSS preprocessors — simply npm install the preprocessor and give the file the right extension (e.g., .scss) and Vite will start applying the corresponding preprocessor. And, as we said in the overview, Vite supports CSS code-splitting.
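As a sketch, assuming autoprefixer is installed as a dev dependency, the PostCSS config could be as small as this:

// postcss.config.js
module.exports = {
  plugins: [require("autoprefixer")],
};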

Image imports default to a public URL, but we’re also able to load them into the bundle as strings by using a ?raw parameter at the end of the URL string.
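For example (the file name here is made up):

import dogUrl from './dog.png';          // resolves to a URL you can drop straight into an <img src>
import dogContents from './dog.png?raw'; // pulls the raw file contents in as a string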

JSON files can be imported in the source and converted into an esmodule exporting a single object. We can also provide a named import and Vite will look in the root field of the JSON file to find the import and treeshake the rest.

Production build

Vite uses Rollup for a preconfigured production build with a bunch of optimizations. It intentionally provides a zero-config build which should be enough for most use cases.

The build comes with the Rollup features we expect: bundling, minification and tree shaking. But we also get extras, like code-splitting dynamic imports and something called “asynchronous chunk loading” which is a fancy way to say that if we request a JavaScript module that imports another module, the build will be pre-optimized to load both at the same time (asynchronously).

Running Vite’s default build with the Snap Shot app, I ended up with one 5 KB JavaScript file and one 160 KB JavaScript file (for a grand total of 165 KB), and all CSS in the project was automatically minified to a tiny 2.71 KB file.

Overall

The opinionated nature of Vite makes it a serious competitor with our current tooling. A lot of work has been done to make the developer experience really seamless and make production-ready builds out of the box.

Vite
  • Templates for multiple front end frameworks: ✅
  • Hot module replacement development server: ✅ (when using templates)
  • Streaming imports: ❌
  • Preconfigured production build: ✅
  • Automatic PostCSS and preprocessor conversion: ✅
  • HTM transform: ❌
  • Rollup plugin support: ✅
  • Size on disk (default install): 17.1 MB

wmr

Like Vite, wmr is another opinionated build tool that provides both a development server and a build step. It was built by the creator of Preact, Jason Miller, so it’s definitely a happy path for Preact developers. Jason Miller explained the thinking behind wmr when he appeared as a guest on the JS Party podcast:

Preact is tiny and it’s really good if you want to do a lightweight project. Where is our tooling for that? We have a webpack-based tool that’s used in production by a bunch of high profile sites, but that’s the heavyweight tool. Where’s the prototyping tool? That was the one hand. The other hand is myself and a bunch of others who happened to be on the Preact team; We had been kind of on the sidelines of the bundler ecosystem for a little while, prodding people, trying to get consensus on a direction that we can move in to further this idea of writing modern code and shipping modern code.

This tells us that wmr is all about writing and shipping modern code, enabling lighter tooling in a project.

You might be wondering what wmr stands for? Nothing! The names “Web Modules Runtime” and “Wet Module Replacement” were floated, but it’s a fake abbreviation, similar to npm.

wmr is built with the same ruthless bundle size purging as Preact, so it’s tiny — weighing a mere 2.6 MB — and contains exactly zero npm dependencies. Still, it manages to pack in a whole lot of really awesome features including a hot-module-replacing development server and an optimized production build.

Use cases

I would use wmr if I was looking to create a prototype using Preact as fast as possible. There’s no configuration and it only takes seconds to download. It feels like using a supercharged static file server. With TypeScript, an optimized build step, and static HTML rendering, wmr offers everything needed to ship small-to-medium sized applications. Its small size is also great for quickly trying out a library or demoing an idea.

wmr may not be the tool for you if you’re not using Preact, React or vanilla JavaScript. The Preact team has yet to provide templates for other frameworks. The documentation also isn’t as detailed as the other tools we’ve looked at. This means the further you stray from the happy path, the more you’ll dig into the source. So, I can’t recommend it if a lot of customization is needed.

Setup

If you’re using Preact, there is absolutely zero setup needed except for a quick npm install. To use React with wmr instead of Preact, there are currently two steps to take. First, alias htm/preact to htm/react, and react to es-react in your package.json:

"alias": {
  "htm/preact": "htm/react",
  "react": "es-react"
},

Then add imports from es-react into your components:

// ReactDOM only needed on root render
import { React, ReactDOM,} from 'es-react';

This means we aren’t actually using the normal React package you might be used to, but instead pulling in React from es-react. This is because wmr relies on packages being compatible with native JavaScript modules. React does not use native modules by default, instead using an older style of module called UMD modules. es-react is a package that pulls in React but provides exports that are compatible with the web platform.

This illustrates wmr’s philosophy of using native primitives of the web platform as opposed to using tooling to sidestep and abstract it away.

Another option could be to use Skypack imports in our application, which is also pre-optimized to work in the browser:

import React from 'https://cdn.skypack.dev/react';
import ReactDOM from 'https://cdn.skypack.dev/react-dom';

wmr expects that you’re writing modern code which runs in the browser, which may mean that you’ll need to do some configuration if you pull in dependencies that use node APIs or legacy module systems. To get the Snap Shot app working I needed to dive into node modules and convert a library or two to use native JavaScript module syntax. This might slow you down if you are using older libraries. The Preact ecosystem is all optimized to run in the browser and shouldn’t require any massaging. This is another reason to stick with the Preact happy path in wmr.

There are plugins for wmr. It exposes a plugin API that supports Rollup plugins for the build step. There are more and more wmr-specific examples in the docs, including a plugin that minifies HTML, and one that provides filesystem-based routing.

wmr supports different frameworks, but there aren’t any pre-built templates for them. And at first I found it rather difficult to configure the JSX transform. All that said, Jason has confirmed there are plans to make JSX more configurable and that wmr is intended to be framework-agnostic. JSX is planned to work out of the box in regular JavaScript files.

Usage

To get started you can either run this command in the command line:

npm init wmr your-project-name

Or alternatively, you can run these commands to manually build up your application:

npm init -y
npm install wmr
mkdir public
touch public/index.html
touch public/index.js 

Then add a script import in the body of your index.html (again, make sure to use type="module"):

<script type="module" src="./index.js"></script>  

Now you can write a Preact hello world into your index.js file:

import { render } from 'preact';
render(<h1>Hello World!</h1>, document.body);

And finally start your development server:

node_modules/.bin/wmr

Now we have a full hot module replacement development server which will immediately respond to any changes to our source-code.

wmr uses a tool called htm when transforming JSX, which provides some awesome benefits. Let’s say we’re writing a counter using Preact in wmr and make a mistake:

import { render } from 'preact';
import { useState } from 'preact/hooks';
function App() {
  const [count,setCount] = useState(0)
  return <>
  <button onClick={()=>{setCount(cout+5)}}>Click to add 5 to count</button>
  <p>count: {count}</p>
  </>
}
render(<App />, document.body);

count is misspelled in the onClick handler function, so running this results in an error. Usually, we would have to rely on our tooling and a source map to gather information about where the bug is located, but wmr takes a different approach. With htm, this gets as close as you can get to native JSX in the browser by using tagged template literals. So, where writing React or Preact code usually looks like this:

<MyComponent>I am JSX. I am not actually valid Javascript</MyComponent>

…htm looks more like this:

html`<${MyComponent}>I am about as close to JSX as you can get while still being able to run in the browser<//>`

Now, if we’re debugging our code and open the “Sources” panel in DevTools, we should see a script that’s almost identical to what the source code looks like in the editor.

Image of the source code.

This way, we can properly investigate where bugs are located in the browser without having to use sourcemaps. Sure, this specific example is pretty contrived, but you can see how this could be really useful because it means wmr doesn’t need source maps in your developer environment.

wmr supports streaming imports by default, so bare imports will be pulled down from the npm registry. This is done through a complex process which examines all of the source in the npm package, removes all the tests and metadata, and converts it to a single native JavaScript import. Similar to Snowpack, it’s possible to build a complex app without using npm to install anything. In fact, wmr is the first tool to support this idea.

Supported files

As far as the other types of files wmr supports, CSS files can be imported in JavaScript, and CSS modules are supported as well.

There isn’t any built-in support for either Vue single file components or Svelte components. However, wmr’s build step works with Rollup plugins and the development server can be configured with Polka/Express middleware, so it’s possible to use these to convert imports into Vue and Svelte components. In fact, I wrote a small plugin for Vue Single file Components to show how this might be done.

We can’t import images into JavaScript as data URLs in wmr without a plugin. Instead, we need to import them using a syntactically correct JavaScript method. So, if we have, say, a picture of a dog in our public folder we might include it in a Preact component like this:

function Dog() {
  return <img src={new URL('./dog.jpg', import.meta.url)} alt="dog hanging out"></img>
}

And once the build step runs, the image is copied and accessible from the distribution folder. There is hot module replacement for images in the development server, so changes with images are immediately reflected in the browser.

One more note on file support: JSON can be imported, and is converted into a JavaScript object for use. But when actually building an application, we’d need the Rollup JSON plugin.

Production build

wmr provides a production build step that includes bundling, minification and tree-shaking without any additional dependencies. Having a look at the source of wmr, it looks like Rollup and Terser are used under the hood, and minified versions of these are included in the wmr package. The wmr bundle for the Snap Shot app came to 164 KB, just a tiny bit smaller than the total size of the two JavaScript files created by Vite.

There is also a way to configure wmr in such a way that it renders an application to static HTML and hydrates it in the browser using preact-iso. This means wmr can be used as a meta-framework for Preact, similar to Next.js.

Overall

I love the experience of using wmr to prototype both React and Preact applications. It feels great to get started with a tool that is ridiculously small but provides developer conveniences that get close to matching heavyweight bundlers.

wmr
  • Templates for multiple front end frameworks: ❌
  • Hot module replacement development server: ✅
  • Streaming imports: ✅
  • Preconfigured production build: ✅
  • Automatic PostCSS and preprocessor conversion: ❌
  • HTM transform: ✅
  • Rollup plugin support: ✅
  • Size on disk (default install): 2.57 MB

Feature comparison

We just covered a lot of ground! Rather than scroll up and down this article to compare results, I’ve compiled everything here to see how the tools stack up when they’re side-by-side. I’ve even included additional comparisons for features that we didn’t explicitly cover.

Use cases

  • esbuild: Large codebases. Not ready for production yet.
  • Snowpack: Small applications that don’t need bundling, or applications where you want to choose which bundler you use. Also good for incrementally adopting JavaScript frameworks in server-rendered apps.
  • Vite: Replacement for Vue CLI/Create-React-App for producing single page applications. This is the happy path for Vue.
  • wmr: Prototypes. Good for small- to medium-size apps and can be used for either single page or server-rendered apps. This is the happy path for Preact.

Setup

  • Templates for multiple front end frameworks: esbuild ❌, Snowpack ✅, Vite ✅, wmr ❌
  • Size on disk (default install): esbuild 7.34 MB, Snowpack 16 MB, Vite 17.1 MB, wmr 2.57 MB
  • Zero-config production build: esbuild ❌, Snowpack ❌, Vite ✅, wmr ✅
  • HMR development server with zero config: esbuild ❌, Snowpack ❌ (✅ with templates), Vite ❌ (✅ with templates), wmr ✅
  • process.env handling for node packages: esbuild ❌, Snowpack ✅, Vite ✅, wmr ❌

Development server

  • Hot module replacement: esbuild ❌, Snowpack ✅, Vite ✅, wmr ✅
  • CSS hot replacement: esbuild ❌, Snowpack ✅, Vite ✅, wmr ✅
  • npm dependency pre-bundling: esbuild ❌, Snowpack ✅, Vite ✅, wmr ✅
  • Browser error messaging: esbuild ❌, Snowpack –, Vite ✅, wmr –
  • HTM transform: esbuild ❌, Snowpack ❌, Vite ❌, wmr ✅

Production build

  • Output bundle size of the Snap Shot app: esbuild 177 KB, Snowpack 184 KB of multiple JavaScript files plus 105 KB of Skypack CDN dependencies, Vite 165 KB (one 5 KB file and one 160 KB file), wmr 164 KB
  • Go-based bundling: esbuild ✅, Snowpack ✅ (when esbuild is used in the build step), Vite ❌, wmr ❌
  • Preconfigured production build: esbuild ❌, Snowpack ❌, Vite ✅, wmr ✅
  • Asynchronous chunk loading: esbuild ❌, Snowpack –, Vite ✅, wmr –
  • Rollup plugin support: esbuild ❌, Snowpack ✅, Vite ✅, wmr ✅

Other features

  • Streaming imports: esbuild ❌, Snowpack ✅, Vite ❌, wmr ✅
  • Server-side rendering: esbuild ❌, Snowpack –, Vite ✅ (experimental), wmr ✅
  • CSS Modules: esbuild ❌, Snowpack ✅, Vite ✅, wmr ✅
  • Automatic PostCSS and preprocessor conversion: esbuild ❌, Snowpack ❌, Vite ✅, wmr ❌

Wrapping up

I’m super excited about building JavaScript applications with any and all of the tools we just looked at. Whether we’re writing a small side project or a big production site, all these tools speed up feedback loops and increase productivity. They’ve opened up the gates to ask what’s necessary in the JavaScript ecosystem and whether we can start losing the cruft brought by legacy modules and browsers. These tools are going to lower the barrier-to-entry for new developers by providing a leaner, faster developer environment with fewer abstractions between the code that gets written and the code that runs in the browser.

If you’re getting sick of waiting for your dependencies to download and your build steps to run, I’d recommend giving this new generation of tooling a try.

Further reading

Other new JavaScript tooling to check out

  • Rome – A full toolchain including linting, compiling, bundling, test-running and formatting
  • SWC – A Rust-based JavaScript/TypeScript compiler
  • Deno – A runtime for JavaScript and TypeScript (similar to Node.js)


Lightweight Form Validation with Alpine.js and Iodine.js

Many users these days expect instant feedback in form validation. How do you achieve this level of interactivity when you’re building a small static site or a server-rendered Rails or Laravel app? Alpine.js and Iodine.js are two minimal JavaScript libraries we can use to create highly interactive forms with little technical debt and a negligible hit to our page-load time. Libraries like these prevent you from having to pull in build-step heavy JavaScript tooling which can complicate your architecture.

I’m going to iterate through a few versions of form validation to explain the APIs of these two libraries. If you want to copy and paste the finished product, here’s what we’re going to build. Try playing around with missing or invalid inputs and see how the form reacts:

A quick look at the libraries

Before we really dig in, it’s a good idea to get acquainted with the tooling we’re using.

Alpine is designed to be pulled into your project from a CDN. No build step, no bundler config, and no dependencies. It only needs a short GitHub README for its documentation. At only 8.36 kilobytes minified and gzipped, it’s about a fifth of the size of a create-react-app hello world. Hugo Di Francesco offers a complete and thorough overview of what it is and how it works. His initial description of it is pretty great:

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Iodine, on the other hand, is a micro form validation library, created by Matt Kingshott, who works in the Laravel/Vue/Tailwind world. Iodine can be used with any front-end framework as a form validation helper. It allows us to validate a single piece of data with multiple rules. Iodine also returns sensible error messages when validation fails. You can read more in Matt’s blog post explaining the reasoning behind Iodine.

A quick look at how Iodine works

Here’s a very basic client side form validation using Iodine. We’ll write some vanilla JavaScript to listen for when the form is submitted, then use DOM methods to map through the inputs to check each of the input values. If it’s incorrect, we’ll add an “invalid” class to the invalid inputs and prevent the form from submitting.

We’ll pull in Iodine from this CDN link for this example:

<script src="https://cdn.jsdelivr.net/gh/mattkingshott/iodine@3/dist/iodine.min.js" defer></script>

Or we can import it into a project with Skypack:

import kingshottIodine from "https://cdn.skypack.dev/@kingshott/iodine";

We need to import kingshottIodine when importing Iodine from Skypack. This still adds Iodine to our global/window scope. In your user code, you can continue to refer to the library as Iodine, but make sure to import kingshottIodine if you’re grabbing it from Skypack.

To check each input, we call the is method on Iodine. We pass the value of the input as the first parameter, and an array of strings as the second parameter. These strings are the rules the input needs to follow to be valid. A list of built-in rules can be found in the Iodine documentation.

Iodine’s is method either returns true if the value is valid, or a string that indicates the failed rule if the check fails. This means we’ll need to use a strict comparison when reacting to the output of the function; otherwise, JavaScript treats any non-empty string as truthy. What we can do is store an array of strings for the rules for each input as JSON in HTML data attributes. This isn’t built into either Alpine or Iodine, but I find it a nice way to co-locate inputs with their constraints. Note that if you do this, you’ll need to surround the JSON with single quotes and use double quotes inside the attribute to follow the JSON spec.

Here’s how this looks in our HTML:

<input name="email" type="email" id="email" data-rules='["required","email"]'>

When we’re mapping through the DOM to check the validity of each input, we call the Iodine function with the element’s input value, then the JSON.parse() result of the input’s dataset.rules. This is what this looks like using vanilla JavaScript DOM methods:

let form = document.getElementById("form");

// This is a nice way of getting a list of checkable input elements
// And converting them into an array so we can use map/filter/reduce functions:
let inputs = [...form.querySelectorAll("input[data-rules]")];

function onSubmit(event) {
  inputs.map((input) => {
    if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) {
      event.preventDefault();
      input.classList.add("invalid");
    }
  });
}
form.addEventListener("submit", onSubmit);

Here’s what this very basic implementation looks like:

As you can tell this is not a great user experience. Most importantly, we aren’t telling the user what is wrong with the submission. The user also has to wait until the form is submitted before finding out anything is wrong. And frustratingly, all of the inputs keep the “invalid” class even after the user has corrected them to follow our validation rules.

This is where Alpine comes into play

Let’s pull it in and use it to provide nice user feedback while interacting with the form.

A good option for form validation is to validate an input when it’s blurred or on any changes after it has been blurred. This makes sure we’re not yelling at the user before they’ve finished writing, but still gives them instant feedback if they leave an invalid input or go back and correct an input value.

We’ll pull Alpine in from the CDN:

<script src="https://cdn.jsdelivr.net/gh/alpinejs/alpine@v2.7.3/dist/alpine.min.js" defer></script>

Or we can import it into a project with Skypack:

import alpinejs from "https://cdn.skypack.dev/alpinejs";

Now there are only two pieces of state we need to hold for each input:

  • Whether the input has been blurred
  • The error message (the absence of this will mean we have a valid input)

The validation that we show in the form is going to be a function of these two pieces of state.

Alpine lets us hold this state in a component by declaring a plain JavaScript object in an x-data attribute on a parent element. This state can be accessed and mutated by its children elements to create interactivity. To keep our HTML clean, we can declare a JavaScript function that returns all the data and/or functions the form would need. Alpine will look for this function in the global/window scope of our JavaScript code if we add it to the x-data attribute. This also provides a reusable way to share logic, as we can use the same function in multiple components or even multiple projects.

Let’s initialize the form data to hold objects for each input field with two properties: an empty string for the errorMessage and a boolean called blurred. We’ll use the name attribute of each element as their keys.


<form id="form" x-data="form()" action="">
  <h1>Log In</h1>

  <label for="username">Username</label>
  <input name="username" id="username" type="text" data-rules='["required"]'>

  <label for="email">Email</label>
  <input name="email" type="email" id="email" data-rules='["required","email"]'>

  <label for="password">Password</label>
  <input name="password" type="password" id="password" data-rules='["required","minimum:8"]'>

  <label for="passwordConf">Confirm Password</label>
  <input name="passwordConf" type="password" id="passwordConf" data-rules='["required","minimum:8"]'>

  <input type="submit">
</form>

And here’s our function to set up the data. Note that the keys match the name attribute of our inputs:

window.form = () => { 
  return {
    username: {errorMessage:'', blurred:false},
    email: {errorMessage:'', blurred:false},
    password: {errorMessage:'', blurred:false},
    passwordConf: {errorMessage:'', blurred:false},
  }
}

Now we can use Alpine’s x-bind:class attribute on our inputs to add the “invalid” class if the input has blurred and a message exists for the element in our component data. Here’s how this looks in our username input:

<input name="username" id="username" type="text" 
x-bind:class="{'invalid':username.errorMessage && username.blurred}" data-rules='["required"]'>

Responding to input changes

Now we need our form to respond to input changes and to inputs being blurred. We can do this by adding event listeners. Alpine gives us a concise API for this: we can either use x-on or, similar to Vue, the @ symbol shorthand. Both ways of declaring the listeners act the same way.

On the input event we need to change the errorMessage in the component data to an error message if the value is invalid; otherwise, we’ll make it an empty string.

On the blur event we need to set the blurred property as true on the object with a key matching the name of the blurred element. We also need to recalculate the error message to make sure it doesn’t use the blank string we initialized as the error message.

So we’re going to add two more functions to our form to react to blurring and input changes, and use the name value of the event target to find what part of our component data to change. We can declare these functions as properties in the object returned by the form() function.

Here’s our HTML for the username input with the event listeners attached:

<input 
  name="username" id="username" type="text"
  x-bind:class="{'invalid':username.errorMessage && username.blurred}" 
  @blur="blur" @input="input"
  data-rules='["required"]'
>

And our JavaScript with the functions responding to the event listeners:

window.form = () => {
  return {
    username: {errorMessage:'', blurred:false},
    email: {errorMessage:'', blurred:false},
    password:{ errorMessage:'', blurred:false},
    passwordConf: {errorMessage:'', blurred:false},
    blur: function(event) {
      let ele = event.target;
      this[ele.name].blurred = true;
      let rules = JSON.parse(ele.dataset.rules)
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    input: function(event) {
      let ele = event.target;
      let rules = JSON.parse(ele.dataset.rules)
      this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
    },
    getErrorMessage: function() {
    // to be completed
    }
  }
}

Getting and showing errors

Next up, we need to write our getErrorMessage function.

If the Iodine check returns true, we’ll set the errorMessage property to an empty string. Otherwise, we’ll pass the rule that was broken to another Iodine method: getErrorMessage. This will return a human-readable message. Here’s what this looks like:

getErrorMessage:function(value, rules){
  let isValid = Iodine.is(value, rules);
  if (isValid !== true) {
    return Iodine.getErrorMessage(isValid);
  }
  return '';
}

Now we also need to show our error messages to the user.

Let’s add <p> tags with an error-message class below each input. We can use another Alpine attribute called x-show on these elements to only show them when their error message exists. The x-show attribute causes Alpine to toggle display: none; on the element based on whether a JavaScript expression resolves to true. We can use the same expression we used in the “invalid” class binding on the input.

To display the text, we can connect our error message with x-text. This will automatically bind the innertext to a JavaScript expression where we can use our component state. Here’s what this looks like:

<p x-show="username.errorMessage && username.blurred" x-text="username.errorMessage" class="error-message"></p>

One last thing we can do is re-use the onsubmit code from before we pulled in Alpine, but this time we can add the event listener to the form element with @submit and use a submit function in our component data. Alpine lets us use $el to refer to the parent element holding our component state. This means we don’t have to write lengthier DOM methods:

<form id="form" x-data="form()" @submit="submit" action="">
  <!-- inputs...  -->
</form>
submit: function (event) {
  let inputs = [...this.$el.querySelectorAll("input[data-rules]")];
  inputs.map((input) => {
    if (Iodine.is(input.value, JSON.parse(input.dataset.rules)) !== true) {
      event.preventDefault();
    }
  });
}

This is getting there:

  • We have real-time feedback when the input is corrected.
  • Our form tells the user about any issues before they submit the form, and only after they’ve blurred the inputs.
  • Our form does not submit when there are invalid properties.

Validating on the client side of a server-side rendered app

There are still some problems with this version, though some won’t be immediately obvious in the Pen as they’re related to the server. For example, it’s difficult to validate all errors on the client side in a server-side rendered app. What if the email address is already in use? Or a complicated database record needs to be checked? Our form needs to have a way to show errors found on the server. There are ways to do this with AJAX, but we’ll look at a more lightweight solution.

We can store the server side errors in another JSON array data attribute on each input. Most back-end frameworks will provide a reasonably easy way to do this. We can use another Alpine attribute called x-init to run a function when the component initializes. In this function we can pull the server-side errors from the DOM into each input’s component data. Then we can update the getErrorMessage function to check whether there are server errors and return these first. If none exist, then we can check for client-side errors.

<input name="username" id="username" type="text" 
x-bind:class="{'invalid':username.errorMessage && username.blurred}" 
@blur="blur" @input="input" data-rules='["required"]' 
data-server-errors='["Username already in use"]'>

And to make sure the server-side errors don’t show the whole time, even after the user starts correcting them, we’ll replace them with an empty array whenever their input gets changed.
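
One way this might look is a small tweak to the input handler from earlier (a sketch only; the rest of the handler stays the same):

input: function(event) {
  let ele = event.target;
  // The user is correcting this field, so drop any stale server-side errors
  this[ele.name].serverErrors = [];
  let rules = JSON.parse(ele.dataset.rules);
  this[ele.name].errorMessage = this.getErrorMessage(ele.value, rules);
},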

Here’s what our init function looks like now:

init: function () {
  this.inputElements = [...this.$el.querySelectorAll("input[data-rules]")];
  this.initDomData();
},
initDomData: function () {
  this.inputElements.map((ele) => {
    this[ele.name] = {
      serverErrors: JSON.parse(ele.dataset.serverErrors),
      blurred: false
    };
  });
}

Handling interdependent inputs

Some of the form inputs may depend on others for their validity. For example, a password confirmation input depends on the password it is confirming, and a “date you started your job” field would need to hold a date later than the one in your date-of-birth field. This means it’s a good idea to check all the inputs of the form every time an input gets changed.

We can map through all of the input elements and set their state on every input and blur event. This way, we know that inputs that rely on each other will not be using stale data.
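
As a rough sketch, both handlers could delegate to a helper that re-checks every tracked input (we’ll write a function along these lines, updateErrorMessages, in the finishing touches below):

blur: function(event) {
  // Mark the blurred input, then re-validate every input in the form
  this[event.target.name].blurred = true;
  this.updateErrorMessages();
},
input: function(event) {
  // Clear stale server errors for the edited field, then re-validate everything
  this[event.target.name].serverErrors = [];
  this.updateErrorMessages();
},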

To test this out, let’s add a matchingPassword rule for our password confirmation. Iodine lets us add new custom rules with an addRule method.

Iodine.addRule(
  "matchingPassword",
  value => value === document.getElementById("password").value
);

Now we can set a custom error message by adding a key to the messages property in Iodine:

Iodine.messages.matchingPassword="Password confirmation needs to match password";

We can add both of these calls in our init function to set up this rule.
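
For context, here’s a sketch of the init function from earlier with both calls added (the exact placement is up to you):

init: function () {
  // Register the custom rule and its message before anything gets validated
  Iodine.addRule(
    "matchingPassword",
    value => value === document.getElementById("password").value
  );
  Iodine.messages.matchingPassword = "Password confirmation needs to match password";

  this.inputElements = [...this.$el.querySelectorAll("input[data-rules]")];
  this.initDomData();
},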

In our previous implementation, we could have changed the “password” field and it wouldn’t have made the “password confirmation” field invalid. But now that we’re mapping through all the inputs on every change, our form will always make sure the password and the password confirmation match.

Some finishing touches

One little refactor we can do is to make the getErrorMessage function only return a message if the input has been blurred — this can make our HTML slightly shorter by only needing to check one value before deciding whether to invalidate an input. This means our x-bind attribute can be as short as this:

x-bind:class="{'invalid':username.errorMessage}"

Here’s what our functions look like to map through the inputs and set the errorMessage data now:

updateErrorMessages: function () {
  // Map through the input elements and set the 'errorMessage'
  this.inputElements.map((ele) => {
    this[ele.name].errorMessage = this.getErrorMessage(ele);
  });
},
getErrorMessage: function (ele) {
  // Return any server errors if they're present
  if (this[ele.name].serverErrors.length > 0) {
    return this[ele.name].serverErrors[0];
  }
  // Check using Iodine and return the error message only if the element has not been blurred
  const error = Iodine.is(ele.value, JSON.parse(ele.dataset.rules));
  if (error !== true && this[ele.name].blurred) {
    return Iodine.getErrorMessage(error);
  }
  // Return empty string if there are no errors
  return "";
},

We can also remove the @blur and @input events from all of our inputs by listening for these events on the parent form element. However, there is a problem with this: the blur event does not bubble, so a parent element listening for it won’t receive it when it fires on a child. Luckily, we can replace blur with the focusout event, which is essentially the same event but does bubble, so we can listen for it on the parent form element.
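
Here’s a sketch of what that delegation could look like (the attribute values match the handlers we already have, and x-init calls the init function from the previous section):

<form id="form" x-data="form()" x-init="init()" @focusout="blur" @input="input" @submit="submit" action="">
  <!-- inputs keep their data-rules and data-server-errors attributes,
       but no longer need individual @blur and @input listeners -->
</form>

Because the handlers read event.target, they keep working unchanged; it’s worth guarding against events fired by elements we aren’t tracking, such as the submit button.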

Finally, our code is growing a lot of boilerplate. If we were to change any input names we would have to rewrite the data in our function every time and add new event listeners. To prevent rewriting the component data every time, we can map through the form’s inputs that have a data-rules attribute to generate our initial component data in the init function. This makes the code more reusable for additional forms. All we’d need to do is include the JavaScript and add the rules as a data attribute and we’re good to go.
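
A sketch of how that generated state might look, building on the initDomData function from before (the property names mirror the ones we’ve been using):

initDomData: function () {
  this.inputElements.map((ele) => {
    // Build each input's state from its attributes instead of hard-coding it
    this[ele.name] = {
      serverErrors: ele.dataset.serverErrors ? JSON.parse(ele.dataset.serverErrors) : [],
      blurred: false,
      errorMessage: ""
    };
  });
},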

Oh, and hey, just because it’s so easy to do with Alpine, let’s add a fade-in transition that brings attention to the error messaging:

<p class="error-message" x-show.transition.in="username.errorMessage" x-text="username.errorMessage"></p>

And here’s the end result. Reactive, reusable form validation at a minimal page-load cost.

If you want to use this in your own application, you can copy the form function to reuse all the logic we’ve written. All you’d need to do is configure your HTML attributes and you’d be ready to go.


The post Lightweight Form Validation with Alpine.js and Iodine.js appeared first on CSS-Tricks.


Using JavaScript to Adjust Saturation and Brightness of RGB Colors

Lately I’ve been taking a look into designing with color (or “colour” as we spell it where I’m from in New Zealand). Looking at Adam Wathan and Steve Schoger’s advice on the subject, we find that we’re going to need more than just five nice-looking hex codes from a color palette generator when building an application. We’re going to need a lot of grays and a few primary colors. From these primary colors we’ll want a variety of levels of brightness and saturation.

I’ve mainly been using hex codes or RGB colors when developing applications and I’ve found I get slowed down by trying to work out different levels of lightness and saturation from a single hue.  So, to save you from getting RSI by carefully moving the color picker in VS Code, or continually opening hexcolortool, let’s look at some code to help you manipulate those colors.

HSL values

An effective way to write web colors is to use HSL values, especially if you plan to alter the colors manually. HSL stands for hue, saturation, lightness. Using HSL, you can declare your hue as a number from 0 to 360. Then you can note down a percentage for saturation and lightness respectively. For instance:

div {
  background-color: hsl(155, 30%, 80%);
}

This will give you a light, muted, mint green color. What if we needed to throw some dark text over this div? We could use a color close to black, but consistent with the background. For example, we can grab the same HSL values and pull the lightness down to 5%: 

div {
  background-color: hsl(155, 30%, 80%);
  color: hsl(155, 30%, 5%);
}

Nice. Now we have text that is very close to black, but looks a bit more natural, and is tied to its background. But what if this wasn’t a paragraph of text, but a call-to-action button instead? We can draw some more attention by ramping up the saturation and lowering the lightness a little on the background:

.call-to-action {
  background-color: hsl(155, 80%, 60%);
  color: hsl(155, 30%, 5%);
}

Or, what if there was some text that wasn’t as important? We could turn the lightness on the text back up and lower the saturation. This takes away some of the contrast and allows this less important text to fade into the background a bit more. That said, we need to be careful to keep a high enough contrast for accessibility and readability, so let’s lighten the background again:

div {
  background-color: hsl(155, 30%, 80%);
  color: hsl(155, 30%, 5%);
}

.lessimportant {
  color: hsl(155, 15%, 40%);
}

HSL values are supported in all major browsers, and for hand-tuning colors they’re easier to work with than RGB because you declare the hue, saturation and lightness of a color directly.

But, what if you’ve already committed to using RGB values? Or you get an email from your boss asking “is this going to work on IE 8?”

Libraries

There are a lot of great color libraries out there that are able to convert HSL values back into hex codes or RGB colors. Most of them also have a variety of manipulation functions to help build a color scheme.

Here is a list of some libraries I know:

  • If converting between formats is a problem, try colvertize by Philipp Mildenberger. It’s a lightweight library providing a lot of conversion methods and a few manipulation methods.
  • Then we have color, maintained by Josh Junon. This allows you to declare, process and extract colors using a fluent interface. It provides a variety of conversions and manipulation methods.
  • Another one is TinyColor by Brian Grinstead over at Mozilla, which can handle a whole lot of input types as well as utility functions. It also provides a few functions to help generate color schemes.

Also here is a great CSS-Tricks article on converting color formats.
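
To give a rough idea of what working with one of these libraries feels like, here’s a sketch using TinyColor (assuming the tinycolor2 package, or a script tag exposing the global tinycolor, is loaded; check the docs of the version you install for the exact method names):

const base = tinycolor("rgb(205, 228, 219)");

const lighter = base.clone().lighten(10).toRgbString();   // mix some white in
const darker = base.clone().darken(10).toRgbString();     // mix some black in
const punchier = base.clone().saturate(20).toHexString(); // push toward the pure hue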

Colour Grid Tool

Another option is you can try out a color tool I built called Colour Grid. To quote Refactoring UI, “As tempting as it is, you can’t rely purely on math to craft the perfect color palette.”

Naturally, after reading this, I built a React app to mathematically craft a color palette. Okay, it won’t solve all your problems, but it might start you off with some options. The app will create 100 different levels of saturation and lightness based on the hue you select. You can either click a grid item to copy the hex code, or copy a color as a CSS custom property from a text area at the end. This could be something to try if you need a quick way to get variations from one or two hues.

Here are some techniques I learned for processing RGB colors, in case you’re working with RGB values and need a way to transform them.

How to find the lightness of an RGB color

Disclaimer: This technique does not account for the intrinsic value of a hue. The intrinsic value of a hue is its inherent brightness before you’ve started adding any black or white. It’s illustrated by the fact that pure yellow looks a lot brighter to us than pure purple.

This technique produces the level of lightness based on a programmatic measure of how much white or black is mixed in. The perceived brightness is affected by more than this measure so remember to also use your eyes to judge the level of light you need.

The level of lightness of an RGB color can be worked out by finding the average of the highest and lowest of the RGB values, then dividing this by 255 (the middle color does not affect the lightness).

This will give you a decimal between zero and one representing the lightness. Here is a JavaScript function for this:

function getLightnessOfRGB(rgbString) {
  // First convert to an array of integers by removing the whitespace,
  // taking everything between 'rgb(' and the closing ')', then splitting by ','
  const rgbIntArray = rgbString.replace(/ /g, '').slice(4, -1).split(',').map(e => parseInt(e));

  // Get the highest and lowest out of red, green and blue
  const highest = Math.max(...rgbIntArray);
  const lowest = Math.min(...rgbIntArray);

  // Return the average of the highest and lowest, divided by 255
  return (highest + lowest) / 2 / 255;
}

Here’s a CodePen using this function:

How to saturate an RGB color without changing lightness or hue

What can we do with our newfound ability to find the lightness of an RGB? It can help us saturate an RGB color without changing the lightness.

Saturating an RGB comes with a few problems, though:

  • There is no information in the RGB format of a gray color to tell us what the saturated version will look like because gray doesn’t have a hue. So if we’re going to write a function to saturate a color, we need to deal with this case.
  • We can’t actually get to a pure hue unless the color is 50% lightness — anything else will be diluted by either black or white. So we have a choice of whether to keep the same lightness as we saturate the color, or move the color towards 50% lightness to get the most vibrant version. For this example, we’ll keep the same level of lightness.

Let’s start with the color rgb(205, 228, 219) — a light, muted cyan. To saturate a color we need to increase the difference between the lowest and highest RGB value. This will move it toward a pure hue.

If we want to keep the lightness the same, we’re going to need to increase the highest value and decrease the lowest value by an equal amount. But because the RGB values need to be clamped between 0 and 255, our saturation options will be limited when the color is lighter or darker. This means there is a range of saturation we have available for any given lightness.

Let’s grab the saturation range available for our color. We can work it out by finding the lowest of these two:

  • The difference between the RGB values of a gray with the same lightness as our color, and 255
  • The difference between the RGB values of a gray with the same lightness as our color, and 0 (which is just the gray value itself)

To get a fully gray version of a color, we can grab the end result of the getLightnessOfRGB function from the previous section and multiply it by 255. Then use this number for all three of our RGB values to get a gray that’s the same lightness as our original color. 

Let’s do this now:

// Using the previous "getLightnessOfRGB" function
const grayVal = Math.round(getLightnessOfRGB('rgb(205, 228, 219)')*255); // 217
// So a gray version of our color would look like rgb(217,217,217);
// Now let's get the saturation range available:
const saturationRange =  Math.round(Math.min(255-grayVal,grayVal)); // 38

Let’s say we want to saturate the color by 50%. To do this, we want to increase the highest RGB value and decrease the lowest by 50% of the saturation range. However, this may put us over 255 or under zero, so we need to clamp the change by the minimum of these two values:

  • The difference between the highest RGB value and 255
  • The difference between the lowest RGB value and 0 (which is the value itself)

// Get the maximum change by getting the minimum out of: 
// (255 - the highest value) OR (the lowest value)
const maxChange = Math.min(255-228, 205); // 27


// Now we will be changing our values by the lowest out of:
// (the saturation range * the increase fraction) OR (the maximum change)
const changeAmount = Math.min(saturationRange * 0.5, maxChange); // 19

This means we need to add 19 to the highest RGB value (green) and subtract 19 from the lowest RGB value:

const redResult = 205 - 19; // 186
const greenResult= 228 + 19; // 247

What about the third value?

This is where things get a bit more complicated. The middle value’s distance from gray can be worked out using the ratio between it and the distance from gray of either of the other two values.

As we move the highest and lowest values further away from gray, the middle value increases/decreases in proportion with them. 

Now let’s get the difference between the highest value and full gray. Then the difference between the middle value and the full gray. Then we’ll get the ratio between these. I’m going to also remove the rounding from working out the gray value to make this more exact:

const grayVal = getLightnessOfRGB('rgb(205, 228, 219)')*255;
const highDiff = grayVal - 228; // -11.5 subtracting green - the highest value
const midDiff = grayVal - 219; // -2.5 subtracting blue - the middle value
const middleValueRatio = midDiff / highDiff; // 0.21739130434782608

Then what we need to do is get the difference between our new RGB green value (after we added 19 to it) and the gray value, then multiply this by our ratio. We add this back on to the gray value and that’s our answer for our newly saturated blue:

// 247 is the green value after we applied the saturation transformation
const newBlue = Math.round(grayVal+(247-grayVal)*middleValueRatio); // 223

So after we’ve applied our transformations, we get an RGB color of rgb(186, 247, 223) — a more vibrant version of the color we started with, but it’s kept its lightness and hue.

Here are a couple of JavaScript functions that work together to saturate a color by 10%. The second function here returns an array of objects representing the RGB values in order of size. This second function is used in all of the rest of the functions in this article.

If you give it a gray, it will just return the same color:

function saturateByTenth(rgb) {
  const rgbIntArray = (rgb.replace(/ /g, '').slice(4, -1).split(',').map(e => parseInt(e)));
  const grayVal = getLightnessOfRGB(rgb)*255;
  const [lowest,middle,highest] = getLowestMiddleHighest(rgbIntArray);


  if(lowest.val===highest.val){return rgb;}
  
  const saturationRange =  Math.round(Math.min(255-grayVal,grayVal));
  const maxChange = Math.min((255-highest.val),lowest.val);
  const changeAmount = Math.min(saturationRange/10, maxChange);
  const middleValueRatio =(grayVal-middle.val)/(grayVal-highest.val);
  
  const returnArray=[];
  returnArray[highest.index]= Math.round(highest.val+changeAmount);
  returnArray[lowest.index]= Math.round(lowest.val-changeAmount);
  returnArray[middle.index]= Math.round(grayVal+(returnArray[highest.index]-grayVal)*middleValueRatio);
  return `rgb(${returnArray.join()})`;
}


function getLowestMiddleHighest(rgbIntArray) {
  let highest = {val:-1,index:-1};
  let lowest = {val:Infinity,index:-1};


  rgbIntArray.map((val,index)=>{
    if(val>highest.val){
      highest = {val:val,index:index};
    }
    if(val<lowest.val){
      lowest = {val:val,index:index};
    }
  });


  if(lowest.index===highest.index){
    lowest.index=highest.index+1;
  }
  
  let middle = {index: (3 - highest.index - lowest.index)};
  middle.val = rgbIntArray[middle.index];
  return [lowest,middle,highest];
}

How to desaturate an RGB color

If we completely desaturate a color, we’ll end up with a shade of gray. RGB grays will always have three equal RGB values, so we could just use the grayVal from the previous function to make a gray color with the same lightness as any given color.
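
As a quick sketch (reusing getLightnessOfRGB from earlier; the function name here is just illustrative):

function toGray(rgb) {
  // A fully desaturated color is the gray with the same lightness,
  // so use that lightness value for all three channels
  const grayVal = Math.round(getLightnessOfRGB(rgb) * 255);
  return `rgb(${grayVal}, ${grayVal}, ${grayVal})`;
}

// toGray('rgb(173, 31, 104)') returns 'rgb(102, 102, 102)'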

What if we don’t want to go straight to gray, and only want to slightly desaturate a color? We can do this by reversing the previous example.

Let’s look at another example. If we start with rgb(173, 31, 104), we have a saturated rouge. Let’s grab the decimal measure of lightness and multiply it by 255 to get the gray version:

const grayVal = Math.round(getLightnessOfRGB('rgb(173, 31, 104)') * 255); // 102

This means that if we fully desaturate this color to gray we’re going to end up with rgb(102, 102, 102). Let’s desaturate it by 30%.

First, we need to find the saturation range of the color again:

const saturationRange = Math.round(Math.min(255-grayVal,grayVal)); // 102

To desaturate our color by 30%, we want to move the highest and lowest color by 30% of this range toward full gray. But we also need to clamp the change amount by the distance between either of these colors (the distance will be the same for the highest and lowest), and full gray.

// Get the maximum change by getting the difference between the lowest (green) and the gray value
const maxChange = grayVal-31; // 71
// Now grab the value that represents 30% of our saturation range
const changeAmount = Math.min(saturationRange * 0.3, maxChange) // 30.59999

And add this change amount to the lowest RGB value and subtract it from the highest value: 

const newGreen = Math.round(31 + changeAmount); // 62
const newRed = Math.round(173 - changeAmount); // 142

Then use the same ratio technique as the last function to find the value for the third color:

const highDiff = grayVal - 173; // -71 subtracting red - the highest value
const midDiff = grayVal - 104; // -2 subtracting blue - the middle value
const middleValueRatio = midDiff / highDiff; // 0.02816901408
const newBlue = Math.round(grayVal + (142.4 - grayVal) * middleValueRatio); // 103

So that means the RGB representation of our rouge desaturated by 30% would be rgb(142, 62, 103). The hue and the lightness are exactly the same, but it’s a bit less vibrant.

Here’s a JavaScript function that will desaturate a color by 10%. It’s basically a reverse of the previous function.

function desaturateByTenth(rgb) {
  const rgbIntArray = (rgb.replace(/ /g, '').slice(4, -1).split(',').map(e => parseInt(e)));
  //grab the values in order of magnitude 
  //this uses the getLowestMiddleHighest function from the saturate section
  const [lowest,middle,highest] = getLowestMiddleHighest(rgbIntArray);
  const grayVal = getLightnessOfRGB(rgb) * 255;


  if(lowest.val===highest.val){return rgb;}
  
  const saturationRange =  Math.round(Math.min(255-grayVal,grayVal));
  const maxChange = grayVal-lowest.val;
  const changeAmount = Math.min(saturationRange/10, maxChange);
                               
  const middleValueRatio =(grayVal-middle.val)/(grayVal-highest.val);
  
  const returnArray=[];
  returnArray[highest.index]= Math.round(highest.val-changeAmount);
  returnArray[lowest.index]= Math.round(lowest.val+changeAmount);
  returnArray[middle.index]= Math.round(grayVal+(returnArray[highest.index]-grayVal)*middleValueRatio);
  return `rgb(${returnArray.join()})`;
}



Here’s a CodePen to experiment with the effect of these saturation functions:

How to lighten an RGB color keeping the hue the same

To lighten an RGB value and keep the hue the same, we need to increase each RGB value by the same proportion of difference between the value and 255. Let’s say we have this color: rgb(0, 153, 255). That’s a fully saturated blue/cyan. Let’s look at the difference between each RGB value and 255: 

  • Red is zero, so the difference is 255. 
  • Green is 153, so the difference is 102. 
  • Blue is 255, so the difference is zero. 

Now when we lighten the color, we need to increase each RGB value by the same fraction of our differences. One thing to note is that we are essentially mixing white into our color. This means that the color will slowly lose its saturation as it lightens.

Let’s increase the lightness on this color by a tenth. We’ll start with our lowest RGB value, red. We add on a tenth of 255 to this value. We also need to use Math.min to make sure that the value doesn’t increase over 255:

const red = 0;
const newRed = Math.round( red + Math.min( 255-red, 25.5 )); // 26

Now the other two RGB values need to increase by the same fraction of distance to 255.

To work this out, we get the difference between the lowest RGB value (before we increased it) and 255. Red was zero so our difference is 255. Then we get the amount the lowest RGB value increased in our transformation. Red increased from zero to 26, so our increase is 26.

Dividing the increase by the difference between the original color and 255 gives us a fraction we can use to work out the other values.

const redDiff = 255 - red; // 255
const redIncrease = newRed - red; // 26
const increaseFraction = redIncrease / redDiff; // 0.10196

Now we multiply the difference between the other RGB values and 255 by this fraction. This gives us the amount we need to add to each value.

const newGreen = Math.round(153 + (255 - 153) * increaseFraction); // 163
const newBlue = Math.round(255 + (255 - 255) * increaseFraction); // 255

This means the color we end up with is rgb(26, 163, 255). That’s still the same hue, but a touch lighter.

Here’s a function that does this: 

function lightenByTenth(rgb) {

  const rgbIntArray = rgb.replace(/ /g, '').slice(4, -1).split(',').map(e => parseInt(e));
  // Grab the values in order of magnitude 
  // This uses the getLowestMiddleHighest function from the saturate section
  const [lowest,middle,highest]=getLowestMiddleHighest(rgbIntArray);
  
  if(lowest.val===255){
    return rgb;
  }
  
  const returnArray = [];

  // First work out increase on lower value
  returnArray[lowest.index]= Math.round(lowest.val+(Math.min(255-lowest.val,25.5)));

  // Then apply to the middle and higher values
  const increaseFraction = (returnArray[lowest.index] - lowest.val) / (255 - lowest.val);
  returnArray[middle.index] = Math.round(middle.val + (255 - middle.val) * increaseFraction);
  returnArray[highest.index] = Math.round(highest.val + (255 - highest.val) * increaseFraction);
  
  // Convert the array back into an rgb string
  return (`rgb(${returnArray.join()})`);
}

How to darken an RGB color keeping the hue the same

Darkening an RGB color is pretty similar. Instead of adding to the values to get 255, we’re subtracting from the values to get toward zero.

Also we start our transformation by reducing the highest value and getting the fraction of this decrease. We use this fraction to reduce the other two values by their distance to zero. This is a reversal of what we did lightening a color.

Darkening a color will also cause it to slowly lose its level of saturation.

function darkenByTenth(rgb) {
  
  // Our rgb to int array function again
  const rgbIntArray = rgb.replace(/ /g, '').slice(4, -1).split(',').map(e => parseInt(e));
  //grab the values in order of magnitude 
  //this uses the function from the saturate function
  const [lowest,middle,highest]=getLowestMiddleHighest(rgbIntArray);
  
  if(highest.val===0){
    return rgb;
  }

  const returnArray = [];

  returnArray[highest.index] = Math.round(highest.val - Math.min(highest.val, 25.5));
  const decreaseFraction = (highest.val - returnArray[highest.index]) / highest.val;
  returnArray[middle.index] = Math.round(middle.val - middle.val * decreaseFraction);
  returnArray[lowest.index] = Math.round(lowest.val - lowest.val * decreaseFraction);

  // Convert the array back into an rgb string
  return `rgb(${returnArray.join()})`;
}

Here’s a CodePen to experiment with the effect of the lightness functions:


If you ever do need to work with RGB colors, these functions will help get you started. You can also give the HSL format a try, as well as the color libraries to extend browser support, and the Colour Grid tool for conversions.


The post Using JavaScript to Adjust Saturation and Brightness of RGB Colors appeared first on CSS-Tricks.
