JavaScript’s Rust tool belt


#702 — August 29, 2024

Read on the Web

JavaScript Weekly

Rspack 1.0: The Rust-Powered JavaScript Bundler — Far from being ‘yet another bundler’ with its own approach and terminology to learn, Rspack prides itself on being webpack API and ecosystem compatible, while offering many times the performance. The team now considers it production ready and encourages you to try your webpack-based projects on it.

Rspack Contributors

💡 Rspack also has a family of ancillary tools worth checking out, such as Rsdoctor, a tool for analyzing and visualizing your build process (for both Rspack and webpack!)

Front-End System Design — Learn to create scalable, efficient user interfaces in this extensive video course by Evgennii Ray. Explore the box model, browser rendering, DOM manipulation, state management, performance and much more.

Frontend Masters sponsor

How to Create an NPM Package in 2024 — Sounds simple, but there are a lot of steps involved if you want to follow best practices, introduce useful tools, and get things just right. Matt Pocock walks through the process here, and there’s a 14-minute screencast too, if you’d prefer to watch along.

Matt Pocock

IN BRIEF:

🤖 v0 is an AI-powered tool from Vercel for, originally, generating shadcn/ui-powered React components based upon prompts you supply. Now, however, it has basic Vue.js support too.

Deno 1.46 has been released and promises to be the final 1.x release before the much-awaited Deno 2.0. Deno’s Node compatibility improves further (it now supports Playwright, among many other things), and the release ships with V8 12.9.

📊 IEEE has published its latest annual list of top programming languages. JavaScript takes third place, but TypeScript has leapt up several places to fourth.

RELEASES:

Prisma 5.19 – The popular ORM for Node.js and TypeScript adds ‘TypedSQL’, a way to write raw SQL queries in a type-safe way.

📈 billboard.js 3.13 – Popular D3 chart library adds area-step-range charts.

pnpm 9.9 – Fast, space efficient package manager.

React Email 3.0, Ember 5.11, Bun v1.1.26

📒 Articles & Tutorials

JS Dates are About to Be Fixed — Handling dates and times is famously a painful area for programmers, and JavaScript hasn’t done a lot to make it easier. Libraries like Moment.js help, but Iago looks at how the Temporal proposal and its features will begin to help a lot more over time.

Iago Lastra

Weekly Chats on the Art and Practice of Programming — Your home for weekly conversations with fascinating guests about how technology is made and where it’s headed.

The Stack Overflow Podcast sponsor

JavaScript Generators Explained — Jan was frustrated by the quality of documentation and articles explaining generators in JavaScript, and set out to explain things in a way that a more advanced developer could appreciate.

Jan Hesters

Implementing a React-a-Like from Scratch — While it’s unlikely you’ll actually want to do this, at least thinking about it can prove instructive as to what’s going on in React’s engine room.

Robby Pruzan

▶  How to Implement the 2048 Game in JavaScript — Ania is back with one of her usual easy to follow walkthroughs of implementing a complete game in JavaScript. This time it’s the 2048 sliding puzzle game. (Two weeks ago she did Tic-Tac-Toe as well.)

Ania Kubów

Learn Role-Based Access Control and Simplify Permissions Management — Enhance security and streamline access by managing user roles with Clerk Organizations.

Clerk sponsor

📄 The Only Widely Recognized JS Feature Ever Deprecated – Spoiler: It’s with. Trevor Lasn

📄 Generating Unique Random Numbers in JavaScript Using Sets Amejimaobari

📺 21 Talks from the Chain React 2024 Conference – A React Native event. YouTube

📄 Exposing Internal Methods on Vue Custom Elements Jaime Jones

📄 The Interface Segregation Principle in React Alex Kondov

🛠 Code & Tools

TypeScript 5.6 Release Candidate — As always, Daniel presents an epic roundup of what’s new. We’ll focus more on it next week though, as the final release is anticipated to land next Tuesday (September 3).

Daniel Rosenwasser (Microsoft)

Vuestic UI 1.10: A Vue.js 3.0 UI Framework — Features 60 customizable and responsive components and with the v1.10 release it’s gained a significant bundle size optimization, a custom compiler that improves build time performance, and other minor enhancements. GitHub repo.

Vuestic UI

✅ Bye Bye Bugs — Get 80% automated E2E test coverage for mobile and web apps in under 4 months with QA Wolf. With QA cycles complete in minutes (not days), bugs don’t stand a chance. Schedule a demo.

QA Wolf sponsor

Material UI v6: The Popular React UI Design/Component System — At ten years old, the popular design system has its latest major release. There’s a focus on improved theming, color scheme management, container queries, and React 19 support. There are revamped templates to be inspired by, too.

García, Bittu, Andai, et al.

npm-check-updates 17.0: Update package.json Dependencies to Latest Versions — That is, as opposed to the specified versions. It includes a handy -i interactive mode so you can look at potential upgrades and then opt in to them one by one.

Raine Revere

Code Hike 1.0: Turn Markdown into Rich Interactive Experiences — Aimed at use cases like code walkthroughs and interactive docs, Code Hike bridges the gap between Markdown and React when creating technical content that takes full advantage of the modern web.

Rodrigo Pombo

Calendar.js: A Calendar Control with Drag and Drop — A responsive calendar with no dependencies, full drag and drop support (even between calendars), and many ways to manage events with recurring events, exporting, holidays, and more.

William Troup

📊 Perspective 3.0 – Data visualization and analytics component. The core is written in C++ and compiled to WebAssembly where it can be used from JavaScript. Their homepage shows it off well with a live example.

json-viewer 3.5 – Display JSON data in a readable, user-friendly way.

♟️ Stockfish.js 16.1 – A JavaScript chess engine.

jest-dom 6.5 – Jest matchers to test DOM state.

Marked 14.1 – Fast Markdown compiler / parser.

Javet 3.1.5 – Java + V8. Embed JS into Java.

Pixi.js 8.3.4 – Fast 2D on WebGL engine.

A regular expression refresher


#701 — August 22, 2024

Read on the Web

JavaScript Weekly

Regexes Got Good: The History (and Future) of Regular Expressions in JavaScript — Regular expression support was always a little underwhelming in JS, but things have improved. Steven takes us on a tour to refresh our knowledge, as well as show off his ‘regex’ library that boosts JS regexes to a true A++ rating. Steven was co-author of O’Reilly’s Regular Expressions Cookbook and High Performance JavaScript so knows his stuff.

Steven Levithan

WorkOS: The Modern Identity Platform for B2B SaaS — WorkOS is a modern identity platform for B2B SaaS, offering flexible and easy-to-use APIs to integrate SSO, SCIM, and RBAC in minutes instead of months. It’s trusted by hundreds of high-growth startups such as Perplexity, Vercel, Drata, and Webflow.

WorkOS sponsor

Node v22.7.0 (Current) Released — Node 22.6 let you strip types from source code, but now with --experimental-transform-types you can transform TypeScript-only syntax into JavaScript before running it too. Module syntax detection is now also enabled by default.

Rafael Gonzaga

Bun v1.1.25: Now Running at 1.29 Million Requests per Second — I’m having a little fun with the title, but the latest version of the JavaScriptCore-based JS runtime has added node:cluster support and uses this to demo a high level of HTTP throughput on a ‘Hello World’ example. Support for V8’s C++ API has also landed – notable because Bun isn’t V8-based.

Ashcon Partovi

IN BRIEF:

We’ve mentioned ECMAScript 2024 a bit recently, but Pawel Grzybek has a neat and tidy overview of what’s new in the ES2024 spec.

🐝 Could Wasp be ‘the JavaScript answer to Django’ for full-stack webdev? The Wasp team certainly thinks so.

🎙️ Ryan Dahl, creator of both Node.js and Deno, went on the Stack Overflow podcast to talk about Deno’s current limitations and what’s coming in Deno 2.0.

RELEASES:

PlayCanvas Engine 2.0 – A powerful JS-based Web graphics platform.

Node v20.17.0 (LTS) – The LTS release of Node adds support for require-ing synchronous ESM graphs.

Astro 4.14 – The popular agnostic content site framework now includes an experimental API for managing site content.

pnpm 9.8, Vuetify 3.7, Neo.mjs 7.0

Join Us for ViteConf on October 3rd — Learn how the best teams are building the next generation of the web with Vite!

StackBlitz sponsor

📒 Articles & Tutorials

50 TypeScript F–k Ups — An admittedly colorfully-titled book digging into lots of subtle mistakes you might run into with TypeScript. It’s available on Leanpub in PDF, iPad, and Kindle formats, or you can read it all directly on its GitHub repo. At least worth a skim in case you’re running into any of its points.

Azat Mardan

The Official Redux Essentials Tutorial, Redux — The long-standing guide to using the popular Redux state container the right way has undergone a big reworking, with TypeScript used throughout, new concepts added, and more coverage of Redux Toolkit (RTK) features.

Redux Team

React is (Becoming) a Full-Stack Framework — Is React merely a frontend library? How does the backend fit in? The author shares his thoughts on what led him to start considering React as more of a full-stack solution.

Robin Wieruch

📄 Using JavaScript Generators to Visualize Algorithms Alexander G. Covic

📄 Optimizing SPA Load Times with Async Chunks Preloading Matteo Mazzarolo

📄 Using isolatedModules in Angular 18.2 Thompson and Lyding (Angular Team)

📄 How to Generate a PDF in a JavaScript App Colby Fayock

🛠 Code & Tools

Milkdown: Plugin-Driven WYSIWYG Markdown Editor Framework — A lightweight WYSIWYG Markdown editor based around a plugin system that enables a significant level of customization. It’s neat to see the docs are rendered by the editor itself. GitHub repo.

Mirone

Fuite 5.0: A Tool for Finding Memory Leaks in Web Apps — A CLI tool that you can point at a URL to analyze for memory leaks. Here’s how it works. There’s also a video tutorial.

Nolan Lawson

✂️ Cut Your QA Cycles Down to Minutes with Automated Testing — Are slow test cycles limiting your dev teams’ release velocity? QA Wolf provides high-volume, high-speed test coverage for web and mobile apps — reducing your test cycles to minutes. Learn more.

QA Wolf sponsor

LogTape: Simple Logging Library with Zero Dependencies — I’m digging this new style of library that promises support across all the main runtimes (Node, Deno, Bun) as well as edge functions and the browser devtools.

Hong Minhee

📊 Chart.js 4.4: Canvas-Based Charts for the Web — One of those libraries that feels like it’s been around forever but still looks fresh and gets good updates. Bar, line, area, bubble, pie, donut, scatter, and radar charts are all a piece of cake to render. Samples and GitHub repo.

Chart.js Contributors

Legend State: A Tiny, Fast and Modern React State System — A year ago, Jack Herrington wondered if Legend State could be ▶️ ‘the ultimate state manager’ and things have progressed a lot since, with it now boasting being the fastest React state library in town.

Jay Meistrich

Tagger: Zero Dependency, Vanilla JavaScript Tagging Library — You can play with a live demo here.

Jakub T. Jankiewicz

tinykeys 3.0: A Keybindings Library in ~650 Bytes — Keeps things as simple and sweet as possible.

Jamie Kyle

heic-to: Convert HEIC/HEIF Images to JPEG or PNG in the Browser

Hopper Gee

Cheerio 1.0 – HTML/XML manipulation library for Node.

🎨 Chroma.js 3.0 – JavaScript color manipulation library.

eta (η) 3.5 – Embedded JS template engine for Node, Deno, and browsers.

Embla Carousel 8.2 – Carousel library with fluid motion and good swipe precision.

d3-graphviz 5.6 – Graphviz DOT rendering and animated transitions.

Alpine AJAX 0.9 – Alpine.js plugin for building server-powered frontends.

Happy DOM 15.0 – JS implementation of a web browser sans UI.

Elliptic 6.5.7 – Elliptic curve cryptography in plain JS.

Poku 2.5 – Cross-platform JavaScript test runner.

💚 Use Node? Check out the latest issue of Node Weekly, our sibling email about all things relating to Node.js — from tutorials and screencasts to news and releases. We do include some Node related items here in JavaScript Weekly, but we save most of it for there.

→ Check out Node Weekly

Sticky Headers And Full-Height Elements: A Tricky Combination


I was recently asked by a student to help with a seemingly simple problem. She’d been working on a website for a coffee shop that sports a sticky header, and she wanted the hero section right underneath that header to span the rest of the available vertical space in the viewport.

Here’s a visual demo of the desired effect for clarity.

Looks like it should be easy enough, right? I was sure (read: overconfident) that the problem would only take a couple of minutes to solve, only to find it was a much deeper well than I’d assumed.

Before we dive in, let’s take a quick look at the initial markup and CSS to see what we’re working with:

<body>
<header class="header">Header Content</header>
<section class="hero">Hero Content</section>
<main class="main">Main Content</main>
</body>

.header {
position: sticky;
top: 0; /* Offset, otherwise it won’t stick! */
}

/* etc. */

With those declarations, the .header will stick to the top of the page. And yet the .hero element below it remains intrinsically sized. This is what we want to change.

The Low-Hanging Fruit

The first impulse you might have, as I did, is to enclose the header and hero in some sort of parent container and give that container 100vh to make it span the viewport. After that, we could use Flexbox to distribute the children and make the hero grow to fill the remaining space.

<body>
<div class="container">
<header class="header">Header Content</header>
<section class="hero">Hero Content</section>
</div>
<main class="main">Main Content</main>
</body>

.container {
height: 100vh;
display: flex;
flex-direction: column;
}

.hero {
flex-grow: 1;
}

/* etc. */

This looks correct at first glance, but watch what happens when scrolling past the hero.

See the Pen Attempt #1: Container + Flexbox [forked] by Philip.

The sticky header gets trapped in its parent container! But… why?

If you’re anything like me, this behavior is unintuitive, at least initially. You may have heard that sticky is a combination of relative and fixed positioning, meaning it participates in the normal flow of the document but only until it hits the edges of its scrolling container, at which point it becomes fixed. While viewing sticky as a combination of other values can be a useful mnemonic, it fails to capture one important difference between sticky and fixed elements:

A position: fixed element doesn’t care about the parent it’s nested in or any of its ancestors. It will break out of the normal flow of the document and place itself directly offset from the viewport, as though glued in place a certain distance from the edge of the screen.

Conversely, a position: sticky element will be pushed along with the edges of the viewport (or next closest scrolling container), but it will never escape the boundaries of its direct parent. Well, at least if you don’t count visually transform-ing it. So a better way to think about it might be, to steal from Chris Coyier, that “position: sticky is, in a sense, a locally scoped position: fixed.” This is an intentional design decision, one that allows for section-specific sticky headers like the ones made famous by alphabetical lists in mobile interfaces.

See the Pen Sticky Section Headers [forked] by Philip.

Okay, so this approach is a no-go for our predicament. We need to find a solution that doesn’t involve a container around the header.

Fixed, But Not Solved

Maybe we can make our lives a bit simpler. Instead of a container, what if we gave the .header element a fixed height of, say, 150px? Then, all we have to do is define the .hero element’s height as height: calc(100vh - 150px).

See the Pen Attempt #2: Fixed Height + Calc() [forked] by Philip.

This approach kinda works, but the downsides are more insidious than our last attempt because they may not be immediately apparent. You probably noticed that the header is too tall, and we’d wanna do some math to decide on a better height.

Thinking ahead a bit,

What if the .header’s children need to wrap or rearrange themselves at different screen sizes or grow to maintain legibility on mobile?
What if JavaScript is manipulating the contents?

All of these things could subtly change the .header’s ideal size, and chasing the right height values for each scenario has the potential to spiral into a maintenance nightmare of unmanageable breakpoints and magic numbers — especially if we consider this needs to be done not only for the .header but also the .hero element that depends on it.

I would argue that this workaround also just feels wrong. Fixed heights break one of the main affordances of CSS layout — the way elements automatically grow and shrink to adapt to their contents — and not relying on this usually makes our lives harder, not simpler.

So, we’re left with…

A Novel Approach

Now that we’ve figured out the constraints we’re working with, another way to phrase the problem is that we want the .header and .hero to collectively span 100vh without sizing the elements explicitly or wrapping them in a container. Ideally, we’d find something that already is 100vh and align them to that. This is where it dawned on me that display: grid may provide just what we need!

Let’s try this: We declare display: grid on the body element and add another element before the .header that we’ll call .above-the-fold-spacer. This new element gets a height of 100vh and spans the grid’s entire width. Next, we’ll tell our spacer that it should take up two grid rows and we’ll anchor it to the top of the page.

This element must be entirely empty because we don’t ever want it to be visible or to register to screen readers. We’re merely using it as a crutch to tell the grid how to behave.

<body>
<!-- This spacer provides the height we want -->
<div class="above-the-fold-spacer"></div>

<!-- These two elements will place themselves on top of the spacer -->
<header class="header">Header Content</header>
<section class="hero">Hero Content</section>

<!-- The rest of the page stays unaffected -->
<main class="main">Main Content</main>
</body>

body {
display: grid;
}

.above-the-fold-spacer {
height: 100vh;
/* Span from the first to the last grid column line */
/* (Negative numbers count from the end of the grid) */
grid-column: 1 / -1;
/* Start at the first grid row line, and take up 2 rows */
grid-row: 1 / span 2;
}

/* etc. */

This is the magic ingredient.

By adding the spacer, we’ve created two grid rows that together take up exactly 100vh. Now, all that’s left to do, in essence, is to tell the .header and .hero elements to align themselves to those existing rows. We do have to tell them to start at the same grid column line as the .above-the-fold-spacer element so that they won’t try to sit next to it. But with that done… ta-da!

See the Pen The Solution: Grid Alignment [forked] by Philip.

The reason this works is that a grid container can have multiple children occupying the same cell overlaid on top of each other. In a situation like that, the tallest child element defines the grid row’s overall height — or, in this case, the combined height of the two rows (100vh).

To control how exactly the two visible elements divvy up the available space between themselves, we can use the grid-template-rows property. I made it so that the first row uses min-content rather than 1fr. This is necessary so that the .header doesn’t take up the same amount of space as the .hero but instead only takes what it needs and lets the hero have the rest.

Here’s our full solution:

body {
display: grid;
grid-template-rows: min-content 1fr;
}

.above-the-fold-spacer {
height: 100vh;
grid-column: 1 / -1;
grid-row: 1 / span 2;
}

.header {
position: sticky;
top: 0;
grid-column-start: 1;
grid-row-start: 1;
}

.hero {
grid-column-start: 1;
grid-row-start: 2;
}

And voila: A sticky header of arbitrary size above a hero that grows to fill the remaining visible space!

Caveats and Final Thoughts

It’s worth noting that the HTML order of the elements matters here. If we define .above-the-fold-spacer after our .hero section, it will overlay and block access to the elements underneath. We can work around this by declaring either order: -1, z-index: -1, or visibility: hidden.

Keep in mind that this is a simple example. If you were to add a sidebar to the left of your page, for example, you’d need to adjust at which column the elements start. Still, in the majority of cases, using a CSS Grid approach is likely to be less troublesome than the Sisyphean task of manually managing and coordinating the height values of multiple elements.

Another upside of this approach is that it’s adaptable. If you decide you want a group of three elements to take up the screen’s height rather than two, then you’d make the invisible spacer span three rows and assign the visible elements to the appropriate one. Even if the hero element’s content causes its height to exceed 100vh, the grid adapts without breaking anything. It’s even well-supported in all modern browsers.

The more I think about this technique, the more I’m persuaded that it’s actually quite clean. Then again, you know how lawyers can talk themselves into their own arguments? If you can think of an even simpler solution I’ve overlooked, feel free to reach out and let me know!

Generating Unique Random Numbers In JavaScript Using Sets


JavaScript comes with a lot of built-in functions that allow you to carry out so many different operations. One of these built-in functions is the Math.random() method, which generates a random floating-point number that can then be manipulated into integers.

However, if you wish to generate a series of unique random numbers and create more random effects in your code, you will need to come up with a custom solution for yourself because the Math.random() method on its own cannot do that for you.

In this article, we’re going to be learning how to circumvent this issue and generate a series of unique random numbers using the Set object in JavaScript, which we can then use to create more randomized effects in our code.

Note: This article assumes that you know how to generate random numbers in JavaScript, as well as how to work with sets and arrays.

Generating a Unique Series of Random Numbers

One of the ways to generate a unique series of random numbers in JavaScript is by using Set objects. The reason why we’re making use of sets is because the elements of a set are unique. We can iteratively generate and insert random integers into sets until we get the number of integers we want.

And since sets do not allow duplicate elements, they are going to serve as a filter to remove all of the duplicate numbers that are generated and inserted into them so that we get a set of unique integers.
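
As a quick illustration of that filtering behavior, adding a value that is already in a Set is silently ignored:

const uniqueNumbers = new Set();
uniqueNumbers.add(7);
uniqueNumbers.add(7); // duplicate, ignored
console.log(uniqueNumbers.size); // → 1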

Here’s how we are going to approach the work:

  1. Create a Set object.
  2. Define how many random numbers to produce and what range of numbers to use.
  3. Generate each random number and immediately insert the numbers into the Set until the Set is filled with a certain number of them.

The following is a quick example of how the code comes together:

function generateRandomNumbers(count, min, max) {
  // 1: Create a Set object
  let uniqueNumbers = new Set();
  while (uniqueNumbers.size < count) {
    // 3: Generate each random number and immediately insert it into the Set
    //    until the Set holds the desired count of unique numbers
    uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
  }
  // Return the unique numbers as an array, which is easier to work with
  return Array.from(uniqueNumbers);
}
// 2: Set how many numbers to generate and from what range
console.log(generateRandomNumbers(5, 5, 10));

What the code does is create a new Set object and then generate and add the random numbers to the set until our desired number of integers has been included in the set. The reason why we’re returning an array is because they are easier to work with.

One thing to note, however, is that the number of integers you want to generate (represented by count in the code) cannot exceed the number of integers available in your range (max - min + 1 in the code). Otherwise, the Set can never reach the requested size and the code will run forever. You can add an if statement to the code to ensure that this is always the case:

function generateRandomNumbers(count, min, max) {
  // if statement checks that count does not exceed the number of integers in the range
  if (count > max - min + 1) {
    return "count cannot be greater than the number of integers in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}
console.log(generateRandomNumbers(5, 5, 10));

Using the Series of Unique Random Numbers as Array Indexes

It is one thing to generate a series of random numbers. It’s another thing to use them.

Being able to use a series of random numbers with arrays unlocks so many possibilities: you can use them in shuffling playlists in a music app, randomly sampling data for analysis, or, as I did, shuffling the tiles in a memory game.

Let’s take the code from the last example and work off of it to return random letters of the alphabet. First, we’ll construct an array of letters:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// rest of code

Then we map the letters in the range of numbers:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

In the original code, the generateRandomNumbers() function is logged to the console. This time, we’ll construct a new variable that calls the function so it can be consumed by randomAlphabets:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);

Now we can log the output to the console like we did before to see the results:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];

// generateRandomNumbers()

const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

And, when we put the generateRandomNumbers() function definition back in, we get the final code:

const englishAlphabets = [
  'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
];
function generateRandomNumbers(count, min, max) {
  if (count > max - min + 1) {
    return "count cannot be greater than the number of integers in the range";
  } else {
    let uniqueNumbers = new Set();
    while (uniqueNumbers.size < count) {
      uniqueNumbers.add(Math.floor(Math.random() * (max - min + 1)) + min);
    }
    return Array.from(uniqueNumbers);
  }
}
const randomIndexes = generateRandomNumbers(5, 0, 25);
const randomAlphabets = randomIndexes.map((index) => englishAlphabets[index]);
console.log(randomAlphabets);

So, in this example, we created a new array of alphabets by randomly selecting some letters in our englishAlphabets array.

You can pass in a count argument of englishAlphabets.length to the generateRandomNumbers function if you desire to shuffle the elements in the englishAlphabets array instead. This is what I mean:

generateRandomNumbers(englishAlphabets.length, 0, 25);

Wrapping Up

In this article, we’ve discussed how to create randomization in JavaScript by covering how to generate a series of unique random numbers, how to use these random numbers as indexes for arrays, and also some practical applications of randomization.

The best way to learn anything in software development is by consuming content and reinforcing whatever knowledge you’ve gotten from that content by practicing. So, don’t stop here. Run the examples in this tutorial (if you haven’t done so), play around with them, come up with your own unique solutions, and also don’t forget to share your good work. Ciao!

Regexes Got Good: The History And Future Of Regular Expressions In JavaScript


Modern JavaScript regular expressions have come a long way compared to what you might be familiar with. Regexes can be an amazing tool for searching and replacing text, but they have a longstanding reputation (perhaps outdated, as I’ll show) for being difficult to write and understand.

This is especially true in JavaScript-land, where regexes languished for many years, underpowered compared to their more modern counterparts in PCRE, Perl, .NET, Java, Ruby, C++, and Python. Those days are over.

In this article, I’ll recount the history of improvements to JavaScript regexes (spoiler: ES2018 and ES2024 changed the game), show examples of modern regex features in action, introduce you to a lightweight JavaScript library that makes JavaScript stand alongside or surpass other modern regex flavors, and end with a preview of active proposals that will continue to improve regexes in future versions of JavaScript (with some of them already working in your browser today).

The History of Regular Expressions in JavaScript

ECMAScript 3, standardized in 1999, introduced Perl-inspired regular expressions to the JavaScript language. Although it got enough things right to make regexes pretty useful (and mostly compatible with other Perl-inspired flavors), there were some big omissions, even then. And while JavaScript waited 10 years for its next standardized version with ES5, other programming languages and regex implementations added useful new features that made their regexes more powerful and readable.

But that was then.

Did you know that nearly every new version of JavaScript has made at least minor improvements to regular expressions?

Let’s take a look at them.

Don’t worry if it’s hard to understand what some of the following features mean — we’ll look more closely at several of the key features afterward.

  • ES5 (2009) fixed unintuitive behavior by creating a new object every time regex literals are evaluated and allowed regex literals to use unescaped forward slashes within character classes (/[/]/).
  • ES6/ES2015 added two new regex flags: y (sticky), which made it easier to use regexes in parsers, and u (unicode), which added several significant Unicode-related improvements along with strict errors. It also added the RegExp.prototype.flags getter, support for subclassing RegExp, and the ability to copy a regex while changing its flags.
  • ES2018 was the edition that finally made JavaScript regexes pretty good. It added the s (dotAll) flag, lookbehind, named capture, and Unicode properties (via \p{...} and \P{...}, which require ES6’s flag u). All of these are extremely useful features, as we’ll see.
  • ES2020 added the string method matchAll, which we’ll also see more of shortly.
  • ES2022 added flag d (hasIndices), which provides start and end indices for matched substrings.
  • And finally, ES2024 added flag v (unicodeSets) as an upgrade to ES6’s flag u. The v flag adds a set of multicharacter “properties of strings” to \p{...}, multicharacter elements within character classes via \p{...} and \q{...}, nested character classes, set subtraction [A--B] and intersection [A&&B], and different escaping rules within character classes. It also fixed case-insensitive matching for Unicode properties within negated sets [^...].

As for whether you can safely use these features in your code today, the answer is yes! The latest of these features, flag v, is supported in Node.js 20 and 2023-era browsers. The rest are supported in 2021-era browsers or earlier.

Each edition from ES2019 to ES2023 also added additional Unicode properties that can be used via \p{...} and \P{...}. And to be a completionist, ES2021 added string method replaceAll — although, when given a regex, the only difference from ES3’s replace is that it throws if not using flag g.
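
To illustrate that last point, here’s a quick sketch of how replaceAll differs from replace:

// With a string pattern, replaceAll replaces every occurrence
'1-2-3'.replaceAll('-', ':'); // → '1:2:3'

// With a regex, replaceAll insists on flag g...
'1-2-3'.replaceAll(/-/, ':');  // → TypeError (replaceAll requires a global regex)

// ...at which point it behaves just like replace with flag g
'1-2-3'.replaceAll(/-/g, ':'); // → '1:2:3'
'1-2-3'.replace(/-/g, ':');    // → '1:2:3'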

Aside: What Makes a Regex Flavor Good?

With all of these changes, how do JavaScript regular expressions now stack up against other flavors? There are multiple ways to think about this, but here are a few key aspects:

  • Performance.
    This is an important aspect but probably not the main one since mature regex implementations are generally pretty fast. JavaScript is strong on regex performance (at least considering V8’s Irregexp engine, used by Node.js, Chromium-based browsers, and even Firefox; and JavaScriptCore, used by Safari), but it uses a backtracking engine that is missing any syntax for backtracking control — a major limitation that makes ReDoS vulnerability more common.
  • Support for advanced features that handle common or important use cases.
    Here, JavaScript stepped up its game with ES2018 and ES2024. JavaScript is now best in class for some features like lookbehind (with its infinite-length support) and Unicode properties (with multicharacter “properties of strings,” set subtraction and intersection, and script extensions). These features are either not supported or not as robust in many other flavors.
  • Ability to write readable and maintainable patterns.
    Here, native JavaScript has long been the worst of the major flavors since it lacks the x (“extended”) flag that allows insignificant whitespace and comments. Additionally, it lacks regex subroutines and subroutine definition groups (from PCRE and Perl), a powerful set of features that enable writing grammatical regexes that build up complex patterns via composition.

So, it’s a bit of a mixed bag.

JavaScript regexes have become exceptionally powerful, but they’re still missing key features that could make regexes safer, more readable, and more maintainable (all of which hold some people back from using this power).

The good news is that all of these holes can be filled by a JavaScript library, which we’ll see later in this article.

Using JavaScript’s Modern Regex Features

Let’s look at a few of the more useful modern regex features that you might be less familiar with. You should know in advance that this is a moderately advanced guide. If you’re relatively new to regex, here are some excellent tutorials you might want to start with:

Named Capture

Often, you want to do more than just check whether a regex matches — you want to extract substrings from the match and do something with them in your code. Named capturing groups allow you to do this in a way that makes your regexes and code more readable and self-documenting.

The following example matches a record with two date fields and captures the values:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = /^Admitted: (?<admitted>\d{4}-\d{2}-\d{2})\nReleased: (?<released>\d{4}-\d{2}-\d{2})$/;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

Don’t worry — although this regex might be challenging to understand, later, we’ll look at a way to make it much more readable. The key things here are that named capturing groups use the syntax (?<name>...), and their results are stored on the groups object of matches.

You can also use named backreferences to rematch whatever a named capturing group matched via \k<name>, and you can use the values within search and replace as follows:

// Change 'FirstName LastName' to 'LastName, FirstName'
const name = 'Shaquille Oatmeal';
name.replace(/(?<first>\w+) (?<last>\w+)/, '$<last>, $<first>');
// → 'Oatmeal, Shaquille'

For advanced regexers who want to use named backreferences within a replacement callback function, the groups object is provided as the last argument. Here’s a fancy example:

function fahrenheitToCelsius(str) {
  const re = /(?<degrees>-?\d+(\.\d+)?)F\b/g;
  return str.replace(re, (...args) => {
    const groups = args.at(-1);
    return Math.round((groups.degrees - 32) * 5/9) + 'C';
  });
}
fahrenheitToCelsius('98.6F');
// → '37C'
fahrenheitToCelsius('May 9 high is 40F and low is 21F');
// → 'May 9 high is 4C and low is -6C'

Lookbehind

Lookbehind (introduced in ES2018) is the complement to lookahead, which has always been supported by JavaScript regexes. Lookahead and lookbehind are assertions (similar to ^ for the start of a string or \b for word boundaries) that don’t consume any characters as part of the match. Lookbehinds succeed or fail based on whether their subpattern can be found immediately before the current match position.

For example, the following regex uses a lookbehind (?<=...) to match the word “cat” (only the word “cat”) if it’s preceded by “fat ”:

const re = /(?<=fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'cat, fat pigeon, brat cat'

You can also use negative lookbehind — written as (?<!...) — to invert the assertion. That would make the regex match any instance of “cat” that’s not preceded by “fat ”.

const re = /(?<!fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'pigeon, fat cat, brat pigeon'

JavaScript’s implementation of lookbehind is one of the very best (matched only by .NET). Whereas other regex flavors have inconsistent and complex rules for when and whether they allow variable-length patterns inside lookbehind, JavaScript allows you to look behind for any subpattern.
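
For example, this sketch places a quantifier inside the lookbehind (a variable-length subpattern), something many other flavors reject:

// Match 'cat' only when preceded by a word of four or more characters
const re = /(?<=\b\w{4,} )cat/g;
'a cat, that cat, the cat'.replace(re, 'pigeon');
// → 'a cat, that pigeon, the cat'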

The matchAll Method

JavaScript’s String.prototype.matchAll was added in ES2020 and makes it easier to operate on regex matches in a loop when you need extended match details. Although other solutions were possible before, matchAll is often easier, and it avoids gotchas, such as the need to guard against infinite loops when looping over the results of regexes that might return zero-length matches.

Since matchAll returns an iterator (rather than an array), it’s easy to use it in a for...of loop.

const str = 'abcd'; // an example input string
const re = /(?<char1>\w)(?<char2>\w)/g;
for (const match of str.matchAll(re)) {
  const {char1, char2} = match.groups;
  // Print each complete match and matched subpatterns
  console.log(`Matched "${match[0]}" with "${char1}" and "${char2}"`);
}

Note: matchAll requires its regexes to use flag g (global). Also, as with other iterators, you can get all of its results as an array using Array.from or array spreading.

const matches = [...str.matchAll(/./g)];

Unicode Properties

Unicode properties (added in ES2018) give you powerful control over multilingual text, using the syntax \p{...} and its negated version \P{...}. There are hundreds of different properties you can match, which cover a wide variety of Unicode categories, scripts, script extensions, and binary properties.

Note: For more details, check out the documentation on MDN.

Unicode properties require using the flag u (unicode) or v (unicodeSets).
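
For instance (both examples need flag u or v to work):

// Match runs of letters from any language
'Świnka 123 żółw'.match(/\p{Letter}+/gu);
// → ['Świnka', 'żółw']

// Match only Cyrillic characters
'Привет, world'.match(/\p{Script=Cyrillic}+/gu);
// → ['Привет']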

Flag v

Flag v (unicodeSets) was added in ES2024 and is an upgrade to flag u — you can’t use both at the same time. It’s a best practice to always use one of these flags to avoid silently introducing bugs via the default Unicode-unaware mode. The decision on which to use is fairly straightforward. If you’re okay with only supporting environments with flag v (Node.js 20 and 2023-era browsers), then use flag v; otherwise, use flag u.

Flag v adds support for several new regex features, with the coolest probably being set subtraction and intersection. This allows using A--B (within character classes) to match strings in A but not in B or using A&&B to match strings in both A and B. For example:

// Matches all Greek symbols except the letter 'π'
/[\p{Script_Extensions=Greek}--π]/v

// Matches only Greek letters
/[\p{Script_Extensions=Greek}&&\p{Letter}]/v

For more details about flag v, including its other new features, check out this explainer from the Google Chrome team.

A Word on Matching Emoji

Emoji are 🤩🔥😎👌, but how emoji get encoded in text is complicated. If you’re trying to match them with a regex, it’s important to be aware that a single emoji can be composed of one or many individual Unicode code points. Many people (and libraries!) who roll their own emoji regexes miss this point (or implement it poorly) and end up with bugs.

The following details for the emoji “👩🏻‍🏫” (Woman Teacher: Light Skin Tone) show just how complicated emoji can be:

// Code unit length
'👩🏻‍🏫'.length;
// → 7
// Each astral code point (above \uFFFF) is divided into high and low surrogates

// Code point length
[...'👩🏻‍🏫'].length;
// → 4
// These four code points are: \u{1F469} \u{1F3FB} \u{200D} \u{1F3EB}
// \u{1F469} combined with \u{1F3FB} is '👩🏻'
// \u{200D} is a Zero-Width Joiner
// \u{1F3EB} is '🏫'

// Grapheme cluster length (user-perceived characters)
[...new Intl.Segmenter().segment('👩🏻‍🏫')].length;
// → 1

Fortunately, JavaScript added an easy way to match any individual, complete emoji via \p{RGI_Emoji}. Since this is a fancy “property of strings” that can match more than one code point at a time, it requires ES2024’s flag v.
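
Here’s what that looks like for the teacher emoji from above (note flag v):

// The whole four-code-point emoji is matched as a single unit
'Teacher: 👩🏻‍🏫'.match(/\p{RGI_Emoji}/gv);
// → ['👩🏻‍🏫']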

If you want to match emojis in environments without v support, check out the excellent libraries emoji-regex and emoji-regex-xs.

Making Your Regexes More Readable, Maintainable, and Resilient

Despite the improvements to regex features over the years, native JavaScript regexes of sufficient complexity can still be outrageously hard to read and maintain.

Regular Expressions are SO EASY!!!! pic.twitter.com/q4GSpbJRbZ

— Garabato Kid (@garabatokid) July 5, 2019


ES2018’s named capture was a great addition that made regexes more self-documenting, and ES6’s String.raw tag allows you to avoid escaping all your backslashes when using the RegExp constructor. But for the most part, that’s it in terms of readability.
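
For instance, here’s the same date pattern written both ways with the RegExp constructor:

// Without String.raw, every backslash in the pattern has to be doubled
new RegExp('\\d{4}-\\d{2}-\\d{2}');

// With String.raw, the pattern reads like a regex literal
new RegExp(String.raw`\d{4}-\d{2}-\d{2}`);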

However, there’s a lightweight and high-performance JavaScript library named regex (by yours truly) that makes regexes dramatically more readable. It does this by adding key missing features from Perl-Compatible Regular Expressions (PCRE) and outputting native JavaScript regexes. You can also use it as a Babel plugin, which means that regex calls are transpiled at build time, so you get a better developer experience without users paying any runtime cost.

PCRE is a popular C library used by PHP for its regex support, and it’s available in countless other programming languages and tools.

Let’s briefly look at some of the ways the regex library, which provides a template tag named regex, can help you write complex regexes that are actually understandable and maintainable by mortals. Note that all of the new syntax described below works identically in PCRE.

Insignificant Whitespace and Comments

By default, regex allows you to freely add whitespace and line comments (starting with #) to your regexes for readability.

import {regex} from 'regex';
const date = regex`
  # Match a date in YYYY-MM-DD format
  (?<year>  \d{4}) - # Year part
  (?<month> \d{2}) - # Month part
  (?<day>   \d{2})   # Day part
`;

This is equivalent to using PCRE’s xx flag.

Subroutines and Subroutine Definition Groups

Subroutines are written as \g<name> (where name refers to a named group), and they treat the referenced group as an independent subpattern that they try to match at the current position. This enables subpattern composition and reuse, which improves readability and maintainability.

For example, the following regex matches an IPv4 address such as “192.168.12.123”:

import {regex} from 'regex';
const ipv4 = regex`\b
  (?<byte> 25[0-5] | 2[0-4]\d | 1\d\d | [1-9]?\d)
  # Match the remaining 3 dot-separated bytes
  (\. \g<byte>){3}
\b`;

You can take this even further by defining subpatterns for use by reference only via subroutine definition groups. Here’s an example that improves the regex for admittance records that we saw earlier in this article:

const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = regex`
  ^ Admitted:\ (?<admitted> \g<date>) \n
    Released:\ (?<released> \g<date>) $

  (?(DEFINE)
    (?<date>  \g<year>-\g<month>-\g<day>)
    (?<year>  \d{4})
    (?<month> \d{2})
    (?<day>   \d{2})
  )
`;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */

A Modern Regex Baseline

regex includes the v flag by default, so you never forget to turn it on. And in environments without native v, it automatically switches to flag u while applying v’s escaping rules, so your regexes are forward and backward-compatible.

It also implicitly enables the emulated flags x (insignificant whitespace and comments) and n (“named capture only” mode) by default, so you don’t have to continually opt into their superior modes. And since it’s a raw string template tag, you don’t have to escape your backslashes \\\\ like with the RegExp constructor.

Atomic Groups and Possessive Quantifiers Can Prevent Catastrophic Backtracking

Atomic groups and possessive quantifiers are another powerful set of features added by the regex library. Although they’re primarily about performance and resilience against catastrophic backtracking (also known as ReDoS or “regular expression denial of service,” a serious issue where certain regexes can take forever when searching particular, not-quite-matching strings), they can also help with readability by allowing you to write simpler patterns.
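
As a rough sketch of the idea (assuming the library’s PCRE-style ++ syntax for possessive quantifiers; check its docs for the exact supported forms):

import {regex} from 'regex';

// A classic ReDoS-prone pattern: nested quantifiers invite catastrophic backtracking
const risky = /^(\w+\s?)+$/;

// A possessive quantifier gives up saved backtracking positions once the group
// has matched, so a non-matching string such as 'aaaa aaaa!' fails quickly
const safer = regex`^ (\w+ \s?)++ $`;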

Note: You can learn more in the regex documentation.

What’s Next? Upcoming JavaScript Regex Improvements

There are a variety of active proposals for improving regexes in JavaScript. Below, we’ll look at the three that are well on their way to being included in future editions of the language.

Duplicate Named Capturing Groups

This is a Stage 3 (nearly finalized) proposal. Even better is that, as of recently, it works in all major browsers.

When named capturing was first introduced, it required that all (?<name>...) captures use unique names. However, there are cases when you have multiple alternate paths through a regex, and it would simplify your code to reuse the same group names in each alternative.

For example:

/(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/

This proposal enables exactly this, preventing a “duplicate capture group name” error with this example. Note that names must still be unique within each alternative path.
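
In an environment that already supports the proposal, the shared group name can be read the same way no matter which alternative matched. A quick sketch:

const re = /(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/;
'2025-12'.match(re).groups.year; // → '2025'
'12-2025'.match(re).groups.year; // → '2025'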

Pattern Modifiers (aka Flag Groups)

This is another Stage 3 proposal. It’s already supported in Chrome/Edge 125 and Opera 111, and it’s coming soon for Firefox. No word yet on Safari.

Pattern modifiers use (?ims:...), (?-ims:...), or (?im-s:...) to turn the flags i, m, and s on or off for only certain parts of a regex.

For example:

/hello-(?i:world)/
// Matches 'hello-WORLD' but not 'HELLO-WORLD'

Escape Regex Special Characters with RegExp.escape

This proposal recently reached Stage 3 and has been a long time coming. It isn’t yet supported in any major browsers. The proposal does what it says on the tin, providing the function RegExp.escape(str), which returns the string with all regex special characters escaped so you can match them literally.

If you need this functionality today, the most widely-used package (with more than 500 million monthly npm downloads) is escape-string-regexp, an ultra-lightweight, single-purpose utility that does minimal escaping. That’s great for most cases, but if you need assurance that your escaped string can safely be used at any arbitrary position within a regex, escape-string-regexp recommends the regex library that we’ve already looked at in this article. The regex library uses interpolation to escape embedded strings in a context-aware way.
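
If you’d rather not pull in a dependency at all, a minimal hand-rolled sketch (covering the usual special characters, not the full semantics of the proposal) looks something like this:

function escapeRegExp(str) {
  // Escape characters that have special meaning in regex syntax
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

new RegExp(escapeRegExp('price: $5.99 (USD)')).test('Sale price: $5.99 (USD)');
// → true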

Conclusion

So there you have it: the past, present, and future of JavaScript regular expressions.

If you want to journey even deeper into the lands of regex, check out Awesome Regex for a list of the best regex testers, tutorials, libraries, and other resources. And for a fun regex crossword puzzle, try your hand at regexle.

May your parsing be prosperous and your regexes be readable.

Uniting Web And Native Apps With 4 Unknown JavaScript APIs


A couple of years ago, four JavaScript APIs landed at the bottom of awareness in the State of JavaScript survey. I took an interest in those APIs because they have so much potential to be useful but don’t get the credit they deserve. Even after a quick search, I was amazed at how many new web APIs have been added to the web platform without getting their dues, suffering from a lack of awareness and patchy browser support.

That situation can be a “catch-22”:

An API is interesting but lacks awareness due to incomplete support, and there is no immediate need to support it due to low awareness.

Most of these APIs are designed to power progressive web apps (PWA) and close the gap between web and native apps. Bear in mind that creating a PWA involves more than just adding a manifest file. Sure, it’s a PWA by definition, but it functions like a bookmark on your home screen in practice. In reality, we need several APIs to achieve a fully native app experience on the web. And the four APIs I’d like to shed light on are part of that PWA puzzle that brings to the web what we once thought was only possible in native apps.

You can see all these APIs in action in this demo as we go along.

1. Screen Orientation API

The Screen Orientation API can be used to sniff out the device’s current orientation. Once we know whether a user is browsing in a portrait or landscape orientation, we can use it to enhance the UX for mobile devices by changing the UI accordingly. We can also use it to lock the screen in a certain position, which is useful for displaying videos and other full-screen elements that benefit from a wider viewport.

Using the global screen object, you can access various properties the screen uses to render a page, including the screen.orientation object. It has two properties:

  • type: The current screen orientation. It can be: "portrait-primary", "portrait-secondary", "landscape-primary", or "landscape-secondary".
  • angle: The current screen orientation angle. It can be any number from 0 to 360 degrees, but it’s normally set in multiples of 90 degrees (e.g., 0, 90, 180, or 270).

On mobile devices, if the angle is 0 degrees, the type is most often going to evaluate to "portrait" (vertical), but on desktop devices, it is typically "landscape" (horizontal). This makes the type property precise for knowing a device’s true position.

The screen.orientation object also has two methods:

  • .lock(): This is an async method that takes a type value as an argument to lock the screen.
  • .unlock(): This method unlocks the screen to its default orientation.

And lastly, screen.orientation comes with a "change" event so we can know when the orientation has changed.

Browser Support

Finding And Locking Screen Orientation

Let’s code a short demo using the Screen Orientation API to know the device’s orientation and lock it in its current position.

This can be our HTML boilerplate:

<main>
  <p>
    Orientation Type: <span class="orientation-type"></span>
    <br />
    Orientation Angle: <span class="orientation-angle"></span>
  </p>

  <button type="button" class="lock-button">Lock Screen</button>

  <button type="button" class="unlock-button">Unlock Screen</button>

  <button type="button" class="fullscreen-button">Go Full Screen</button>
</main>

On the JavaScript side, we inject the screen orientation type and angle properties into our HTML.

let currentOrientationType = document.querySelector(".orientation-type");
let currentOrientationAngle = document.querySelector(".orientation-angle");

currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;

Now, we can see the device’s orientation type and angle properties. On my laptop, they are "landscape-primary" and 0.

If we listen for the screen orientation’s change event, we can see how the values are updated each time the screen rotates.

screen.orientation.addEventListener("change", () => {
  currentOrientationType.textContent = screen.orientation.type;
  currentOrientationAngle.textContent = screen.orientation.angle;
});

To lock the screen, we need to first be in full-screen mode, so we will use another extremely useful feature: the Fullscreen API. Nobody wants a webpage to pop into full-screen mode without their consent, so we need transient activation (i.e., a user click) from a DOM element to work.

The Fullscreen API has two methods:

  1. Document.exitFullscreen() is used from the global document object,
  2. Element.requestFullscreen() makes the specified element and its descendants go full-screen.

We want the entire page to be full-screen so we can invoke the method from the root element at the document.documentElement object:

const fullscreenButton = document.querySelector(".fullscreen-button");

fullscreenButton.addEventListener("click", async () => {
  // If it is already in full-screen, exit to normal view
  if (document.fullscreenElement) {
    await document.exitFullscreen();
  } else {
    await document.documentElement.requestFullscreen();
  }
});

Next, we can lock the screen in its current orientation:

const lockButton = document.querySelector(".lock-button");

lockButton.addEventListener("click", async () => {
  try {
    await screen.orientation.lock(screen.orientation.type);
  } catch (error) {
    console.error(error);
  }
});

And do the opposite with the unlock button:

const unlockButton = document.querySelector(".unlock-button");

unlockButton.addEventListener("click", () => {
  screen.orientation.unlock();
});

Can’t We Check Orientation With a Media Query?

Yes! We can indeed check page orientation via the orientation media feature in a CSS media query. However, media queries compute the current orientation by checking if the width is “bigger than the height” for landscape or “smaller” for portrait. By contrast,

The Screen Orientation API checks the screen that is rendering the page, regardless of the viewport dimensions, making it resistant to inconsistencies that may crop up with page resizing.

You may have noticed how PWAs like Instagram and X force the screen to be in portrait mode even when the native system orientation is unlocked. It is important to notice that this behavior isn’t achieved through the Screen Orientation API, but by setting the orientation property on the manifest.json file to the desired orientation type.

2. Device Orientation API

Another API I’d like to poke at is the Device Orientation API. It provides access to a device’s gyroscope sensors to read the device’s orientation in space; something used all the time in mobile apps, mainly games. The API makes this happen with a deviceorientation event that triggers each time the device moves. It has the following properties:

  • event.alpha: Orientation along the Z-axis, ranging from 0 to 360 degrees.
  • event.beta: Orientation along the X-axis, ranging from -180 to 180 degrees.
  • event.gamma: Orientation along the Y-axis, ranging from -90 to 90 degrees.

Browser Support

Moving Elements With Your Device

In this case, we will make a 3D cube with CSS that can be rotated with your device! The full instructions I used to make the initial CSS cube are credited to David DeSandro and can be found in his introduction to 3D transforms.

To rotate the cube, we change its CSS transform properties according to the device orientation data:

const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");

const cube = document.querySelector(".cube");

window.addEventListener("deviceorientation", (event) => {
  currentAlpha.textContent = event.alpha;
  currentBeta.textContent = event.beta;
  currentGamma.textContent = event.gamma;

  cube.style.transform = `rotateX(${event.beta}deg) rotateY(${event.gamma}deg) rotateZ(${event.alpha}deg)`;
});

This is the result:
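
One caveat worth knowing: on iOS Safari, deviceorientation events only fire after the user grants motion and orientation access, and the permission prompt must be triggered by a user gesture. A minimal sketch (handleOrientation stands in for whatever handler updates your UI):

const enableOrientation = async () => {
  // Safari exposes a requestPermission method; other browsers don't need it
  if (typeof DeviceOrientationEvent.requestPermission === "function") {
    const result = await DeviceOrientationEvent.requestPermission();
    if (result !== "granted") return;
  }
  window.addEventListener("deviceorientation", handleOrientation);
};

// Call it from a user gesture, e.g. a button's click handler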

3. Vibration API

Let’s turn our attention to the Vibration API, which, unsurprisingly, allows access to a device’s vibrating mechanism. This comes in handy when we need to alert users with in-app notifications, like when a process is finished or a message is received. That said, we have to use it sparingly; no one wants their phone blowing up with notifications.

There’s just one method that the Vibration API gives us, and it’s all we need: navigator.vibrate().

vibrate() is available globally from the navigator object and takes an argument for how long a vibration lasts, in milliseconds. It can be either a single number or an array of numbers representing a pattern of alternating vibrations and pauses.

navigator.vibrate(200); // vibrate 200ms
navigator.vibrate([200, 100, 200]); // vibrate 200ms, wait 100, and vibrate 200ms.
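
Since support varies quite a bit across browsers, it's worth feature-detecting before calling it. A minimal sketch that simply does nothing on unsupported browsers:

const vibrate = (pattern) => {
  // navigator.vibrate doesn't exist in unsupporting browsers, so guard the call
  if ("vibrate" in navigator) {
    navigator.vibrate(pattern);
  }
};

vibrate([200, 100, 200]); // same pattern as above, but a safe no-op elsewhere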

Browser Support

Vibration API Demo

Let’s make a quick demo where the user inputs how many milliseconds they want their device to vibrate, with buttons to start and stop the vibration. Starting with the markup:

<main>
  <form>
    <label for="milliseconds-input">Milliseconds:</label>
    <input type="number" id="milliseconds-input" value="0" />
  </form>

  <button class="vibrate-button">Vibrate</button>
  <button class="stop-vibrate-button">Stop</button>
</main>

We’ll add an event listener for a click and invoke the vibrate() method:

const vibrateButton = document.querySelector(".vibrate-button");
const millisecondsInput = document.querySelector("#milliseconds-input");

vibrateButton.addEventListener("click", () => {
  navigator.vibrate(millisecondsInput.value);
});

To stop vibrating, we override the current vibration with a zero-millisecond vibration.

const stopVibrateButton = document.querySelector(".stop-vibrate-button");

stopVibrateButton.addEventListener("click", () => {
  navigator.vibrate(0);
});

4. Contact Picker API

It used to be that only native apps could connect to a device’s “contacts”. But now we have the fourth and final API I want to look at: the Contact Picker API.

The API grants web apps access to the device’s contact lists. Specifically, we get the contacts.select() async method available through the navigator object, which takes the following two arguments:

  • properties: This is an array containing the information we want to fetch from a contact card, e.g., "name", "address", "email", "tel", and "icon".
  • options: This is an object that can only contain the multiple boolean property to define whether or not the user can select one or multiple contacts at a time.

Browser Support

I’m afraid that browser support is next to zilch on this one, limited to Chrome Android, Samsung Internet, and Android’s native web browser at the time I’m writing this.

Selecting User’s Contacts

We will make another demo to select and display the user’s contacts on the page. Again, starting with the HTML:

<main>
  <button class="get-contacts">Get Contacts</button>
  <p>Contacts:</p>
  <ul class="contact-list">
    <!-- We’ll inject a list of contacts -->
  </ul>
</main>

Then, in JavaScript, we first grab our elements from the DOM and choose which properties we want to pick from the contacts.

const getContactsButton = document.querySelector(".get-contacts");
const contactList = document.querySelector(".contact-list");

const props = ["name", "tel", "icon"];
const options = {multiple: true};
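
Given how limited support is, it's also sensible to feature-detect before wiring anything up. A minimal sketch using the detection commonly suggested for this API; hiding the button is just one possible fallback:

// The Contact Picker API is only usable when both of these exist
if (!("contacts" in navigator && "ContactsManager" in window)) {
  getContactsButton.hidden = true; // or render an explanatory message instead
}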

Now, we asynchronously pick the contacts when the user clicks the getContactsButton.


const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Using DOM manipulation, we can then append a list item for each contact to the contactList element.

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

Appending an icon is a little trickier since it comes back as a Blob, so we need to convert it into an object URL before appending it to each item in the list.

const getIcon = (icon) => {
  // icon comes back as an array of Blobs; use the first one if present
  if (icon.length > 0) {
    const imageUrl = URL.createObjectURL(icon[0]);
    const imageElement = document.createElement("img");
    imageElement.src = imageUrl;

    return imageElement;
  }
};

const appendContacts = (contacts) => {
  contacts.forEach(({name, tel, icon}) => {
    const contactElement = document.createElement("li");

    contactElement.innerText = `${name}: ${tel}`;
    contactList.appendChild(contactElement);

    // getIcon() returns undefined when the contact has no icon, so guard the append
    const imageElement = getIcon(icon);
    if (imageElement) {
      contactElement.appendChild(imageElement);
    }
  });
};

const getContacts = async () => {
  try {
    const contacts = await navigator.contacts.select(props, options);
    appendContacts(contacts);
  } catch (error) {
    console.error(error);
  }
};

getContactsButton.addEventListener("click", getContacts);

And here’s the outcome:

Note: The Contact Picker API will only work if the context is secure, i.e., the page is served over https:// or wss:// URLs.

Conclusion

There we go, four web APIs that I believe would empower us to build more useful and robust PWAs but have slipped under the radar for many of us. This is, of course, largely due to inconsistent browser support, and I hope this article brings some awareness to these APIs so we have a better chance of seeing them in future browser updates.

Aren’t they interesting? We saw how much control we have over the orientation of a device and its screen, as well as the access we get to a device’s hardware features (e.g., vibration) and to information from other apps to use in our own UI.

But as I said much earlier, there’s a sort of infinite loop where a lack of awareness begets a lack of browser support. So, while the four APIs we covered are super interesting, your mileage will inevitably vary when it comes to using them in a production environment. Please tread cautiously and refer to Caniuse for the latest support information, or check for your own devices using WebAPI Check.

Chris Corner: Git it


Julia Evans has released what she’s saying is one of her most popular zines to date: How Git Works.

I don’t think you’d regret reading it. I imagine most of us get by with knowing just enough Git to do our jobs, but are probably using 5% of what it can really do. Being very strong with Git will almost surely benefit you in your career. Imagine helping a superior out of a sticky situation where it might look like code was lost or otherwise screwed up. Being the solution during an emotional time is clutch. Surely this pairs nicely with Oh Shit, Git!, a real classic from Katie Sylor-Miller, which I see has been revitalized with Julia here.


Just the other day here at CodePen Headquarters, I saw a co-worker solve an issue with git bisect. Have you even heard of that?! Imagine there is a bug in your code, but you have absolutely no idea when it happened or where in the code it might be. That’s not a good feeling, but it’s exactly where git bisect comes in. As best I understand it, it sets the HEAD of your repo back in time some amount, and there, you test if the bug is present and you can say git bisect good or git bisect bad. Then it moves the HEAD and you keep testing and eventually it gets closer and closer to the exact commit (or at least a range of commits) where the bug happened. Then you can look at the changed files in those commits and figure out where in the code the bug may have come from. So cool!


I certainly know developers who know Git and work with it exclusively at the command line, entirely as-provided. But I find it more common among the command line types that they at least have some aliases set up for the most common things they do. Those might be their own shell aliases, like they’ll make gco do a git checkout, but it’s worth knowing Git lets you define aliases within itself too, which could be good since they won’t conflict with anything else. (Have I told you how long I had cp aliased to move to our local CodePen project directory? 🤣)


A much more elaborate take on git aliases is called Gut. With it, you don’t git commit anything (with all the params and whatnot you have to also pass), you gut save which launches a little wizard that asks you questions, and then it does the proper git stuff with the information you give it.

I could see that being great for a beginner, but maybe feeling a little too slow as you get more comfortable at the command line. Except maybe when it comes to the more advanced stuff, which looks designed to get you out of binds. The fix and undo commands look awfully helpful and are the kind of thing where I can never remember the proper commands.


Paweł Grzybek lays out a classic situation:

Let’s say that we are halfway through the feature, intensely focused on a task, when a critical bug needs to be fixed out of the blue. Happens to us all the time! Should we stash the current changes? Should we quickly smash git add . && git commit -m "wip" and promise that we will sort this mess out later?

His answer is no, it’s using git worktrees. It solves the issue by literally making another copy of your project on disk. And you can open and work on it separately, but it all goes to the same repo ultimately. So you can leave your half-done uncommitted work on one worktree while you hop over to the other to do the urgent work. Me, I’m mostly cool with git stash to tuck stuff away while I go work on something else, or even just the ol’ “work in progress” or “saving work” commit like Paweł mentioned. It’s not pretty but culturally it’s fine on our project. But I can see how you could get into a groove with worktrees, particularly if your editor supports it nicely.


Phew! We probably talked about Git too much, eh? I know nobody cares. Now let’s go back to just doing the 3-4 commands we know every day, just with a few more resources in our pocket when we need them.

I gotta leave you with something else. (Digs through bag of hot links.) Ah, here we go. This video rules: Flash is dead so I rebuilt it with javascript. Andrew Jakubowicz walks us through building an interface with a pretty modern and lightweight set of tools. Andrew works for Google on Lit, so it’s sort of a big excuse to show off working with Web Components, but it’s a fun ride. At 8 minutes, more happens than in a typical hour-long video.

Scaling Success: Key Insights And Practical Takeaways


Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In Success at Scale, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.

Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build a roadmap for teams striving to achieve scalable success in the ever-evolving digital landscape.

Cultivating A Mindset For Scaling Success

The foundation of scaling success lies in fostering the right mindset within your team. The case studies in Success at Scale highlight several critical mindsets that permeate the culture of successful organizations.

User-Centricity

Successful teams prioritize the user experience above all else.

They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.

By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.

Data-Driven Decision Making

Scaling success relies on data, not assumptions.

Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. Shopify’s UI performance improvements showcase the power of data-driven optimization, using detailed profiling and user data to prioritize efforts and drive meaningful results.

By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.

Continuous Improvement

Scaling is an ongoing process, not a one-time achievement.

Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. Smashing Magazine’s case study on enhancing Core Web Vitals demonstrates the impact of iterative enhancements, leading to significant performance gains and improved user satisfaction.

By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.

Collaboration And Inclusivity

Silos hinder scalability.

High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. The Understood’s accessibility journey highlights the power of cross-functional collaboration, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.

By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed inclusive design practices throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.

Making Strategic Decisions for Scalability

Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.

Technology Choices

Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies the importance of choosing a technology stack that aligns with long-term scalability goals.

By adopting Next.js, Notion was able to leverage its performance optimizations, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.

Ship Only The Code A User Needs, When They Need It

This best practice is so important when we want to ensure that pages load fast without over-eagerly delivering JavaScript a user may not need at that time. For example, Instagram made a concerted effort to improve the web performance of instagram.com, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus has been shipping less JavaScript code to users, particularly on the critical rendering path.

The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.
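
To make the “inline requires” idea concrete, here's a rough equivalent in plain JavaScript using a dynamic import() so a module is only fetched and executed on first use rather than at startup. This is a minimal sketch with a hypothetical module, not Instagram's actual implementation:

let emojiPickerPromise;

const openEmojiPicker = async () => {
  // Load the (hypothetical) picker module the first time it's needed
  emojiPickerPromise ??= import("./emoji-picker.js");
  const { showPicker } = await emojiPickerPromise;
  showPicker();
};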

While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore could include the following:

  • Improved code-splitting, moving more logic off the critical path,
  • Optimizing scrolling performance,
  • Adapting to varying network conditions,
  • Modularizing their Redux state management.

Continued efforts will be needed to keep instagram.com performing well as new features are added and the product grows in complexity.

Accessibility Integration

Accessibility should be an integral part of the product development process, not an afterthought.

Wix’s comprehensive approach to accessibility, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.

By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, Wix was able to create a platform that empowered its users to build accessible websites. This holistic approach to accessibility not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.

Developer Experience Investment

Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.

Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.

By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck’s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.

Leveraging Performance Optimization Techniques

Achieving optimal performance is a critical aspect of scaling success. The case studies in Success at Scale showcase various performance optimization techniques that have proven effective.

Progressive Enhancement and Graceful Degradation

Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in Success at Scale highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.

By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.

Adaptive Loading Strategies

The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.

By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.

Efficient Resource Management

Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like lazy loading and responsive images to reduce page weight and improve load times.

By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.
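
As a simplified illustration of those two techniques, native lazy loading and responsive images can be combined in plain HTML. The URLs and sizes below are placeholders, not eBay's actual markup:

<img
  src="/images/listing-480.jpg"
  srcset="/images/listing-480.jpg 480w, /images/listing-960.jpg 960w"
  sizes="(max-width: 600px) 480px, 960px"
  loading="lazy"
  alt="Product photo"
/>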

Continuous Performance Monitoring

Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.

By establishing a performance monitoring infrastructure, Yahoo! Japan News gains visibility into the real-world performance experienced by their users. This data-driven approach allows them to identify performance regressions, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.

Embracing Accessibility and Inclusive Design

Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in Success at Scale emphasize the importance of accessibility and inclusive design.

Comprehensive Accessibility Testing

Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.

By leveraging tools like Deque’s axe and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues before they reach production. This proactive approach to accessibility testing not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes the importance of manual testing and user feedback in uncovering complex accessibility issues that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.

Inclusive Design Practices

Designing with accessibility in mind from the outset leads to more inclusive and usable products. Success at Scale’s case study on Intercom, about creating an accessible messenger, highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.

By embracing inclusive design principles, Intercom ensures that their messenger is usable by a wide range of users, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By designing with empathy and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.

User Research And Feedback

Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts to identify and address accessibility barriers effectively.

By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.

By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.

Accessibility As A Shared Responsibility

Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of educating and empowering teams to prioritize accessibility, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.

By providing accessibility training, guidelines, and resources to designers, developers, and content creators, Shopify ensures that accessibility is considered at every stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters a culture of inclusivity and empathy. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also sets an example for the wider industry on the importance of inclusive design.

Fostering A Culture of Collaboration And Knowledge Sharing

Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in Success at Scale highlight the impact of effective collaboration and knowledge management practices.

Cross-Functional Collaboration

Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.

By establishing a shared language and a set of reusable components, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also accelerates the development process by reducing duplication of effort and promoting code reuse.

Knowledge Sharing And Documentation

Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. Stripe’s investment in internal frameworks and documentation exemplifies the value of creating a shared understanding and facilitating knowledge transfer.

By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only reduces the learning curve for new hires but also promotes consistency and adherence to established patterns and practices. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.

Communities Of Practice

Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward.

By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters a sense of community and collective ownership. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.

Leveraging Open Source And External Expertise

Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. Pinafore’s journey highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.

By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.

The Path To Sustainable Success

Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The Success at Scale book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.

By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.

Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.

Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.

Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.

The case studies featured in Success at Scale serve as powerful examples of how these principles and strategies can be applied in real-world contexts. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.

As you embark on your path to scaling success, remember that it is an ongoing process of iteration, learning, and adaptation. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the Success at Scale book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.

Conclusion

Scaling successful web products requires a holistic approach that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the Success at Scale book, teams can gain valuable insights and practical guidance on their journey towards sustainable success.

Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability. By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.

Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.

Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Get Print + eBook

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Get Print + eBook

Interface Design Checklists

100 practical cards for common interface design challenges.

Get Print + eBook