The Difference Between Web Sockets, Web Workers, and Service Workers

Web Sockets, Web Workers, Service Workers… these are terms you may have read or overheard. Maybe not all of them, but likely at least one of them. And even if you have a good handle on front-end development, there’s a good chance you need to look up what they mean. Or maybe you’re like me and mix them up from time to time. The terms all look and sound awfully similar and it’s really easy to get them confused.

So, let’s break them down together and distinguish Web Sockets, Web Workers, and Service Workers. Not in the nitty-gritty sense where we do a deep dive and get hands-on experience with each one — more like a little helper to bookmark the next time you need a refresher.

Quick reference

We’ll start with a high-level overview for a quick compare and contrast.

  • Web Socket: Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events.
  • Web Worker: Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread.
  • Service Worker: A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations.

Web Sockets

A Web Socket is a two-way communication protocol. Think of this like an ongoing call between you and your friend that won’t end unless one of you decides to hang up. The only difference is that you are the browser and your friend is the server. The client sends a request to the server and the server responds by processing the client’s request and vice-versa.

Illustration of two women representing the browser and server, respectively. Arrows between them show the flow of communication in an active connection.

The communication is based on events. A WebSocket object is created and connects to a server, and messages passed between the client and server trigger events that send and receive them.

This means that when the initial connection is made, we have a client-server communication where a connection is initiated and kept alive until either the client or server chooses to terminate it by sending a CloseEvent. That makes Web Sockets ideal for applications that require continuous and direct communication between a client and a server. Most definitions I’ve seen call out chat apps as a common use case — you type a message, send it to the server, trigger an event, and the server responds with data without having to ping the server over and again.
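To get a feel for the shape of that API, here is a minimal browser-side sketch (the URL and message contents are placeholders, not from the article):

// Open a persistent, two-way connection (placeholder URL)
const socket = new WebSocket('wss://example.com/chat');

// The connection is open and ready to send messages
socket.addEventListener('open', () => {
  socket.send('Hello from the browser!');
});

// Messages from the server arrive as events
socket.addEventListener('message', (event) => {
  console.log('Received:', event.data);
});

// Either side can hang up, which fires a CloseEvent
socket.addEventListener('close', (event) => {
  console.log('Connection closed with code', event.code);
});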

Consider this scenario: You’re on your way out and you decide to switch on Google Maps. You probably already know how Google Maps works, but if you don’t, it finds your location automatically after you connect to the app and keeps track of it wherever you go. It uses real-time data transmission to keep track of your location as long as this connection is alive. That’s a Web Socket establishing a persistent two-way conversation between the browser and server to keep that data up to date. A sports app with real-time scores might also make use of Web Sockets this way.

The big difference between Web Sockets and Web Workers (and, by extension as we’ll see, Service Workers) is that Web Sockets have direct access to the DOM. Whereas Web Workers (and Service Workers) run on separate threads, Web Sockets are part of the main thread, which gives them the ability to manipulate the DOM.

There are tools and services to help establish and maintain Web Socket connections, including: SocketCluster, AsyncAPI, cowboy, WebSocket King, Channels, and Gorilla WebSocket. MDN has a running list that includes other services.

More Web Sockets information

Web Workers

Consider a scenario where you need to perform a bunch of complex calculations while at the same time making changes to the DOM. JavaScript is single-threaded, so running more than one heavy script at once can disrupt both the user interface you are trying to change and the complex calculation being performed.

This is where the Web Workers come into play.

Web Workers allow scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. That makes them great for enhancing the performance of applications that require intensive operations, since those operations can be performed in the background on separate threads without blocking the user interface from rendering. But they cannot access the DOM because, unlike Web Sockets, a Web Worker runs outside the main thread in its own thread.

A Web Worker is created with the Worker object, which executes a separate script file to carry out its tasks (there’s a small sketch after the list below). And when we talk about workers, they tend to fall into one of three types:

  • Dedicated Workers: A dedicated worker is only reachable by the script that created it. It still handles the typical Web Worker jobs, such as running scripts on a separate thread.
  • Shared Workers: A shared worker is the opposite of a dedicated worker. It can be accessed by multiple scripts and can perform practically any task a Web Worker can, as long as those scripts are served from the same domain as the worker.
  • Service Workers: A service worker acts as a network proxy between an app, the browser, and the server, allowing scripts to keep running even when the network goes offline. We’re going to get to this in the next section.
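To make that concrete, here is a minimal dedicated worker sketch (the file names and message shapes are assumptions, not from the article):

// main.js (runs on the main thread)
const worker = new Worker('heavy-math.js');

// Hand the worker some work and listen for the result
worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.addEventListener('message', (event) => {
  console.log('Sum from worker:', event.data);
});

// heavy-math.js (runs on its own thread, with no DOM access)
self.addEventListener('message', (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
});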

More Web Workers information

Service Workers

There are some things we have no control over as developers, and one of those things is a user’s network connection. Whatever network a user connects to is what it is. We can only do our best to optimize our apps so they perform the best they can on any connection that happens to be used.

Service Workers are one of the things we can do to progressively enhance an app’s performance. A service worker sits between the app, the browser, and the server, providing a secure connection that runs in the background on a separate thread, thanks to — you guessed it — Web Workers. As we learned in the last section, Service Workers are one of three types of Web Workers.

So, why would you ever need a service worker sitting between your app and the user’s browser? Again, we have no control over the user’s network connection. Say the connection gives out for some unknown reason. That would break communication between the browser and the server, preventing data from being passed back and forth. A service worker maintains the connection, acting as an async proxy that is capable of intercepting requests and executing tasks — even after the network connection is lost.

A gear cog icon labeled Service Worker in between a browser icon labeled client and a cloud icon labeled server.

This is the main driver of what’s often referred to as “offline-first” development. We can store assets in the local cache instead of the network, provide critical information if the user goes offline, prefetch things so they’re ready when the user needs them, and provide fallbacks in response to network errors. They’re fully asynchronous but, unlike Web Sockets, they have no access to the DOM since they run on their own threads.

The other big thing to know about Service Workers is that they intercept every single request and response from your app. As such, they have some security implications, most notably that they follow a same-origin policy. So, that means no running a service worker from a CDN or third-party service. They also require a secure HTTPS connection, which means you’ll need an SSL certificate for them to run.

More Service Workers information

Wrapping up

That’s a super high-level explanation of the differences (and similarities) between Web Sockets, Web Workers, and Service Workers. Again, the terminology and concepts are similar enough to mix one up with another, but hopefully, this gives you a better idea of how to distinguish them.

We kicked things off with a quick reference table. Here’s the same thing, but slightly expanded to draw thicker comparisons.

  • Web Socket: Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events. Runs on the main thread; HTTPS not required; has DOM access.
  • Web Worker: Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. Runs on a separate thread; HTTPS required; no DOM access.
  • Service Worker: A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations. Runs on a separate thread; HTTPS required; no DOM access.


Collective #715


Meet Web Push

WebKit now supports the W3C standards for Push API, Notifications API, and Service Workers to enable Web Push.

Read it


GitNoter

GitNoter is a web application that allows users to store notes in their git repository.

Check it out



Orbit Gallery

Infinite orbit gallery made with THREE.js by Michal Zalobny, based on Luis Bizarro’s Awwwards course. The source code is also available.

Check it out



Monorepos in JavaScript & TypeScript

A tutorial on how to use a monorepo architecture in frontend JavaScript and TypeScript with tools like npm/yarn/pnpm workspaces, Turborepo/NX/Lerna, Git Submodules and more.

Read it


Redactle

A daily puzzle game where you have to find the title of a random Wikipedia article by guessing words to reveal them on the page.

Check it out



Add a Service Worker to Your Site

One of the best things you can do for your website in 2022 is add a service worker, if you don’t have one in place already. Service workers give your website super powers. Today, I want to show you some of the amazing things that they can do, and give you a paint-by-numbers boilerplate that you can use to start using them on your site right away.

What are service workers?

A service worker is a special type of JavaScript file that acts like middleware for your site. Any request that comes from the site, and any response it gets back, first goes through the service worker file. Service workers also have access to a special cache where they can save responses and assets locally.

Together, these features allow you to…

  • Serve frequently accessed assets from your local cache instead of the network, reducing data usage and improving performance.
  • Provide access to critical information (or even your entire site or app) when the visitor goes offline.
  • Prefetch important assets and API responses so they’re ready when the user needs them.
  • Provide fallback assets in response to HTTP errors.

In short, service workers allow you to build faster and more resilient web experiences.

Unlike regular JavaScript files, service workers do not have access to the DOM. They also run on their own thread, and as a result, don’t block other JavaScript from running. Service workers are designed to be fully asynchronous.

Security

Because service workers intercept every request and response for your site or app, they have some important security limitations.

Service workers follow a same-origin policy.

You can’t run your service worker from a CDN or third party. It has to be hosted at the same domain as where it will be run.

Service workers only work on sites with an installed SSL certificate.

Many web hosts provide SSL certificates at no cost or for a small fee. If you’re comfortable with the command line, you can also install one for free using Let’s Encrypt.

There is an exception to the SSL certificate requirement for localhost testing, but you can’t run your service worker from the file:// protocol. You need to have a local server running.

Adding a service worker to your site or web app

To use a service worker, the first thing we need to do is register it with the browser. You can register a service worker using the navigator.serviceWorker.register() method. Pass in the path to the service worker file as an argument.

navigator.serviceWorker.register('sw.js');

You can run this in an external JavaScript file, but I prefer to run it directly in a script element inline in my HTML so that it runs as soon as possible.

Unlike other types of JavaScript files, service workers only work for the directory in which they exist (and any of its sub-directories). A service worker file located at /js/sw.js would only work for files in the /js directory. As a result, you should place your service worker file inside the root directory of your site.

While service workers have fantastic browser support, it’s a good idea to make sure the browser supports them before running your registration script.

if (navigator && navigator.serviceWorker) {
  navigator.serviceWorker.register('sw.js');
}

After the service worker installs, the browser can activate it. Typically, this only happens when…

  • there is no service worker currently active, or
  • the user refreshes the page.

The service worker won’t run or intercept requests until it’s activated.
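If you want a hook at that point, here is a minimal sketch of an activate listener (the cache name is an assumption, chosen to match the examples later in this article) that deletes outdated caches and takes control of open pages:

// Listen for the activate event
self.addEventListener('activate', function (event) {
	event.waitUntil(
		// Delete any caches this worker no longer uses
		caches.keys().then(function (keys) {
			return Promise.all(keys.filter(function (key) {
				return key !== 'app';
			}).map(function (key) {
				return caches.delete(key);
			}));
		}).then(function () {
			// Start controlling already-open pages right away
			return self.clients.claim();
		})
	);
});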

Listening for requests in a service worker

Once the service worker is active, it can start intercepting requests and running other tasks. We can listen for requests with self.addEventListener() and the fetch event.

// Listen for request events
self.addEventListener('fetch', function (event) {
  // Do stuff...
});

Inside the event listener, the event.request property is the request object itself. For ease, we can save it to the request variable.

Certain versions of the Chromium browser have a bug that throws an error if the page is opened in a new tab. Fortunately, there’s a simple fix from Paul Irish that I include in all of my service workers, just in case:

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

});

Once your service worker is active, every single request is sent through it, and will be intercepted with the fetch event.

Service worker strategies

Once your service worker is installed and activated, you can intercept requests and responses, and handle them in various ways. There are two primary strategies you can use in your service worker:

  1. Network-first. With a network-first approach, you pass along requests to the network. If the request isn’t found, or there’s no network connectivity, you then look for the request in the service worker cache.
  2. Offline-first. With an offline-first approach, you check for a requested asset in the service worker cache first. If it’s not found, you send the request to the network.

Network-first and offline-first approaches work in tandem. You will likely mix-and-match approaches depending on the type of asset being requested.

Offline-first is great for large assets that don’t change very often: CSS, JavaScript, images, and fonts. Network-first is a better fit for frequently updated assets like HTML and API requests.

Strategies for caching assets

How do you get assets into your browser’s cache? You’ll typically use two different approaches, depending on the types of assets.

  1. Pre-cache on install. Every site and web app has a set of core assets that are used on almost every page: CSS, JavaScript, a logo, favicon, and fonts. You can pre-cache these during the install event, and serve them using an offline-first approach whenever they’re requested.
  2. Cache as you browse. Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them.

You can then serve those cached assets, either by default or as a fallback, depending on your approach.

Implementing network-first and offline-first strategies in your service worker

Inside a fetch event in your service worker, the request.headers.get('Accept') method returns the MIME type for the content. We can use that to determine what type of file the request is for. MDN has a list of common files and their MIME types. For example, HTML files have a MIME type of text/html.

We can pass the type of file we’re looking for into the String.includes() method as an argument, and use if statements to respond in different ways based on the file type.

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

  // HTML files
  // Network-first
  if (request.headers.get('Accept').includes('text/html')) {
    // Handle HTML files...
    return;
  }

  // CSS & JavaScript
  // Offline-first
  if (request.headers.get('Accept').includes('text/css') || request.headers.get('Accept').includes('text/javascript')) {
    // Handle CSS and JavaScript files...
    return;
  }

  // Images
  // Offline-first
  if (request.headers.get('Accept').includes('image')) {
    // Handle images...
  }

});

Network-first

Inside each if statement, we use the event.respondWith() method to modify the response that’s sent back to the browser.

For assets that use a network-first approach, we use the fetch() method, passing in the request, to pass through the request for the HTML file. If it returns successfully, we’ll return the response in our callback function. This is the same behavior as not having a service worker at all.

If there’s an error, we can use Promise.catch() to modify the response instead of showing the default browser error message. We can use the caches.match() method to look for that page, and return it instead of the network response.

// Send the request to the network first
// If it's not found, look in the cache
event.respondWith(
  fetch(request).then(function (response) {
    return response;
  }).catch(function (error) {
    return caches.match(request).then(function (response) {
      return response;
    });
  })
);

Offline-first

For assets that use an offline-first approach, we’ll first check inside the browser cache using the caches.match() method. If a match is found, we’ll return it. Otherwise, we’ll use the fetch() method to pass the request along to the network.

// Check the cache first
// If it's not found, send the request to the network
event.respondWith(
  caches.match(request).then(function (response) {
    return response || fetch(request).then(function (response) {
      return response;
    });
  })
);

Pre-caching core assets

Inside an install event listener in the service worker, we can use the caches.open() method to open a service worker cache. We pass in the name we want to use for the cache, app, as an argument.

The cache is scoped and restricted to your domain. Other sites can’t access it, and if they have a cache with the same name the contents are kept entirely separate.

The caches.open() method returns a Promise. If a cache already exists with this name, the Promise will resolve with it. If not, it will create the cache first, then resolve.

// Listen for the install event
self.addEventListener('install', function (event) {
  event.waitUntil(caches.open('app'));
});

Next, we can chain a then() method to our caches.open() method with a callback function.

In order to add files to the cache, we need to request them, which we can do with the new Request() constructor. We can use the cache.add() method to add the file to the service worker cache. Then, we return the cache object.

We want the install event to wait until we’ve cached our file before completing, so let’s wrap our code in the event.waitUntil() method:

// Listen for the install event
self.addEventListener('install', function (event) {

  // Cache the offline.html page
  event.waitUntil(caches.open('app').then(function (cache) {
    cache.add(new Request('offline.html'));
    return cache;
  }));

});

I find it helpful to create an array with the paths to all of my core files. Then, inside the install event listener, after I open my cache, I can loop through each item and add it.

let coreAssets = [
  '/css/main.css',
  '/js/main.js',
  '/img/logo.svg',
  '/img/favicon.ico'
];

// On install, cache some stuff
self.addEventListener('install', function (event) {

  // Cache core assets
  event.waitUntil(caches.open('app').then(function (cache) {
    for (let asset of coreAssets) {
      cache.add(new Request(asset));
    }
    return cache;
  }));

});

Cache as you browse

Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them. On subsequent visits, you can load them directly from cache (with an offline-first approach) or serve them as a fallback if the network fails (using a network-first approach).

When a fetch() method returns a successful response, we can use the Response.clone() method to create a copy of it.

Next, we can use the caches.open() method to open our cache. Then, we’ll use the cache.put() method to save the copied response to the cache, passing in the request and copy of the response as arguments. Because this is an asynchronous function, we’ll wrap our code in the event.waitUntil() method. This prevents the event from ending before we’ve saved our copy to cache. Once the copy is saved, we can return the response as normal.

Note: We use cache.put() instead of cache.add() because we already have a response. Using cache.add() would make another network call.

// HTML files
// Network-first
if (request.headers.get('Accept').includes('text/html')) {
  event.respondWith(
    fetch(request).then(function (response) {

      // Create a copy of the response and save it to the cache
      let copy = response.clone();
      event.waitUntil(caches.open('app').then(function (cache) {
        return cache.put(request, copy);
      }));

      // Return the response
      return response;

    }).catch(function (error) {
      return caches.match(request).then(function (response) {
        return response;
      });
    })
  );
}

Putting it all together

I’ve put together a copy-paste boilerplate for you on GitHub. Add your core assets to the coreAssets array, and register it on your site to get started.

If you do nothing else, this will be a huge boost to your site in 2022.

But there’s so much more you can do with service workers. There are advanced caching strategies for APIs. You can provide an offline page with critical information if a visitor loses their network connection. You can clean up bloated caches as the user browses.

Jeremy Keith’s book, Going Offline, is a great primer on service workers. If you want to take things to the next level and dig into progressive web apps, Jason Grigsby’s book dives into the various strategies you can use.

And for a pragmatic deep dive you can complete in about an hour, I also have a course and ebook on service workers with lots of code examples and a project you can work on.

How To Optimize Progressive Web Apps: Going Beyond The Basics

Progressive web applications (PWA) are still gathering popularity in 2020. It doesn’t come as a surprise, considering the benefits of higher conversion rates, customer engagement, decreased page load times, and lower development and overhead costs.

We can see respected companies also enjoying success with their PWAs, such as Twitter, Uber, Tinder, Pinterest, and Forbes. And they all boast about massive benefits from implementing their progressive apps.

The good news is that developing a PWA isn’t something that only big-budget companies can afford. These applications serve small and average businesses equally and are not that complicated to create.

You can find a comprehensive Beginner’s Guide To Progressive Web Apps on Smashing Magazine that focuses on building the core of PWAs.

However, let’s take a step further and learn how to add modern qualities to PWAs, such as offline functionality, network-based optimization, cross-device user experience, SEO capabilities, and non-intrusive notifications and requests. You’ll also find example code or references to more specific guides so you can implement these tips in your PWA.

A Quick Overview Of Progressive Web Applications (PWA)

Let’s not skip the basics and quickly go over the heart of PWAs.

What Is A PWA?

“Progressive Web Apps (PWA) are built and enhanced with modern APIs to deliver enhanced capabilities, reliability, and installability while reaching anyone, anywhere, on any device with a single codebase.”

Google’s developers

In other words, PWAs are websites that users can use as stand-alone applications. They are different from native apps mainly because PWAs don’t require installation and can be used with various devices — native apps are primarily built for mobile devices.

How Do PWAs Work?

The core of a PWA consists of three components: a web app manifest, service workers, and an application shell. You can find detailed instructions for building those in the beginner’s guide mentioned above.

Here’s what these components do.

Web App Manifest

The web app manifest is at the core of making a website run as a stand-alone application in full-screen mode. You can define how the PWA looks, optimize it for different devices, and assign an icon that’s displayed after the application’s installation.
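As a rough illustration, a minimal manifest (referenced from the page with <link rel="manifest" href="/manifest.json">) might look like this; every name, color, and path below is a made-up placeholder:

{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/img/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/img/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}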

Service Workers

The service workers enable the offline usage of the PWA by fetching cached data or informing the user about the absence of an Internet connection. The service workers also retrieve the latest data once the server connection is restored.

Application shell architecture

The application shell is what the users see when they access a PWA. It’s the minimum HTML, CSS, and JavaScript that’s required to power the user interface. When developing a PWA, you can cache the application shell’s resources and assets in the browser.

Deploying Modern Characteristics To Your PWA

On top of the core features, modern PWAs embrace additional traits that further drive their users towards a more extraordinary user experience.

Let’s look at some specific modern characteristics of a PWA and learn about adding these to your PWA. The following qualities are considered great additions to the basic PWA by Google developers.

The Application Works Offline As It Does Online

When building your PWA, you can also develop a custom offline page as part of the core. However, it’s much more user-friendly if your PWA continues to function even without an Internet connection — up to a certain point where the connection becomes necessary. Otherwise, the user experience could be as frustrating as Ankita Masand’s ordeal with ordering a cake as she describes in her article about the pain points of PWAs.

You can achieve a better user experience by using cached content, background syncing, and skeleton screens. Let’s look at each one.

Cached content with IndexedDB

IndexedDB is an in-browser NoSQL storage system that you can use to cache and retrieve the required data to make your PWA work offline.

However, not all browsers support IndexedDB, so the first thing you want to do is check if the user’s browser supports it.

if (!('indexedDB' in window)) {
  console.log('This browser doesn\'t support IndexedDB');
  return;
}

After this, you can create cached content with the IndexedDB API. Here’s an example from Google developers of opening a database, adding an object store, and adding an item to this store.

var db;

var openRequest = indexedDB.open('test_db', 1);

openRequest.onupgradeneeded = function(e) {
  var db = e.target.result;
  console.log('running onupgradeneeded');
  if (!db.objectStoreNames.contains('store')) {
    var storeOS = db.createObjectStore('store',
      {keyPath: 'name'});
  }
};
openRequest.onsuccess = function(e) {
  console.log('running onsuccess');
  db = e.target.result;
  addItem();
};
openRequest.onerror = function(e) {
  console.log('onerror!');
  console.dir(e);
};

function addItem() {
  var transaction = db.transaction(['store'], 'readwrite');
  var store = transaction.objectStore('store');
  var item = {
    name: 'banana',
    price: '$2.99',
    description: 'It is a purple banana!',
    created: new Date().getTime()
  };

  var request = store.add(item);

  request.onerror = function(e) {
    console.log('Error', e.target.error.name);
  };
  request.onsuccess = function(e) {
    console.log('Woot! Did it');
  };
}

Background sync

If your PWA syncs data in the background, the user can take actions while offline, which are then executed when the Internet connection is restored. A simple example is a messaging app. A user can send a message when offline without the need to wait until it is sent — the background sync automatically sends the message once the connection is restored.

Here’s an example of how to develop a background sync feature by Jake Archibald.

// Register your service worker:
navigator.serviceWorker.register('/sw.js');

// Then later, request a one-off sync:
navigator.serviceWorker.ready.then(function(swRegistration) {
  return swRegistration.sync.register('myFirstSync');
});

Then listen for the event in /sw.js:

self.addEventListener('sync', function(event) {
  if (event.tag == 'myFirstSync') {
    event.waitUntil(doSomeStuff());
  }
});

Skeleton screens

One of the main perks of using skeleton screens is that users perceive the application as working rather than sitting idle. While the user doesn’t have a connection, the skeleton screen draws out the interface without content — which then fills in once the connection is restored.

Code My UI has some excellent code snippets available that you can use to create a skeleton screen for your PWA.
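As a rough idea, a skeleton placeholder can be as simple as a few grey blocks with a subtle pulse animation (the class names and sizes below are made up for illustration):

/* Grey placeholder blocks shown while the real content loads */
.skeleton {
  background: #e0e0e0;
  border-radius: 4px;
  animation: skeleton-pulse 1.5s ease-in-out infinite;
}

.skeleton--title { width: 60%; height: 1.5rem; margin-bottom: 0.75rem; }
.skeleton--text { width: 100%; height: 1rem; margin-bottom: 0.5rem; }

@keyframes skeleton-pulse {
  0%, 100% { opacity: 1; }
  50% { opacity: 0.4; }
}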

Optimizing Based On Network Usage

A central benefit of a PWA is that it provides a faster experience for users. You can further optimize the loading speed by using cache-first networking, prioritizing resources, and applying adaptive loading based on network quality.

Let’s look at how you can add these to your PWA.

Cache first, then network

Using the cached content first further enables your PWA to function offline and paves the way for users to access the content even on low network coverage areas. You can do so by creating a service worker to cache the content and then fetching it.

Here’s an example from Jeff Posnick on caching static HTML using service workers.

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    // See /web/fundamentals/getting-started/primers/async-functions
    // for an async/await primer.
    event.respondWith(async function() {
      // Optional: Normalize the incoming URL by removing query parameters.
      // Instead of https://example.com/page?key=value,
      // use https://example.com/page when reading and writing to the cache.
      // For static HTML documents, it's unlikely your query parameters will
      // affect the HTML returned. But if you do use query parameters that
      // uniquely determine your HTML, modify this code to retain them.
      const normalizedUrl = new URL(event.request.url);
      normalizedUrl.search = '';

      // Create promises for both the network response,
      // and a copy of the response that can be used in the cache.
      const fetchResponseP = fetch(normalizedUrl);
      const fetchResponseCloneP = fetchResponseP.then(r => r.clone());

      // event.waitUntil() ensures that the service worker is kept alive
      // long enough to complete the cache update.
      event.waitUntil(async function() {
        const cache = await caches.open('my-cache-name');
        await cache.put(normalizedUrl, await fetchResponseCloneP);
      }());

      // Prefer the cached response, falling back to the fetch response.
      return (await caches.match(normalizedUrl)) || fetchResponseP;
    }());
  }
});

Prioritizing resources

By default, PWAs perform better than comparable native apps due to their lightweight nature. Furthermore, since PWAs use the browser’s cache, it’s possible to indicate which resources take priority and need to be rendered even before they are used. This mainly works with static elements, as dynamic content needs updating before it’s fetched.

You can specify the priority of elements by using the <link> tag in the HTML. You can also hint at third-party servers by using rel="preconnect" and rel="dns-prefetch".

Maximiliano Firtman gives a simple example of this by prioritizing Web Fonts within the browser’s engine:

<link rel="preload" as="font" href="font.woff" crossorigin>

Implementing adaptive loading

The Internet speeds of WiFi and 4G aren’t accessible everywhere, and users still get on the Internet with 2G and 3G connections. Since you want your PWA to be accessible by as many people as possible, you might want to optimize it to perform at lower Internet speeds as well.

You can achieve this by implementing adaptive loading, which loads the PWA’s elements based on the connection type that the user has.

The most straightforward way is to use Google’s Workbox tool, which includes numerous ready-made plugins for caching strategies.

Suppose you want to define a custom caching strategy. Here’s an example of how you can do it, from Demian Renzulli and Jeff Posnick:

const adaptiveLoadingPlugin = {
  requestWillFetch: async ({request}) => {
    const urlParts = request.url.split('/');
    let imageQuality;

    switch (
      navigator && navigator.connection
        ? navigator.connection.effectiveType
        : ''
    ) {
      //...
      case '3g':
        imageQuality = 'q_30';
        break;
      //...
    }

    // Insert the quality segment before the file name, then rebuild the URL
    urlParts.splice(urlParts.length - 1, 0, imageQuality);
    const newUrl = urlParts.join('/').replace('.jpg', '.png');
    const newRequest = new Request(newUrl, {headers: request.headers});

    return newRequest;
  },
};

Next, pass the plugin to a cacheFirst strategy containing a regular expression to match image URLs (e.g. /img/):

workbox.routing.registerRoute(
  new RegExp('/img/'),
  workbox.strategies.cacheFirst({
    cacheName: 'images',
    plugins: [
      adaptiveLoadingPlugin,
      new workbox.expiration.Plugin({
        maxEntries: 50,
        purgeOnQuotaError: true,
      }),
    ],
  }),
);

Excellent User Experience On All Platforms

An excellent PWA works seamlessly on browsers, mobile, and tablet. While using an Android device is the most popular way (with a 38.9% market share) of accessing the Internet, optimizing your application for all platforms is part of developing the PWA’s core functions.

You can take further steps to increase usability and offer a great user experience, such as reducing jumpiness when your PWA loads and making sure your PWA works with any input method.

Here’s how you can approach each of those aspects.

Reducing “jumpy” content loading

Even with high-speed Internet, the site’s content can shift while loading because the site’s elements load in order. This effect is even worse with slower connection speeds and badly hurts the user’s experience.

The most common elements causing the content to shift while loading are images since these are generally larger and not a priority when loading content. You can tackle this issue with “lazy loading” by using smaller placeholder images that you can prioritize after the HTML structure is rendered.

Here’s an example by Mozilla’s developers on how you can add a lightweight image that’s loaded first before the actual image in JavaScript:

<img src='data/img/placeholder.png' data-src='data/img/SLUG.jpg' alt='NAME'>

The app.js file processes the data-src attributes like so:

let imagesToLoad = document.querySelectorAll('img[data-src]');
const loadImages = (image) => {
  image.setAttribute('src', image.getAttribute('data-src'));
  image.onload = () => {
    image.removeAttribute('data-src');
  };
};

And then create a loop:

imagesToLoad.forEach((img) => {
  loadImages(img);
});

You can also check out a thorough guide on Smashing Magazine about reducing content jumping with other elements.

PWA works with any input method

We’ve covered how PWAs should work with a variety of different devices. To take a step further, you also need to account for other input methods the users can use on these devices, such as touch, mouse, and stylus.

Adding the Pointer Events API to your PWA largely solves this problem. Here’s how you can approach it according to Google’s developers.

First, check if the browser supports the Pointer Events:

if (window.PointerEvent) {
  // Yay, we can use pointer events!
} else {
  // Back to mouse and touch events, I guess.
}

Next, you can define the actions various input methods can take:

// Inside a pointer event handler, where `ev` is the pointer event object
switch(ev.pointerType) {
  case 'mouse':
    // Do nothing.
    break;
  case 'touch':
    // Allow drag gesture.
    break;
  case 'pen':
    // Also allow drag gesture.
    break;
  default:
    // Getting an empty string means the browser doesn't know
    // what device type it is. Let's assume mouse and do nothing.
    break;
}

Since most browsers already have touch-enabled features, you won’t need to add anything else.

Discoverable Through Search

One of a PWA’s key benefits over a native application is that a PWA is a website by nature, so search engines can index it. This allows you to deploy SEO strategies to make your PWA content more discoverable.

You can start by ensuring that each URL in your PWA has a unique, descriptive title and meta description, which is the baseline of any SEO optimization activity.
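For instance, something along these lines in each page head (the values here are just placeholders):

<head>
  <title>Simple Banana Bread Recipe | Example Bakery</title>
  <meta name="description" content="A step-by-step banana bread recipe with preparation times, an ingredient list, and baking tips.">
</head>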

Let’s look at some other steps you can take to make your PWA searchable.

Analyze the findability of your PWA

Google has an excellent tool in its Search Console that analyses your site (PWA) and reports the results. You can use it to run a basic scan of your site and uncover any weak spots that you can then start fixing.

Alternatively, you can use Lighthouse in the Chrome browser to run an SEO audit.

First, navigate to the target URL. Then press Control+Shift+J (or Command+Option+J on Mac) to open the developer tools. Choose the Lighthouse tab, tick the SEO category box, and generate the report.

Use structured data

Google’s search engine uses structured data to understand the content’s purpose on your webpage.

Structured data is a standardized format for providing information about a page and classifying the page content; for example, on a recipe page, what are the ingredients, the cooking time and temperature, the calories, and so on.
Google
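As a rough illustration of the recipe example above, a JSON-LD block embedded in the page might look like this (all values are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Banana Bread",
  "recipeIngredient": ["3 ripe bananas", "250g flour", "2 eggs"],
  "cookTime": "PT1H",
  "nutrition": {
    "@type": "NutritionInformation",
    "calories": "270 calories"
  }
}
</script>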

Before you start coding, Google has also put together a useful list of common structured data errors and relevant guidelines to fix them. Studying this material should give you a good baseline of what to avoid.

Frederick O’Brien has written an excellent guide on Smashing Magazine, Baking Structured Data Into The Design Process, which describes how to build structured data from the start.

User-friendly Notifications And Permission Requests

Last but not least, you can improve the user experience by optimizing the notifications and permission requests, so they serve your users — as opposed to just being confusing and annoying.

While you can generally rely on common sense, there are practical tips that you can implement as well, such as creating non-intrusive push notifications and giving the user an option to unsubscribe from messages.

Subtle permission requests for communication

There are two modern ways to automate the communication between a website and a user — chatbots and notifications.

In a PWA’s context, the main perk of a chatbot is that it doesn’t require the user’s permission to interact with the user. However, depending on which chatbot application you use, the user might miss the subtle message. Notifications, on the other hand, require the user’s permission but are much more visible.

Since you can add a chatbot as a separate third-party application, let’s focus on creating a user-friendly push notification. In case you need a guide on how to create a push notification in the first place, here’s a great one by Indrek Lasn.

The most straightforward way to create a non-intrusive permission request is by using a double request. This means that you add a custom interaction to your site on top of the default one from the user’s OS.

Matt Gaunt offers perfect illustrations for this effect with the following images.

Here’s the default notification permission request that provides no context:

And here’s the custom interaction added before the default notification permission described above:

By adding a custom alert before the OS’s default one, you can describe the purpose of the notification more clearly to the user. This increases the chance of the user opting in for your site’s notifications.
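A minimal version of that double-request pattern might look like this (the element ID and wording are assumptions, not taken from Matt Gaunt’s example):

// Show our own in-page explanation first
document.querySelector('#enable-notifications').addEventListener('click', function () {
  const wantsThem = confirm('We would like to notify you when your order ships. Enable notifications?');

  // Only trigger the browser's permission prompt if the user agreed
  if (wantsThem) {
    Notification.requestPermission().then(function (permission) {
      console.log('Notification permission:', permission);
    });
  }
});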

Enable the user to opt-out from notifications

For a user, disabling the push notifications of a site is quite annoying regardless of the device they’re using. Therefore, giving the user an option to opt-out from the messages goes a long way in terms of user experience.

Here’s an example from Matt Gaunt on how to implement an unsubscribing function into your code and UI:

First, define the pushButton’s click listener:

pushButton.addEventListener('click', function() {
  pushButton.disabled = true;
  if (isSubscribed) {
    unsubscribeUser();
  } else {
    subscribeUser();
  }
});

Then, add a new function:

function unsubscribeUser() {
  swRegistration.pushManager.getSubscription()
  .then(function(subscription) {
    if (subscription) {
    // TODO: Tell application server to delete subscription
      return subscription.unsubscribe();
    }
  })
  .catch(function(error) {
    console.log('Error unsubscribing', error);
  })
  .then(function() {
    updateSubscriptionOnServer(null);

    console.log('User is unsubscribed.');
    isSubscribed = false;

    updateBtn();
  });
}

Here’s how it looks in the console once the subscribe button has been wired up to let the user disable notifications if they choose.

Conclusion

PWAs can be mighty powerful for increasing your site’s traffic and improving the user experience. To further optimize your PWA, you can use the modern solutions described in this article to enhance performance and functionality.

You also don’t need to implement everything at once. Feel free to cherry-pick the most relevant options since each of these suggestions helps you further your PWA.


Creating Scheduled Push Notifications

Scheduled is the key word there — that’s a fairly new thing! When a push notification is scheduled (e.g. “Take your pill” or “You’ve got a flight in 3 hours”) it can be shown to the user even if they’ve gone offline. That’s an improvement from the past, where push notifications required the user to be online.

So how do scheduled push notifications work? There are four key parts we’re going to look at:

  • Registering a Service Worker
  • Adding and removing scheduled push notifications
  • Enhancing push notifications with action buttons
  • Handling push notifications in the Service Worker

First, a little background

Push notifications are a great way to inform site users that something important has happened and that they might want to open our (web) app again. With the Notifications API — in combination with the Push API and the HTTP Web Push Protocol — the web became an easy way to send a push notification from a server to an application and display it on a device.

You may have already seen this sort of thing evolve. For example, how often do you see some sort of alert to accept notifications from a website? While browser vendors are already working on solutions to make that less annoying (both Firefox and Chrome have outlined plans), Chrome 80 just started an origin trial for the new Notification Trigger API, which lets us create notifications triggered by different events rather than a server push alone. For now, however, time-based triggers are the only supported events we have. But other events, like geolocation-based triggers, are already planned.

Scheduling an event in JavaScript is pretty easy, but there is one problem. In our push notification scenario, we can’t be sure the application is running at the exact moment we want to show the notification. This means that we can’t just schedule it on an application layer. Instead, we’ll need to do it on a Service Worker level. That’s where the new API comes into play.

The Notification Trigger API is in an early feedback phase. You need to enable the #enable-experimental-web-platform-features flag in Chrome or you should register your application for an origin trial.

Also, the Service Worker API requires a secure connection over HTTPS. So, if you try it out on your machine, you’ll need to ensure that it’s served over HTTPS.

Setting things up

I created a very basic setup. We have one application.js file, one index.html file, and one service-worker.js file, as well as a couple of image assets.

/project-folder
├── index.html
├── application.js
├── service-worker.js
└── assets
   ├─ badge.png
   └── icon.png

You can find the full example of a basic Notification Trigger API demo on GitHub.

Registering a Service Worker

First, we need to register a Service Worker. For now, it will do nothing but log that the registration was successful.

// service-worker.js
// listen to the install event
self.addEventListener('install', event => console.log('ServiceWorker installed'));

<!-- index.html -->
<script>
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js');
  }
</script>

Setting up the push notification

Inside our application, we need to ask for the user’s permission to show notifications. From there, we’ll get our Service Worker registration and register a new notification for this scope. So far, nothing new.

The cool part is the new showTrigger property. This lets us define the conditions for displaying a notification. For now, we want to add a new TimestampTrigger, which accepts a timestamp. And since everything happens directly on the device, it also works offline.

// application.js
document.querySelector('#notification-button').onclick = async () => {
  const reg = await navigator.serviceWorker.getRegistration();
  Notification.requestPermission().then(permission => {
    if (permission !== 'granted') {
      alert('you need to allow push notifications');
    } else {
      const timestamp = new Date().getTime() + 5 * 1000; // now plus 5000ms
      reg.showNotification(
        'Demo Push Notification',
        {
          tag: timestamp, // a unique ID
          body: 'Hello World', // content of the push notification
          showTrigger: new TimestampTrigger(timestamp), // set the time for the push notification
          data: {
            url: window.location.href, // pass the current url to the notification
          },
          badge: './assets/badge.png',
          icon: './assets/icon.png',
        }
      );
    }
  });
};

Handling the notification

Right now, the notification should show up at the specified timestamp. But now we need a way to interact with it, and that’s where we need the Service Worker notificationclick and notificationclose events.

Both events listen to the relevant interactions and both can use the full potential of the Service Worker. For example, we could open a new window:

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.openWindow('/'));
});

That’s a pretty straightforward example. But with the power of the Service Worker, we can do a lot more. Let’s check if the required window is already open and only open a new one if it isn’t.

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.matchAll().then(clients => {
    if (clients.length){ // check if at least one tab is already open
      clients[0].focus();
    } else {
      self.clients.openWindow('/');
    }
  }));
});
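The notificationclose event can be handled the same way. A small sketch (not part of the original demo) might simply log the dismissal:

// service-worker.js
self.addEventListener('notificationclose', event => {
  // The user dismissed the notification without clicking it
  console.log('Notification closed:', event.notification.tag);
});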

Notification actions

Another great way to facilitate interaction with users is to add predefined actions to the notifications. We could, for example, let them choose if they want to dismiss the notification or open the app.

// application.js
reg.showNotification(
  'Demo Push Notification',
  {
    tag: timestamp, // a unique ID
    body: 'Hello World', // content of the push notification
    showTrigger: new TimestampTrigger(timestamp), // set the time for the push notification
    data: {
      url: window.location.href, // pass the current url to the notification
    },
    badge: './assets/badge.png',
    icon: './assets/icon.png',
    actions: [
      {
        action: 'open',
        title: 'Open app'
      },
      {
        action: 'close',
        title: 'Close notification',
      }
    ]
  }
);

Now we handle those actions inside the Service Worker.

// service-worker.js
self.addEventListener('notificationclick', event => {
  if (event.action === 'close') {
    event.notification.close();
  } else {
    self.clients.openWindow('/');
  }
});

Cancelling push notifications

It’s also possible to cancel pending notifications. In this case, we need to get all pending notifications from the Service Worker and then close them before they are sent to the device.

// application.js
document.querySelector('#notification-cancel').onclick = async () => {
  const reg = await navigator.serviceWorker.getRegistration();
  const notifications = await reg.getNotifications({
    includeTriggered: true
  });
  notifications.forEach(notification => notification.close());
  alert(`${notifications.length} notification(s) cancelled`);
};

Communication

The last step is to set up the communication between the app and the Service Worker using the postMessage method on the Service Worker clients. Let’s say we want to notify the tab that’s already active that a push notification click happened.

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.matchAll().then(clients => {
    if(clients.length){ // check if at least one tab is already open
      clients[0].focus();
      clients[0].postMessage('Push notification clicked!');
    } else {
      self.clients.openWindow('/');
    }
  }));
});

// application.js
navigator.serviceWorker.addEventListener('message', event => console.log(event.data));

Summary

The Notification API is a very powerful feature for enhancing the mobile experience of web applications. Thanks to the arrival of the Notification Trigger API, it just got a very important improvement. The API is still under development, so now is the perfect time to play around with it and give feedback to the developers.

If you are working with Vue or React, I’d recommend you take a look at my own Progressive Web App demo. It includes a documented example using the Notification Trigger API for both frameworks that looks like this:


Client-Side Image Editing on Mobile

Michael Scharnagl:

Ever wanted to easily convert an image to a grayscale image on your phone? I do sometimes, and that's why I built a demo using the Web Share Target API to achieve exactly that.

For this I used the Service Worker way to handle the data. Once the data is received on the client, I use drawImage from canvas to draw the image in canvas, use the grayscale filter to convert it to a grayscale image and output the final image.
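The canvas step he describes boils down to something like this (a rough sketch with assumed element handling, not Michael's actual code):

// Draw the shared image onto a canvas and apply a grayscale filter
function toGrayscale(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;

  const ctx = canvas.getContext('2d');
  ctx.filter = 'grayscale(100%)';
  ctx.drawImage(img, 0, 0);

  // Hand back the edited image as a Blob, ready to display or download
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg'));
}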

So you "install" the little microsite like a PWA, then you natively "share" an image to it and it comes back edited. Clever. Android on Chrome only at the moment.

Reminds me of this "Browser Functions" idea in reverse. That was a server that did things a browser can do, this is a browser doing things a server normally does.



Collective #579


EscherSketch

A really nice tool for drawing and exploring symmetrical patterns and designs. It can export pictures, pattern tiles for fabric and wallpaper design, and SVG for further editing.

Check it out


Publish

A static site generator built specifically for Swift developers. It enables entire websites to be built using Swift, supporting themes, plugins and more customization options.

Check it out



Whack-a-mole!

Andrew Burton coded this offline-first PWA version of Whack-a-mole for his children. The source code can be found here.

Check it out


22120

An archivist browser controller that caches everything you browse, a library server with full text search to serve your archive.

Check it out


Collective #579 was written by Pedro Botelho and published on Codrops.

Going Buildless

I'm in a long distance relationship. That means I’m on a plane to England every few weeks, and every time I'm on that plane, I think about how nice it would be to read some Reddit posts. What I could do is find a Reddit app that lets me cache posts for offline (I’m sure there is one out there), or I could take the opportunity to write something myself and have fun using the latest and greatest technologies and web standards out there!

On top of that, there has been a lot of discussion around what I like to call going buildless, which I think is a really fascinating development in which production projects are created without using a build process (like a bundler).

This post is also a homage to a couple of awesome people in the web community who are making some great things possible. I'll be linking to all that stuff as we move along. Do note that this won't be a step-by-step tutorial, but if you want to check out the code, you can find the finished project on GitHub.

Our end result should look something like this:

Let's dive in and install a few dependencies

npm i @babel/core babel-loader @babel/preset-env @babel/preset-react webpack webpack-cli react react-dom redux react-redux html-webpack-plugin are-you-tired-yet html-loader webpack-dev-server

I'm kidding.

We're not gonna use any of that.

We're going to try and avoid as much tooling and dependencies as we can to keep the entry barrier low. What we will be using is:

  • LitElement - LitElement is our component model. It's easy to use, lightweight, close to the metal, and leverages web components.
  • @vaadin/router - This is a really small (< 7kb) router that has an awesome developer experience that I cannot recommend enough.
  • @pika/web - This will help us get our modules together for easy development.
  • es-dev-server - This is a simple dev server for modern web development workflows, made by us at open-wc. Although any HTTP server will do, feel free to bring your own.

That's it! We'll also be using a few browser standards, namely: es modules, web components, import-maps, kv-storage and service-worker.

Let's go ahead and install our dependencies:

npm i -S lit-element @vaadin/router
npm i -D @pika/web es-dev-server

We'll also add a postinstall hook to our package.json that's going to run Pika for us:

"scripts": {
  "start": "es-dev-server",
  "postinstall": "pika-web"
}

🐭 Pika

Pika is a project by Fred K. Schott that aims to bring that nostalgic 2014 simplicity to 2019 web development. Fred is up to all sorts of awesome stuff. For one, he made pika.dev, which lets you easily search for modern JavaScript packages on npm. He also recently gave his talk Reimagining the Registry at DinosaurJS 2019, which I highly recommend you watch.

Pika takes things even one step further. If we run pika-web, it'll install our dependencies as single JavaScript files to a new web_modules/ directory. If your dependency exports an ES "module" entrypoint in its package.json manifest, Pika supports it. If you have any transitive dependencies, Pika will create separate chunks for any shared code among your dependencies.

What this means is that, in our case, our output will look something like:

└─ web_modules/
    ├─ lit-element.js
    └─ @vaadin
        └─ router.js

Sweet! That's it. We have our dependencies ready to go as single JavaScript module files, and this is going to make things really convenient for us later on in this post, so stay tuned!

📥 Import maps

Alright! Now that we've got our dependencies sorted out, let's get to work. We'll make an index.html that'll look something like this:

<html>
  <!-- head, etc. -->
  <body>
    <reddit-pwa-app></reddit-pwa-app>
    <script src="./src/reddit-pwa-app.js" type="module"></script>
  </body>
</html>

And reddit-pwa-app.js:

import { LitElement, html } from 'lit-element';

class RedditPwaApp extends LitElement {

  // ...

  render() {
    return html`
      <h1>Hello world!</h1>
    `;
  }
}

customElements.define('reddit-pwa-app', RedditPwaApp);

We're off to a great start. Let's try and see how this looks in the browser so far, so let's start our server, open the browser and... What's this? An error?

Oh boy.

And we've barely even started. Alright, let's take a look. The problem here is that our module specifiers are bare. They are bare module specifiers. What this means is that there are no paths specified, no file extensions, they're just... pretty bare. Our browser has no idea what to do with them, so it'll throw an error.

import { LitElement, html } from 'lit-element'; // <-- bare module specifier
import { Router } from '@vaadin/router'; // <-- bare module specifier

import { foo } from './bar.js'; // <-- not bare!
import { html } from 'https://unpkg.com/lit-html'; // <-- not bare!

Naturally, we could use some tools for this, like webpack, or rollup, or a dev server that rewrites the bare module specifiers to something meaningful to browsers, so we can load our imports. But that means we have to bring in a bunch of tooling, dive into configuration, and we're trying to stay minimal here. We just want to write code! In order to solve this, we're going to take a look at import maps.

Import maps are a new proposal that lets you control the behavior of JavaScript imports. Using an import map, we can control what URLs get fetched by JavaScript import statements and import() expressions, and the mapping can be reused in non-import contexts. This is great for several reasons:

  • It allows our bare module specifiers to work.
  • It provides a fallback resolution so that import $ from "jquery"; can try to go to a CDN first, but fall back to a local version if the CDN server is down (there's a quick sketch of this right after the list).
  • It enables polyfilling of (or other control over) built-in modules. (More on that later, hang on tight!)
  • Solves the nested dependency problem. (Go read that blog!)
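
To make that fallback idea a bit more concrete, an import map entry along these lines could try a CDN first and fall back to a local copy (an illustrative sketch using the proposal's fallback-array syntax; the URLs are just examples):

<script type="importmap">
  {
    "imports": {
      "jquery": [
        "https://unpkg.com/jquery",
        "/web_modules/jquery.js"
      ]
    }
  }
</script>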

Sounds pretty sweet, no? Import maps are currently available in Chrome 75+ behind a flag, and with that knowledge in mind, let's go to our index.html, and add an import map to our <head>:

<head>
  <script type="importmap">
    {
      "imports": {
        "@vaadin/router": "/web_modules/@vaadin/router.js",
        "lit-element": "/web_modules/lit-element.js"
      }
    }
  </script>
</head>

If we go back to our browser, and refresh our page, we'll have no more errors, and we should see our <h1>Hello world!</h1> on our screen.

Import maps are an incredibly interesting new standard, and definitely something you should keep your eyes on. If you're interested in experimenting with them and generating your own import map based on a yarn.lock file, you can try our open-wc import-maps-generate package and play around. I'm really excited to see what people will develop in combination with import maps.

📡 Service Worker

Alright, we're going to skip ahead in time a little bit. We've got our dependencies working, we have our router set up, and we've done some API calls to get the data from Reddit and display it on our screen. Going over all of the code is a bit out of scope for this post, but remember that you can find all the code in the GitHub repo if you want to read the implementation details.

Since we're making this app so we can read Reddit threads on the airplane, it would be great if our application worked offline and if we could somehow save some posts to read later.

Service workers are a kind of JavaScript Worker that runs in the background. You can visualize it as sitting in between the web page, and the network. Whenever your web page makes a request, it goes through the service worker first. This means that we can intercept the request, and do stuff with it! For example, we can let the request go through to the network to get a response, and cache it when it returns so we can use that cached data later when we might be offline. We can also use a service worker to precache our assets. What this means is that we can precache any critical assets our application may need in order to work offline. If we have no network connection, we can simply fall back to the assets we cached, and still have a working (albeit offline) application.

If you're interested in learning more about Progressive Web Apps and service worker, I highly recommend you read The Offline Cookbook, by Jake Archibald, as well as this video tutorial series by Jad Joubran.

Let's go ahead and implement a service worker. In our index.html, we'll add the following snippet:

<script>
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker.register('./sw.js').then(() => {
        console.log('ServiceWorker registered!');
      }, (err) => {
        console.log('ServiceWorker registration failed: ', err);
      });
    });
  }
</script>

We'll also add a sw.js file to the root of our project. So we're about to precache the assets of our app, and this is where Pika just made life really easy for us. If you'll take a look at the install handler in the service worker file:

// CACHENAME is just a versioned cache name string; the exact value is up to you
const CACHENAME = 'reddit-pwa-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHENAME).then((cache) => {
      return cache.addAll([
        '/',
        './web_modules/lit-element.js',
        './web_modules/@vaadin/router.js',
        './src/reddit-pwa-app.js',
        './src/reddit-pwa-comment.js',
        './src/reddit-pwa-search.js',
        './src/reddit-pwa-subreddit.js',
        './src/reddit-pwa-thread.js',
        './src/utils.js',
      ]);
    })
  );
});

You'll find that we're totally in control of our assets, and we have a nice, clean list of files we need in order to work offline.
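
The fetch handler that actually serves those cached files isn't shown here, but the core idea, try the cache first and fall back to the network, only takes a few lines. A rough sketch (the real sw.js in the repo may differ):

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve anything we precached straight from the cache, otherwise hit the network
      return cached || fetch(event.request);
    })
  );
});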

📴 Going offline

Right. Now that we've cached our assets to work offline, it would be excellent if we could actually save some posts to read while offline. There are many roads that lead to Rome, but since we're living on the edge a little bit, we're going to go with kv-storage!

📦 Built-in Modules

There are a few things to talk about here. Kv-storage is a built-in module. Built-in modules are very similar to regular JavaScript modules, except they ship with the browser. It's good to note that while built-in modules ship with the browser, they are not exposed on the global scope, and are namespaced with std: (yes, really). This has a few advantages: they don't add any overhead to starting up a new JavaScript runtime context (e.g. a new tab, worker, or service worker), they don't consume any memory or CPU unless they're actually imported, and they avoid naming collisions with existing code.

Other interesting, if somewhat controversial, built-in module proposals are the std-toast and std-switch elements.

🗃 Kv-storage

Alright, with that out of the way, let's talk about kv-storage. Kv-storage (or "key value storage") is layered on top of IndexedDB and fairly similar to localStorage, except for a few major differences.

The motivation for kv-storage is that localStorage is synchronous, which can lead to bad performance and syncing issues. It's also limited exclusively to String key/value pairs. The alternative, IndexedDB, is... hard to use. The reason it's so hard to use is that it predates promises, and this leads to a, well, pretty bad developer experience. Not fun. Kv-storage, however, is a lot of fun, asynchronous, and easy to use! Consider the following example:

import { storage, /* StorageArea */ } from "std:kv-storage";

(async () => {
  await storage.set("mycat", "Tom");
  console.log(await storage.get("mycat")); // Tom
})();

Notice how we're importing from std:kv-storage? This import specifier is bare as well, but in this case it's okay because it actually ships with the browser.

Pretty neat. We can use this to add a 'save for offline' button: simply store the JSON data for a Reddit thread, and get it back out when we need it.

// reddit-pwa-thread.js:52:
const savedPosts = new StorageArea("saved-posts");

// ...

async saveForOffline() {
  await savedPosts.set(this.location.params.id, this.thread); // id of the post + thread as json
  this.isPostSaved = true;
}

So now if we click the "save for offline" button and go to the DevTools "Application" tab, we can see a kv-storage:saved-posts entry that holds the JSON data for this post:

And if we go back to our search page, we'll have a list of saved posts with the post we just saved:
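
Getting that list is just a matter of reading the same StorageArea back out. A rough sketch of what that could look like (assuming the spec's entries() async iterator; the actual component code in the repo may be structured differently):

import { StorageArea } from 'std:kv-storage';

const savedPosts = new StorageArea('saved-posts');

async function getSavedPosts() {
  const posts = [];
  // entries() yields [key, value] pairs as an async iterator
  for await (const [id, thread] of savedPosts.entries()) {
    posts.push({ id, thread });
  }
  return posts;
}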

🔮 Polyfilling

Excellent. However, we're about to run into another problem. Living on the edge is fun, but also dangerous. The problem we're hitting is that, at the time of writing, kv-storage is only implemented in Chrome behind a flag. That's not great. Fortunately, there's a polyfill available, and at the same time we get to show off yet another really useful feature of import maps: polyfilling!

First things first, let's install the kv-storage-polyfill:

npm i -S kv-storage-polyfill

Note that our postinstall hook will run Pika for us again.

Let’s also add the following to our import map in our index.html:

<script type="importmap">
  {
    "imports": {
      "@vaadin/router": "/web_modules/@vaadin/router.js",
      "lit-element": "/web_modules/lit-element.js",
      "/web_modules/kv-storage-polyfill.js": [
        "std:kv-storage",
        "/web_modules/kv-storage-polyfill.js"
      ]
    }
  }
</script>

What happens here is that whenever /web_modules/kv-storage-polyfill.js is requested or imported, the browser will first try to see if std:kv-storage is available; however, if that fails, it'll load /web_modules/kv-storage-polyfill.js instead.

So in code, if we import:

import { StorageArea } from '/web_modules/kv-storage-polyfill.js';

This is what will happen:

"/web_modules/kv-storage-polyfill.js": [                 // when I'm requested
    "std:kv-storage",                      // try me first!
  "/web_modules/kv-storage-polyfill.js"    // or fallback to me
]

🎉 Conclusion

And we should now have a simple, functioning PWA with minimal dependencies. There are a few nitpicks to this project that we could complain about, and they'd all likely be fair. For example, we probably could've gone without using Pika, but it does make life really easy for us. You could have made the same argument about adding a webpack configuration, but you'd have missed the point. The point here is to make a fun application, while using some of the latest features, drop some buzzwords, and have a low barrier for entry. As Fred Schott would say: "In 2019, you should use a bundler because you want to, not because you need to."

If you're interested in nitpicking, however, you can read this great discussion about using webpack vs. Pika vs. buildless, and you'll get some great insights from Sean Larkin of the webpack core team himself, as well as Fred K. Schott, creator of Pika.

I hope you enjoyed this blog post, and I hope you learned something, or discovered some new interesting people to follow. There are lots of exciting developments happening in this space right now, and I hope I got you as excited about them as I am. If you have any questions, comments, feedback, or nitpicks, feel free to reach out to me on Twitter at @passle_ or @openwc, and don't forget to check out open-wc.org 😉.

Honorable Mentions

I'd like to give a few shout-outs to some very interesting people who are doing great stuff and whom you may want to keep an eye on.


Inline SVG… Cached

I wrote that using inline <svg> icons makes for the best icon system. I still think that's true. It's the easiest possible way to drop an icon onto a page. No network request, perfectly styleable.

But inlining code has some drawbacks, one of which is that it doesn't take advantage of caching. You're making the browser read and process the same code over and over as you browse around. Not that big of a deal. There are much bigger performance fish to fry, right? But it's still fun to think about more efficient patterns.

Scott Jehl wrote that just because you inline something doesn't mean you can't cache it. Let's see if Scott's idea can extend to SVG icons.

Starting with inline SVG

Like this...

<!DOCTYPE html>
<html lang="en">

<head>
  <title>Inline SVG</title>
  <link rel="stylesheet" href="/styles/style.css">
</head>

<body>

  ...
 
  <svg width="24" height="24" viewBox="0 0 24 24" class="icon icon-alarm" xmlns="http://www.w3.org/2000/svg">
    <path id="icon-alarm" d="M11.5,22C11.64,22 11.77,22 11.9,21.96C12.55,21.82 13.09,21.38 13.34,20.78C13.44,20.54 13.5,20.27 13.5,20H9.5A2,2 0 0,0 11.5,22M18,10.5C18,7.43 15.86,4.86 13,4.18V3.5A1.5,1.5 0 0,0 11.5,2A1.5,1.5 0 0,0 10,3.5V4.18C7.13,4.86 5,7.43 5,10.5V16L3,18V19H20V18L18,16M19.97,10H21.97C21.82,6.79 20.24,3.97 17.85,2.15L16.42,3.58C18.46,5 19.82,7.35 19.97,10M6.58,3.58L5.15,2.15C2.76,3.97 1.18,6.79 1,10H3C3.18,7.35 4.54,5 6.58,3.58Z"></path>
  </svg>

It's weirdly easy to toss text into browser cache as a file

In the above HTML, the selector .icon-alarm will fetch us the entire chunk of <svg> for that icon.

const iconHTML = document.querySelector(".icon-alarm").outerHTML;

Then we can plunk it into the browser's cache like this:

if ("caches" in window) {
  caches.open('static').then(function(cache) {
    cache.put("/icons/icon-wheelchair.svg", new Response(
      iconHTML,
      { headers: {'Content-Type': 'image/svg+xml'} }
    ));
  }
}

See the file path /icons/icon-alarm.svg? That's kinda just made up. But it really will be put in the cache at that location.
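
If you want to double-check that it really made it in there, you can ask the cache for it right from the DevTools console. Something like this (just a quick sanity-check sketch) should log the SVG markup back out:

// caches.match() searches every cache; it resolves with undefined if nothing matches
caches.match("/icons/icon-alarm.svg")
  .then(response => response.text())
  .then(svgText => console.log(svgText));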

Let's make sure the browser grabs that file out of the cache when it's requested

We'll register a Service Worker on our pages:

if (navigator.serviceWorker) {   
  navigator.serviceWorker.register('/sw.js', {
    scope: '/'
  });
}

The service worker itself will be quite small, just a cache matcher:

self.addEventListener("fetch", event => {
  let request = event.request;

  event.respondWith(
    caches.match(request).then(response => {
      return response || fetch(request);
    })
  );
});

But... we never request that file, because our icons are inline.

True. But what if other pages benefitted from that cache? For example, an SVG icon could be placed on the page like this:

<svg class="icon">
  <use xlink:href="/icons/icon-alarm.svg#icon-alarm" /> 
</svg>

Since /icons/icon-alarm.svg is sitting there ready in cache, the browser will happily pluck it out of cache and display it.

(I was kind of amazed this works. Edge doesn't like <use> elements that link to files, but that'll be over soon enough.)

And even if the file isn't in the cache, the request will still succeed, assuming we actually chuck this file onto the file system as well, likely as the result of some kind of "include" (I used Nunjucks on the demo).

But... <use> and inline SVG aren't quite the same

True. What I like about the above is that it's making use of the cache and the icons should render close to immediately. And there are some things you can style this way — for example, setting the fill on the parent icon should go through the shadow DOM that the <use> creates and colorize the SVG elements within.

Still, it's not the same. The shadow DOM is a big barrier compared to inline SVG.

So, enhance them! We could asynchronously load a script that finds each SVG icon, Ajaxes in the SVG it needs, and replaces the <use> stuff...

const icons = document.querySelectorAll("svg.icon");

icons.forEach(icon => {
  const url = icon.querySelector("use").getAttribute("xlink:href"); // Might wanna look for href also
  fetch(url)
    .then(response => response.text())
    .then(data => {
      // This is probably a bit layout-thrashy. Someone smarter than me could probably fix that up.

      // Replace the <svg><use></svg> with inline SVG
      const newEl = document.createElement("span");
      newEl.innerHTML = data;
      icon.parentNode.replaceChild(newEl, icon);

      // Remove the <span>s
      const parent = newEl.parentNode;
      while (newEl.firstChild) parent.insertBefore(newEl.firstChild, newEl);
      parent.removeChild(newEl);
    });
});

Now, assuming this JavaScript executes correctly, this page has inline SVG available just like the original page did.

Demo & Repo

Implementing Push Notifications: The Back End

In the first part of this series we set up the front end with a Service Worker, a `manifest.json` file, and initialized Firebase. Now we need to create our database and watcher functions.

Article Series:

  1. Setting Up & Firebase
  2. The Back End (You are here)

Creating a Database

Log into Firebase and click on Database in the navigation. Under Data you can manually add database references and see changes happen in real-time.

Make sure to adjust the rule set under Rules so you don't have to fiddle with authentication during testing.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

Watching Database Changes with Cloud Functions

Remember the purpose of all this is to send a push notification whenever you publish a new blog post. So we need a way to watch for database changes in the data branches where the posts are being saved.

With Firebase Cloud Functions we can automatically run backend code in response to events triggered by Firebase features.

Set up and initialize Firebase SDK for Cloud Functions

To start creating these functions we need to install the Firebase CLI. It requires Node v6.11.1 or later.

npm i firebase-tools -g

To initialize a project:

  1. Run firebase login
  2. Authenticate yourself
  3. Go to your project directory
  4. Run firebase init functions

A new folder called `functions` has been created. In there we have an `index.js` file in which we define our new functions.

Import the required Modules

We need to import the Cloud Functions and Admin SDK modules in `index.js` and initialize them.

const admin     = require('firebase-admin'),
      functions = require('firebase-functions')

admin.initializeApp(functions.config().firebase)

The Firebase CLI will automatically install these dependencies. If you wish to add your own, modify the `package.json`, run npm install, and require them as you normally would.

Set up the Watcher

We target the database and create a reference we want to watch. In our case, we save to a posts branch which holds post IDs. Whenever a new post ID is added or deleted, we can react to that.

exports.sendPostNotification = functions.database.ref('/posts/{postID}').onWrite(event => {
  // react to changes
})

The name of the export, sendPostNotification, is for distinguishing all your functions in the Firebase backend.

All other code examples will happen inside the onWrite function.

Check for Post Deletion

If a post is deleted, we probably shouldn't send a push notification. So we log a message and exit the function. The logs can be found in the Firebase Console under Functions → Logs.

First, we get the post ID and check if a title is present. If it is not, the post has been deleted.

const postID    = event.params.postID,
      postTitle = event.data.val()

if (!postTitle) return console.log(`Post ${postID} deleted.`)

Get Devices to show Notifications to

In the last article, we used the updateSubscriptionOnServer function to save device tokens to the database in a branch called device_ids. Now we need to retrieve these tokens to be able to send messages to those devices. We receive so-called snapshots, which are basically data references containing the tokens.

If no snapshots, and therefore no device tokens, can be retrieved, we log a message and exit the function, since we don't have anybody to send a push notification to.

const getDeviceTokensPromise = admin.database()
  .ref('device_ids')
  .once('value')
  .then(snapshots => {

    if (!snapshots) return console.log('No devices to send to.')

    // work with snapshots
  })

Create the Notification Message

If snapshots are available, we need to loop over them and run a function for each of them which finally sends the notification. But first, we need to populate the notification payload with a title, body, and icon.

const payload = {
  notification: {
    title: `New Article: ${postTitle}`,
    body: 'Click to read article.',
    icon: 'https://mydomain.com/push-icon.png'
  }
}

snapshots.forEach(childSnapshot => {
  const token = childSnapshot.val()

  admin.messaging().sendToDevice(token, payload).then(response => {
    // handle response
  })
})

Handle Send Response

If a send fails or a token has become invalid, we can remove the token and log a message.

response.results.forEach(result => {
  const error = result.error

  if (error) {
    console.error('Failed delivery to', token, error)

    if (error.code === 'messaging/invalid-registration-token' ||
        error.code === 'messaging/registration-token-not-registered') {

      childSnapshot.ref.remove()
      console.info('Was removed:', token)
    }
  } else {
    console.info('Notification sent to', token)
  }
})
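
Pieced together, the whole thing ends up looking roughly like this (a sketch assembled from the snippets above, so naming and minor details may differ from the final code):

const admin     = require('firebase-admin'),
      functions = require('firebase-functions')

admin.initializeApp(functions.config().firebase)

exports.sendPostNotification = functions.database.ref('/posts/{postID}').onWrite(event => {
  const postID    = event.params.postID,
        postTitle = event.data.val()

  // A deleted post has no title, so bail out without notifying anyone
  if (!postTitle) return console.log(`Post ${postID} deleted.`)

  return admin.database()
    .ref('device_ids')
    .once('value')
    .then(snapshots => {
      if (!snapshots) return console.log('No devices to send to.')

      const payload = {
        notification: {
          title: `New Article: ${postTitle}`,
          body: 'Click to read article.',
          icon: 'https://mydomain.com/push-icon.png'
        }
      }

      snapshots.forEach(childSnapshot => {
        const token = childSnapshot.val()

        admin.messaging().sendToDevice(token, payload).then(response => {
          response.results.forEach(result => {
            const error = result.error

            if (error) {
              console.error('Failed delivery to', token, error)

              if (error.code === 'messaging/invalid-registration-token' ||
                  error.code === 'messaging/registration-token-not-registered') {
                childSnapshot.ref.remove()
                console.info('Was removed:', token)
              }
            } else {
              console.info('Notification sent to', token)
            }
          })
        })
      })
    })
})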

Deploy Firebase Functions

To upload our `index.js` to the cloud, we run the following command.

firebase deploy --only functions

Conclusion

Now when you add a new post, the subscribed users will receive a push notification to lead them back to your blog.

GitHub Repo Demo Site

Article Series:

  1. Setting Up & Firebase
  2. The Back End (You are here)
