Vision Transformers: Natural Language Processing (NLP) Increases Efficiency and Model Generality

Transformers Are for Natural Language Processing (NLP), Right?

There has been no shortage of developments vying for a share of your attention over the last year or so. However, if you regularly follow the state of machine learning research, you may recall one particularly loud contender for your attention: OpenAI's GPT-3, along with the accompanying shift in the group's business strategy. GPT-3 is the latest and by far the largest model in OpenAI's lineage of general-purpose transformers for natural language processing.

Of course, GPT-3 and the GPTs may grab headlines, but they belong to a much larger superfamily of transformer models, including a plethora of variants based on the Bidirectional Encoder Representations from Transformers (BERT) family originally created by Google, as well as other smaller families of models from Facebook and Microsoft. For an expansive but still not exhaustive overview of major NLP transformers, the leading resource is probably the Apache 2.0-licensed Hugging Face Transformers library.

Oops, We’re Multi-Cloud: A Hitchhiker’s Guide to Surviving

Over the last few years, enterprises have adopted multi-cloud strategies in an effort to increase flexibility and choice and reduce vendor lock-in. According to Flexera's 2020 State of the Cloud Report, most companies embrace multi-cloud, with 93% of enterprises having a multi-cloud strategy. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. Yet multi-cloud makes so many things more complicated that you need a damn good reason to justify it. At Humanitec, we see hundreds of ops and platform teams a year, and I am often surprised by how many valid reasons there are to go multi-cloud. I also observe that the teams that succeed are the ones that take the remodeling of their workflows and tooling setups seriously.

What Is Multi-Cloud Computing?

Put simply, multi-cloud means that an application, or several parts of it, runs on different cloud providers. These may be public or private, but the mix typically includes at least one public provider. It may mean that data storage or specific services run on one cloud provider and others on another, or that your entire setup runs on different cloud providers in parallel. This is distinct from hybrid cloud, where one component runs on-premises and other parts of your application run in the cloud.

Change Data Capture With Debezium: A Simple How-To, Part 1

One question always comes up as organizations move toward being cloud-native, twelve-factor, and stateless: How do you get an organization's data to these new applications? There are many different patterns out there, but the one we will look at today is change data capture. This post is a simple how-to for building out a change data capture solution using Debezium within an OpenShift environment. Future posts will build on this and add further capabilities.

What Is Change Data Capture?

Another Red Hatter, Sadhana Nandakumar, sums it up well in one of her posts around change data capture:

Re-Creating the Porky Pig Animation from Looney Tunes in CSS

You know, Porky Pig coming out of those red rings announcing the end of a Looney Tunes cartoon. We’ll get there, but first we need to cover some CSS concepts.

Everything in CSS is a box, or rectangle. Rectangles stack, and can be displayed on top of, or below, other rectangles. Rectangles can contain other rectangles and you can style them such that the inner rectangle is visible outside the outer rectangle (so they overflow) or that they’re clipped by the outer rectangle (using overflow: hidden). So far, so good.
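To make that concrete, here's a minimal sketch of the two behaviors; the class names are just for illustration:

.outer {
  width: 200px;
  height: 200px;
  overflow: hidden; /* switch to visible and .inner spills out on both sides */
}

.outer .inner {
  height: 300px;     /* taller than the parent... */
  margin-top: -50px; /* ...and pulled up, so it overflows top and bottom */
}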

What if you want a rectangle to be visible outside its surrounding rectangle, but only on one side? That’s not possible, right?

The first rectangle contains an inner element that overflows both the top and bottom edges with the text "Possible" below it. The second rectangle clips the inner element on both sides, with "Also possible" below it. The third rectangle clips the inner element on the bottom, but shows it overflowing at the top, with the text "...Not possible" below it.

Perhaps, when you look at the image above, the wheels start turning: What if I copy the inner rectangle, clip half of it, and then position it exactly? But when it comes down to it, you can’t choose to have an element overflow at the top but clip at the bottom.

Or can you?

3D transforms

Using 3D transforms you can rotate, transform, and translate elements in 3D space. Here’s a group of practical examples I gathered showcasing some possibilities.

For 3D transforms to do their thing, you need two CSS properties:

  • perspective, using a value in pixels, to determine how pronounced the 3D effect is
  • transform-style: preserve-3d, to tell the browser to keep elements positioned in 3D space
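Here’s a minimal sketch of both properties together; the .scene class name is just for illustration:

.scene {
  /* how pronounced the 3D effect is */
  perspective: 1000px;
  /* keep child elements positioned in 3D space rather than flattened */
  transform-style: preserve-3d;
}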

Even with the good browser support that 3D transforms have, you sadly don’t see them ‘in the wild’ all that much. Websites are still a “2D” thing, a flat page that scrolls. But as I started playing around with 3D transforms and scouting examples, I found one that was by far the most interesting as far as 3D transforms go:

Three planes floating above each other in 3D space

The image clearly shows three planes but this effect is achieved using a single <div>. The two other planes are the ::before and ::after pseudo-elements that are moved up and down, respectively, using translate(), to stack on top of each other in 3D space. What is noticeable here is how the ::after element, which normally would be positioned on top of an element, is behind that element. The creator was able to achieve this by adding transform: translateZ(-1px);.

Even though this was one of many 3D transforms I had seen at this point, it was the first one that made me realize that I was actually positioning elements in 3D space. And if I can do that, I can also make elements intersect:

Two planes intersecting each other in 3D space

I couldn’t think of how this sort of thing would be useful, but then I saw the Porky Pig cartoon animation. He emerges from behind the bottom frame, but his face overlaps and stacks on top of the top edge of the same frame — the exact same sort of clipping situation we saw earlier. That’s when my wheels started turning. Could I replicate that effect using just CSS? And for extra credit, could I replicate it using a single <div>?

I started playing around and relatively quickly had this to show for it:

An orange rectangle that intersects through a blue frame. At the top of the image it's above the frame and at the bottom of the image it's below the blue frame.

Here we have a single <div> with its ::before and an ::after pseudo-elements. The div itself is transparent, the ::before has a blue border and the ::after has been rotated along the x-axis. Because the div has perspective, everything is positioned in 3D and, because of that, the ::after pseudo-element is above the border at the top edge of the frame and behind the border at the bottom edge of the frame.

Here’s that in code:

div {
  /* perspective() as a transform function creates the 3D space */
  transform: perspective(3000px);
  /* keep the pseudo-elements positioned in 3D, not flattened */
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
}

div::before {
  content: "";
  width: 100%;
  height: 100%;
  /* the blue frame */
  border: 10px solid darkblue;
}

div::after {
  content: "";
  position: absolute;
  /* the orange plane that will intersect the frame */
  background: orangered;
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -25%;
  /* tilt it so the top leans toward the viewer and the bottom away */
  transform: rotateX(-10deg);
}

With perspective, we can determine how far a viewer is from “z=0”, which we can consider the “horizon” of our CSS 3D space. The larger the perspective, the less pronounced the 3D effect, and vice versa. For most 3D scenes, a perspective value between 500 and 1,000 pixels works best, though you can play around with it to get the exact effect you want. You can compare this with perspective drawing: if you draw two vanishing points close together, you get a very strong perspective; but if they’re far apart, then things appear flatter.
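For example, here’s a quick way to compare the two extremes; the class names are just for illustration:

/* small value: viewer is close, so depth looks dramatic */
.scene-strong { perspective: 500px; }

/* large value: viewer is far away, so the scene looks flatter */
.scene-subtle { perspective: 3000px; }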

From rectangles to cartoons

Rectangles are fun, but what I really wanted to build was something like this:

A film cell of Porky Pig coming out of a circle with the text "That's all folks."

I couldn’t find or create a nicely cut-out version of Porky Pig from that image, but the Wikipedia page contains a nice alternative, so we’ll use that.

First, we need to split the image up into three parts:

  • <div>: the blue background behind Porky
  • ::after: all the red circles that form a sort of tunnel
  • ::before: Porky Pig himself in all his glory, set as a background image

We’ll start with the <div>. That will be the background as well as the base for the rest of the elements. It’ll also contain the perspective and transform-style properties I called out earlier, along with some sizes and the background color:

div {
  transform: perspective(3000px);
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
  background: #4992AD;
}

Alright, next up, we’ll move to the red circles. The element itself has to be transparent because that’s the opening where Porky emerges. So how shall we go about it? We could use a border just like the example earlier in this article, but an element only has one border, and it can only be a solid color. We need a bunch of circles, with room for gradients between them. We can use box-shadow instead, chaining multiple shadows in the property value. This gets us all of the circles we need, and by using a blur radius value of 0 with a large spread radius, we can create the appearance of multiple “borders.”

box-shadow: <x-offset> <y-offset> <blur-radius> <spread-radius> <color>;

We’ll use a border-radius that’s as large as the <div> itself, making the ::after a circle. Then we’ll add the shadows. When we add a few red circles with a large spread, plus blurry white shadows between them, we get an effect that looks very similar to Porky’s tunnel.

box-shadow: 0 0 20px   0px #fff, 0 0 0  30px #CF331F,
            0 0 20px  30px #fff, 0 0 0  60px #CF331F,
            0 0 20px  60px #fff, 0 0 0  90px #CF331F,
            0 0 20px  90px #fff, 0 0 0 120px #CF331F,
            0 0 20px 120px #fff, 0 0 0 150px #CF331F;

Here, we’re adding five circles, where each is 30px wide. Each circle has a solid red background. And, by using white shadows with a blur radius of 20px on top of that, we create the gradient effect.

The background and circles in pure CSS without Porky

With the background and the circles sorted, we’re now going to add Porky. Let’s start with adding him at the spot we want him to end up, for now above the circles.

div::before {
  position: absolute;
  content: "";
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -12%;
  background: url("Porky_Pig.svg") no-repeat center/contain;
}

You might have noticed that slash in “center/contain” for the background. That’s the syntax to set both the position (center) and size (contain) in the background shorthand CSS property. The slash syntax is also used in the font shorthand CSS property where it’s used to set the font-size and line-height like so: <font-size>/<line-height>.
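In other words, the background shorthand above is equivalent to this longhand version:

background-image: url("Porky_Pig.svg");
background-repeat: no-repeat;
background-position: center;
background-size: contain;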

The slash syntax will be used more in future versions of CSS. For example, the updated rgb() and hsl() color syntax can take a slash followed by a number to indicate the opacity, like so: rgb(0 0 0 / 0.5). That way, there’s no need to switch between rgb() and rgba(). This already works in all browsers, except Internet Explorer 11.

Porky Pig positioned above the circles

Both the size and positioning here are a little arbitrary, so play around with that as you see fit. We’re a lot closer to what we want, but we now need to get it so the bottom portion of Porky sits behind the red circles while his top half remains visible.

The trick

We need to transpose both the circles as well as Porky in 3D space. If we want to rotate Porky, there are a few requirements we need to meet:

  • He should not clip through the background.
  • We should not rotate him so far that the image distorts.
  • His lower body should be below the red circles and his upper body should be above them.

To make sure Porky doesn’t clip through the background, we first move the circles in the Z direction to make them appear closer to the viewer. Because preserve-3d is applied, they also zoom in a bit, but if we only move them a smidge, the zoom effect isn’t noticeable and we end up with enough space between the background and the circles:

transform: translateZ(20px);

Now Porky. We’re going to rotate him around the X-axis, causing his upper body to move closer to us, and the lower part to move away. We can do this with:

transform: rotateX(-10deg);

This looks pretty bad at first. Porky is partially hidden behind the blue background, and he’s also clipping through the circles in a weird way.

Porky Pig partially clipped by the background and the circles

We can solve this by moving Porky “closer” to us (like we did with the circles) using translateZ(), but a better solution is to change the position of our rotation point. Right now it happens from the center of the image, causing the lower half of the image to rotate away from us.

If we move the starting point of the rotation toward the bottom of the image, or even a little bit below that, then the entirety of the image rotates toward us. And because we already moved the circles closer to us, everything ends up looking as it should:

transform: rotateX(-10deg);
transform-origin: center 120%;

Porky Pig emerges from the circle, with his legs behind the circles but his head above them.

To get an idea of how everything works in 3D, click “show debug” in the following Pen:

Animation

If we were keeping things as they are (a static image), then we wouldn’t have needed to go through all this trouble. But when we animate things, we can reveal the layering and enhance the effect.

Here’s the animation I’m going for: Porky starts out small at the bottom behind the circles, then zooms in, emerging from the blue background over the red circles. He stays there for a bit, then moves back out again.

We’ll use transform for the animation to get the best performance. And because we’re doing that, we need to make sure we keep the rotateX in there as well.

@keyframes zoom {
  0% {
    transform: rotateX(-10deg) scale(0.66);
  }
  40% {
    transform: rotateX(-10deg) scale(1);
  }
  60% {
    transform: rotateX(-10deg) scale(1);
  }
  100% {
    transform: rotateX(-10deg) scale(0.66);
  }
}

Soon, we’ll be able to directly set different transforms, as browsers have started implementing them as individual CSS properties. That means that repeating that rotateX(-10deg) will eventually be unnecessary; but for now, we have a little bit of duplication.
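As a rough sketch of where that’s heading (assuming your target browsers support the individual transform properties), the first keyframe could eventually look like this:

0% {
  rotate: x -10deg; /* constant across all keyframes */
  scale: 0.66;      /* the only value that actually animates */
}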

We zoom in and out using the scale() function and, because we’ve already set a transform-origin, scaling happens from the center-bottom of the image, which is precisely the effect we want! Porky starts at 66% of his actual size, and between the 40% and 60% keyframes he holds at full size, a little break at the largest point where he fully pops out of the circle frame.

The animation goes on the ::before pseudo-element. To make the animation look a little more natural, we’re using an ease-in-out timing function, which slows down the animation at the start and end.

div::before {
  animation-name: zoom;
  animation-duration: 4s;
  animation-iteration-count: infinite;
  animation-fill-mode: forwards;
  animation-timing-function: ease-in-out;
}
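If you prefer, those declarations can be condensed into the animation shorthand:

div::before {
  animation: zoom 4s ease-in-out infinite forwards;
}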

What about reduced motion?

Glad you asked! For people who are sensitive to animations and prefer reduced or no motion, we can reach for the prefers-reduced-motion media query. Instead of removing the full animation, we’ll target those who prefer reduced motion and use a more subtle fade effect rather than the full-blown animation.

@media (prefers-reduced-motion: reduce) {
  @keyframes zoom {
    0% {
      opacity: 0;
    }
    100% {
      opacity: 1;
    }
  }

  div::before {
    animation-iteration-count: 1;
  }
}

When we overwrite the @keyframes inside a media query, the browser automatically picks up the new version. This way, we still accentuate the effect of Porky emerging from the circles. And by setting animation-iteration-count to 1, we still let people see the effect once, but then stop to prevent continued motion.

Finishing touches

Two more things we can do to make this a bit more fun:

  • We can create more depth in the image by adding a shadow behind Porky that grows as he emerges and appears to zoom in closer to the viewer.
  • We can turn Porky as he moves, to embellish the pop-out effect even further.

That second part we can implement using rotateZ() in the same animation. Easy breezy.

But the first part requires an additional trick. Because we use an image for Porky, we can’t use box-shadow because that creates a shadow around the box of the ::before pseudo-element instead of around the shape of Porky Pig.

That’s where filter: drop-shadow() comes to the rescue. It looks at the opaque parts of the element and adds a shadow to that instead of around the box.

@keyframes zoom {
  0% {
    transform: rotateX(-10deg) scale(0.66);
    filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0));
  }

  40% {
    transform: rotateZ(-10deg) rotateX(-10deg) scale(1);
    filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5));
  }

  60% {
    transform: rotateZ(-10deg) rotateX(-10deg) scale(1);
    filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5));
  }

  100% {
    transform: rotateX(-10deg) scale(0.66);
    filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0));
  }
}

And that’s how I re-created the Looney Tunes animation of Porky Pig. All I can say now is, “That’s all Folks!”



How to Keep Mobile Phones Secure

Smartphones are an inseparable part of our lives. If our phone is taken away from us for even a day, for most of us it’s like being deprived of a basic need. We store all kinds of data on our phones: contacts, photos, videos, personal data, documents. We use numerous apps to make our lives easier: banking, insurance, online shopping, stocks, real estate. We rely on our phones for communication and socialising: chatting and video calling, social media, emails, professional groups. The list of what phones can be used for is endless.

Needless to say, if you lose your phone or it gets stolen, all that data and information is at risk. If there is no screen lock on the phone, whoever has or finds it can do serious damage if they want to. For instance:

Creating an API Story With Mike Amundsen: Podcast

Applying an API-first approach means designing an API so that it has consistency, as well as adaptability, regardless of the types of development projects to which it's applied. Designing with this approach means building an API that is more than just a byproduct of an internal system.

Developers should be able to quickly and easily understand how your API works and how it can integrate with other applications. Only then can they write the kind of elegant code that will allow them to efficiently interact with other systems.

Twisted Colorful Spheres with Three.js

I love blobs and I enjoy looking for interesting ways to change basic geometries with Three.js: bending a plane, twisting a box, or exploring a torus (like in this 10-min video tutorial). So this time, my love for shaping things will be the excuse to see what we can do with a sphere, transforming it using shaders. 

This tutorial will be brief, so we’ll skip the basic render/scene setup and focus on manipulating the sphere’s shape and colors, but if you want to know more about the setup check out these steps.
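That said, for reference, a minimal setup might look roughly like the sketch below. The uniform values are placeholder assumptions, vertexShader and fragmentShader are assumed to hold the shader code we’ll write next, and the #pragma glslify imports in those shaders imply a glslify build step:

import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 4;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const material = new THREE.ShaderMaterial({
  vertexShader,   // the vertex shader built in this tutorial
  fragmentShader, // the fragment shader built in this tutorial
  uniforms: {
    uTime: { value: 0 },
    uSpeed: { value: 0.2 },
    uNoiseDensity: { value: 1.5 },
    uNoiseStrength: { value: 0.2 },
    uFrequency: { value: 3 },
    uAmplitude: { value: 6 },
    uIntensity: { value: 7 },
  },
});

// plenty of segments so the displacement stays smooth
const sphere = new THREE.Mesh(new THREE.SphereGeometry(1, 64, 64), material);
scene.add(sphere);

const clock = new THREE.Clock();
(function tick() {
  material.uniforms.uTime.value = clock.getElapsedTime();
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
})();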

We’ll go with a more rounded than irregular shape, so the premise is to deform a sphere and use that same distortion to color it.

Vertex displacement

As you’ve probably been thinking, we’ll be using noise to deform the geometry by moving each vertex along the direction of its normal. Think of it as if we were pushing each vertex from the inside out with different strengths. I could elaborate more on this, but I’d rather point you to this article by The Spite, aka Jaume Sanchez Elias; he explains this so well! I bet some of you have stumbled upon this article already.

So in code, it looks like this:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And now we should see a blobby sphere:

See the Pen Vertex displacement by Mario (@marioecg) on CodePen.

You can experiment and change its values to see how the blob changes. I know we’re going with a more subtle and rounded distortion, but feel free to go crazy with it; there are audio visualizers out there that deform a sphere to the point that you don’t even think it’s based on a sphere.

Now, this already looks interesting, but let’s add one more touch to it next.

Noitation

…is just a word I came up with to combine noise with rotation (ba dum tss), but yes! Adding some twirl to the mix makes things more compelling.

If you’ve ever played with Play-Doh as a child, you have surely molded a big chunk of clay into a ball, grabbed it with both hands, and twisted in opposite directions until the clay tore apart. This is kind of what we want to do (except for the breaking part).

To twist the sphere, we are going to generate a sine wave from top to bottom of the sphere. Then, we are going to use this top-bottom wave as a rotation for the current position. Since the values increase/decrease from top to bottom, the rotation is going to oscillate as well, creating a twist:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

Notice how the waves emerge from the top; it’s soothing. Some of you might find this movement therapeutic, so take some time to appreciate it and think about what we’ve learned so far…

See the Pen Noitation by Mario (@marioecg) on CodePen.

Alright! Now that you’re back let’s get on to the fragment shader.

Colorific

If you take a close look at the previous shaders, you’ll see, almost at the end, that we’ve been passing the normals to the fragment shader. Remember that we want to use the distortion to color the shape, so first let’s create a varying to pass that distortion through:

varying float vDistort;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistort = distortion; // Train goes to the fragment shader! Tchu tchuuu

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And use vDistort to color the pixels instead:

varying float vDistort;

uniform float uIntensity;

void main() {
  float distort = vDistort * uIntensity;

  vec3 color = vec3(distort);

  gl_FragColor = vec4(color, 1.0);
}

We should get a kind of twisted, smokey black and white color like so:

See the Pen Colorific by Mario (@marioecg) on CodePen.

With this basis, we’ll take it a step further and use it in conjunction with one of my favorite color functions out there.

Cospalette

The cosine palette is a very useful function for creating and controlling color with code, based on four parameters: brightness, contrast, the oscillation of the cosine, and the phase of the cosine. I encourage you to watch Char Stiles explain this further, which is soooo good. Final s/o to Inigo Quilez, who wrote an article about this function some years ago; for those of you who haven’t stumbled upon his genius work, please do. I would love to write more about him, but I’ll save that for a poem.

Let’s use cospalette to input the distortion and see how it looks:

varying vec2 vUv;
varying float vDistort;

uniform float uIntensity;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}   

void main() {
  float distort = vDistort * uIntensity;

  // These values are my fav combination, 
  // they remind me of Zach Lieberman's work.
  // You can find more combos in the examples from IQ:
  // https://iquilezles.org/www/articles/palettes/palettes.htm
  // Experiment with these!
  vec3 brightness = vec3(0.5, 0.5, 0.5);
  vec3 contrast = vec3(0.5, 0.5, 0.5);
  vec3 oscillation = vec3(1.0, 1.0, 1.0);
  vec3 phase = vec3(0.0, 0.1, 0.2);

  // Pass the distortion as the input of cosPalette
  vec3 color = cosPalette(distort, brightness, contrast, oscillation, phase);

  gl_FragColor = vec4(color, 1.0);
}

¡Liiistoooooo! See how the color palette behaves similarly to the distortion, because we’re using it as the input. Swap it for vUv.x or vUv.y to see different results from the palette, or even better, come up with your own input!
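If you do try vUv, remember that the vertex shader has to pass it along first; a minimal addition to the vertex shader shown earlier would be:

// in the vertex shader
varying vec2 vUv;

void main() {
  vUv = uv; // forward the built-in uv attribute to the fragment shader
  // ...the rest of the shader stays the same
}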

See the Pen Cospalette by Mario (@marioecg) on CodePen.

And that’s it! I hope this short tutorial gave you some ideas to apply to anything you’re creating or inspired you to make something. Next time you use noise, stop and think if you can do something extra to make it more interesting and make sure to save Cospalette in your shader toolbelt.

Explore and have fun with this! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.

I hope you learned something new. Till next time! 

References and Credits

Thanks to all the amazing people that put knowledge out in the world!


From Design To Developer-Friendly React Code In Minutes With Anima

The promise of seamless design-to-code translation goes back to the early WYSIWYG page builders. Despite the admirable goal, their biggest flaw (among many) was the horrible code they generated. Skepticism remains to this day, and whenever this idea reappears, the biggest concerns are always related to the quality and maintainability of the code.

This is about to change, as new products have made great leaps in the right direction. Their ultimate goal is to automate the design-to-code process, but not at the cost of code quality. One of these products, Anima, is trying to finally bridge the gap by providing a fully fledged design-to-development platform.

What’s Anima?

Anima is a design-to-development tool. It aims to turn the design handoff process into a continuous collaboration. Designers can use Anima to create fully responsive prototypes that look and work exactly like the finished product (no coding required). Developers, in turn, can take these designs and export them into developer-friendly React/HTML code. Instead of coding UI from scratch, they are free to focus on logic and architecture.

It does that with the help of a plugin that connects directly to your design tool and allows you to configure designs and sync them to Anima’s web platform. That’s where the rest of the team can access the prototype, discuss it, and pick useful specs or assets. Aside from the collaboration functionality, it gives developers a headstart thanks to the generated code.

This could make a big difference in the traditional back and forth dance that goes between designers and developers. It keeps everything in one place, in sync, and allows both sides to make changes using either code or design tools.

Installing The Plugin And Setting Up A Project

Getting started with Anima is simple. You first need to create an account and then install the plugin. While I’ll be using Figma for this walkthrough, Anima supports all of the major design tools: Sketch, Figma, and Adobe XD.

Once this is done, make sure you create a project on Anima’s platform — that’s where our designs will appear when we sync them.

The plugin itself is separated into three main sections, each with a list of options. Most of what we’ll be doing is simply selecting one of those options and then applying a specific layer or frame in Figma.

Creating A Responsive Prototype

For the purpose of the article, we have designed an onboarding experience that will be transformed into an interactive prototype. So far we have prepared screens for the three most common breakpoints and we have linked them together using Figma’s prototyping features.

One of the interesting things we can achieve with Anima is making prototypes that fit all screen sizes. Traditional prototypes made of clickable images are static and often fail under different screen sizes.

To do that, click on "Breakpoints" option and Anima will ask you for the frames that you want to connect. Select all of the frames to add them as breakpoints. Then confirm your selection by clicking on "Done".

Once you are ready, click on "Preview in browser" to see the result. That’s when Anima will convert your designs into code.

The first thing you’ll notice is that the prototype is now transformed into HTML and CSS. All the content is selectable and reflows as the screen is resized. This is most visible when you select the "Responsive" mode in the prototype previewer and play with different screen sizes.

To achieve smoother transitions, it’s important to use Figma’s constraint features when designing your components. Make sure to also check the box "Use Figma Constraints" in the "Layout" section of the plugin.

Bring Your Designs To Life With Smart Layers

We can take things a little bit further. Since Anima converts designs into code, the possibilities are endless for the things we can add to make our prototype more realistic.

Animations and hover effects would be a great way to make the prototype more alive and to impress stakeholders. Anima offers a variety of options that can be applied to any layer or component. In our case, we’ll select the headline layer, then choose the "Entrance animation" and "Fade In". In the delay field, we’ll add 0.5.

For each field, we’ll add a glow effect on hover. Select the field layer, then "Hover effect" and choose "Glow". Repeat the same for the button.

Now that we have applied all the changes, we can see that the prototype starts to feel like a real product.

One of the unique features that Anima offers is the ability to add live fields and forms to prototypes. Since we are designing an onboarding experience, this will actually be really useful for us. Data entry is one of the biggest churn points in any product experience and it’s really hard to test out ideas without taking it into account.

Similar to how we added the previous effects, we now select the field component and choose "Text field". From there, we’ll have to choose the type of field that we need. If we choose a password field, for example, input will be hidden and Anima will add a show/hide functionality to the field.

As you can see, fields now work as intended. It’s also possible to gather all the data collected from those fields in a spreadsheet. Select the "Continue" button and then click on the "Submit Button" option in Anima. This will open an additional dialog, where we need to check the box "Add to Spreadsheet" and select redirect destinations in case of success or failure.

Next, we’ll add a Lottie animation for our success screen as it will be a great way to make the experience a bit more engaging. For that, we need to add a placeholder layer in the place of the animation, then select it and choose the "Video / GIF / Lottie" option in the plugin.

Then we’ll paste the URL of our Lottie animation and check the boxes of "Autoplay" and "No controls". In our case, we don’t want to have any video player controls, since this is a success animation.

Apply the changes and open the preview mode to see the results. As you can see, when we fill out the fields and submit the form, we get redirected to our success page, with a looping animation.

Share Designs With The Rest Of The Team

Up until that point, we were working on a draft that was visible only to us. Now it’s time to share it with the rest of the team. The way to do this in the app is by clicking on “Preview in browser”, checking how it looks, and, if you’re satisfied, continuing with “Sync”.

Everyone invited to the project will now have access to the designs and will be able to preview, leave comments, and inspect code.

Developers Can Get Reusable React Code

As mentioned earlier, as developers, we are usually skeptical of tools that generate code, mostly because writing something from scratch is always faster than refactoring something that was poorly written. To avoid this, Anima has adopted some best practices to keep the code clean, reusable, and concise.

When we switch to the "Code" mode, we can hover and inspect elements of our design. Whenever we select an element, we’ll see the generated code underneath. The default view is React, but we can also switch to HTML and CSS. We can also adjust preferences in the syntax and naming conventions.

The classes reuse the names of the layers within your design tool, but both designers and developers can rename the layers, too. Still, it’s important to agree on unified naming conventions that would be clear and straightforward to both designers and developers.

Even if we have left some layers unnamed, developers can actually override them and make changes when necessary. This experience reminds me of Chrome’s Inspect element feature, and all the changes are saved and synced with the project.

If you are using Vue or Angular, it’s expected that Anima will start supporting these frameworks in the near future as well.

Looking Forward

As we can see, the gap between design and code keeps narrowing. For those who write code, using such a tool is very practical, as it can cut out a lot of repetitive frontend work. For those who design, it allows prototyping, collaboration, and syncing that would be difficult to achieve by sending static images back and forth.

What’s already certain is that Anima eliminates a lot of wasteful activities in the hand-off process and allows both designers and developers to focus on what matters: building better products. I look forward to seeing what comes up next in Anima!

How to Add New Users and Authors to Your WordPress Blog

Do you want to add new users and authors to your blog?

WordPress comes with a built-in user management system. This lets you add users with different roles and permission levels.

In this article, we will show you how to add new users and authors to your WordPress website.

Adding new users and authors to your WordPress website

Adding a New User or Author on Your WordPress Website

There are 3 ways to add new users to your WordPress website. You can add users manually, let users register themselves for free, or create a paid membership site where users pay to register.

Here’s what we’re going to cover in this article. Simply click on the quick links to jump straight to the section you need.

Manually Adding a New User or Author to Your Website

If you want to add a small number of people to your website, then this is easy to do with WordPress’s built-in user management system.

This method is ideal for:

  • Small businesses that have several different employees managing their website.
  • Organizations such as churches and nonprofits that have volunteers updating their website.
  • Blogs with multiple authors, such as a fashion blog that you are writing with some friends.
  • Online stores that have several people managing inventory, shipping items, etc.

You simply need to go to the Users » Add New page in your WordPress admin area. Next, you just have to fill out the form to create a new user.

Fill out the form to add a new user to your website

On the form, you first need to enter a username. The user can use this or their email address to log in.

Tip: The WordPress username can’t be easily changed later, but all the other details can.

Next, enter the user’s email address. Double-check that you are using the correct email address. Users will need this in order to reset their passwords and receive email notifications.

After that, you can enter the first name, last name, and website URL. Since these are optional fields, you can also leave them blank. Users can edit their own profiles to complete these fields later.

In the next step, you will need to choose a password. We recommend using an online strong password generator for this purpose.

Tip: You can use the ‘Generate password’ button to automatically create a strong password.

Below the password field, you will see a checkbox to send the user an email. If you check this, the user will receive an email letting them know how to log in. This will also have a link, so they can set a different password if they want.

The last option on the page is to choose a WordPress user role from the dropdown list.

The dropdown list of default user roles in WordPress

Each user role comes with a different set of capabilities. Subscriber is the least powerful role, and Administrator is the most powerful role. You need to choose a role depending on what tasks a user will be performing on your website.

You may already know what role you want to give your user. If so, select the role, then click the ‘Add New User’ button at the bottom of the screen.

Entering the details for your new user in WordPress

If you’re unsure about the role, don’t worry. We have a detailed explanation of the roles in the next section of this article.

Tip: Some plugins create additional user roles. For instance, WooCommerce adds ‘Customer’ and ‘Shop Manager’ roles. All in One SEO adds the ‘SEO Manager’ and ‘SEO Editor’ roles. Simply check the plugin’s documentation to find out about any additional roles that you see in this list.

Additional user roles created by WooCommerce and All in One SEO

Understanding User Roles in WordPress

WordPress comes with these default user roles:

  • Administrator
  • Editor
  • Author
  • Contributor
  • Subscriber

Tip: If you have a multisite installation of WordPress, there is also a ‘Super Admin’ role. These users can manage all the websites, whereas regular Administrators manage just one site.

Administrator

An administrator can perform all tasks on your WordPress site.

You should only assign this role to users who you fully trust. You should also feel confident about their technical skills.

With the administrator user role, a user can install plugins, change themes, delete content, and even delete other users. This includes other administrators.

You can learn more about the Administrator role here.

Editor

An editor can add, edit, publish, and delete their own WordPress posts. They can also do all of these actions for posts by all other users.

They cannot access website settings, plugins, themes, and other admin features.

This role is useful if you have an editor for your site who manages a team of authors and publishes content on a regular basis.

You can learn more about the Editor role here.

Author

Authors can add, edit, and publish their own posts. They can upload files, too.

They can’t edit or publish other people’s posts or access features like plugins, themes, settings, and tools.

You may want to use a plugin to restrict authors so they can only write in a specific category.

You could also let authors revise their published posts. Again, you will need to use a plugin to extend the Author user role.

You can learn more about the Author role here.

Contributor

A contributor can add and edit their own posts but cannot publish them.

However, they cannot edit other users’ posts or access features like plugins, themes, settings, and tools.

It’s important to note that contributors cannot upload media files, such as images. The easiest way to get around this is to get contributors to upload their post’s image(s) through a file upload form.

That way, the image(s) can be saved straight to the WordPress media library. This makes it easy for an editor or administrator to add them to the post.

You can learn more about the Contributor role here.

Subscriber

The subscriber role does not let users add or edit posts in any way.

With the default settings, subscribers can create a profile and save their details. This lets them fill in those details more quickly when leaving comments.

You can also use a membership plugin or LMS plugin to create members-only content that is available to subscribers. We will come to that later in this article.

You can learn more about the Subscriber role here.

To find out more about all the different user roles in WordPress and how they relate to one another, check out our beginner’s guide to WordPress user roles and permissions.

Managing Users in WordPress

As an administrator, you can add and remove users from your WordPress site at any time. After you have added a user, you can edit their profile at any time and change any information including passwords.

Simply click on the Users tab in your WordPress admin to go to the user page. You can edit or delete a user at any time.

Managing users in WordPress

You can edit the user’s profile to change their password, change their role, and more. You can also bulk edit users to change their role, if you want to upgrade or downgrade several users’ role at the same time.

Users can also edit their own profile by going to Users » Profile in the WordPress dashboard. They can add a profile picture and change most of their details, but they cannot change their role.

Open Your WordPress Site for Anyone to Register for Free

What if you want to let users register on your site for free?

It would be a lot of work to add each user manually. Instead, you can let them create their own account.

First, you need to go to Settings » General in your WordPress admin and check the ‘Anyone can register’ box.

Enabling public registration for your website

By default, new users will be given the Subscriber role. Go ahead and change this to any role you want using the dropdown.

Warning: We recommend only letting users register as Subscribers or Contributors. If you let users register as Authors, they could publish a post without approval. Never use Administrator as the default setting.

Don’t forget to click the ‘Save Changes’ button at the bottom of the page to store your changes.

You also need to add a login form to your site. The best way to do this is with the WPForms plugin. Just follow our guide on how to allow user registration on your WordPress site for help with this.

Tip: You can also disable the WordPress admin bar for subscribers or other user roles.

Another way to add new users to your site is to create a paid membership program that users sign up for.

This allows you to sell members-only content, add premium content behind a paywall, sell online courses, and more.

To do this, you need a WordPress membership plugin.

We recommend using MemberPress. It’s the best membership and course creation plugin with all the functionality and flexibility you need.

Just some of the setup options in MemberPress

MemberPress lets you lock specific posts and pages on your site so that only registered, paying users can access them. Many sites offer premium content like this as a way to make money online.

With MemberPress, it’s easy to create different access levels. For instance, you might offer a Bronze, Silver, and Gold plan. Or, you could create separate courses for users to sign up for.

You also get access to powerful tools such as MemberPress’s reports to show you your average member lifetime value, how many members you have in total, and more.

MemberPress allows you to add drip content to create an evergreen membership site, and you can even sell group memberships in WordPress.

For a step-by-step tutorial on setting up MemberPress on your site, check out our ultimate guide to creating a WordPress membership site.

We hope this article helped you learn how to add new users and authors to your WordPress website. You may also want to see our comparison of the best email marketing services and how to add push notifications, so you can connect with your users after they leave your website.


Smashing Podcast Episode 34 With Harry Roberts: What’s The State Of Web Performance?

In this episode, we’re talking about Web Performance. What does the performance landscape look like in 2021? I spoke with expert Harry Roberts to find out.

Show Notes

Harry is running a Web Performance Masterclass workshop with Smashing in May 2021. At the time of publishing, big earlybird discounts are still available.

Weekly Update

Transcript

Drew McLellan: He’s an independent Consultant Web Performance Engineer from Leeds in the UK. In his role, he helps some of the world’s largest and most respected organizations deliver faster and more reliable experiences to their customers. He’s an invited Google Developer Expert, a Cloudinary Media Developer Expert, an award-winning developer, and an international speaker. So we know he knows his stuff when it comes to web performance, but did you know he has 14 arms and seven legs? My Smashing friends, please welcome Harry Roberts. Hi Harry, how are you?

Harry Roberts: Hey, I’m smashing thank you very much. Obviously the 14 arms, seven legs... still posing its usual problems. Impossible to buy trousers.

Drew: And bicycles.

Harry: Yeah. Well I have three and a half bicycles.

Drew: So I wanted to talk to you today, not about bicycles unfortunately, although that would be fun in itself. I wanted to talk to you about web performance. It’s a subject that I’m personally really passionate about but it’s one of those areas where I worry, when I take my eye off the ball and get involved in some sort of other work and then come back to doing a bit of performance work, I worry that the knowledge I’m working with goes out of date really quick... Is web performance as fast-moving these days as I perceive?

Harry: This is... I’m not even just saying this to be nice to you, that’s such a good question because I’ve been thinking on this quite a bit lately and I’d say there are two halves of it. One thing I would try and tell clients is that actually it doesn’t move that fast. Predominantly because, and this is the soundbite I always use, you can bet on the browser. Browsers aren’t really allowed to change fundamentally how they work, because, of course, there’s two decades of legacy they have to uphold. So, generally, if you bet on the browser and you know how those internals work, and TCP/IP that’s never changing... So the certain things that are fairly set in stone, which means that best practice will, by and large, always be best practice where the fundamentals are concerned.

Harry: Where it does get more interesting is... The thing I’m seeing more and more is that we’re painting ourselves into corners when it comes to site-speed issues. So we actually create a lot of problems for ourselves. So what that means realistically is performance... it’s the moving goalpost, I suppose. The more the landscape or the topography of the web changes, and the way it’s built and the way we work, we pose ourself new challenges. So the advent of doing a lot more work on the client poses different performance issues than we’d be solving five years ago, but those performance issues still pertain to browser internals which, by and large, haven’t changed in five years. So a lot of it depends... And I’d say there’s definitely two clear sides to it... I encourage my clients lean on the browser, lean on the standards, because they can’t just be changed, the goalposts don’t really move. But, of course, that needs to meld with more modern and, perhaps slightly more interesting, development practices. So you keep your... Well, I was going to say "A foot in both camps" but with my seven feet, I’d have to... four and three.

Drew: You mentioned that the fundamentals don’t change and things like TCP/IP don’t change. One of the things that did change in... I say "recent years", this is actually probably going back a little bit now but, is HTTP in that we had this established protocol HTTP for communicating between clients and servers, and that changed and then we got H2 which is then all binary and different. And that changed a lot of the... It was for performance reasons, it was to take away some of the existing limitations, but that was a change and the way we had to optimize for that protocol changed. Is that now stable? Or is it going to change again, or...

Harry: So one thing that I would like to be learning more about is the latter half of the question, the changing again. I need to be looking more into QUIC and H3 but it’s a bit too far around the corner to be useful to my clients. When it comes to H2, things have changed quite a lot but I genuinely think H2 is a lot of false promise and I do believe it was rushed over the line, which is remarkable considering H1 was launched... and I mean 1.1, that was 1997, so we had a lot of time to work on H2.

Harry: I guess the primary benefit, as web developers understand or perceive it, is unlimited in-flight requests now. So rather than six dispatched and/or six in-flight requests at a time, potentially unlimited, infinite. It brings really interesting problems though because... it’s quite hard to describe without visual aids but you’ve still got the same amount of bandwidth available, whether you’re running H1 or H2, the protocol doesn’t make your connection any faster. So it’s quite possible that you could flood the network by requesting 24 files at once, but you don’t have enough bandwidth for that. So you don’t actually get any faster because you can only manage, perhaps, a fraction of that at a time.

Harry: And also what you have to think about is how the files respond. And this is another pro-tip I go through in client workshops et cetera. People will look at an H2 waterfall and they will see that instead of the traditional six dispatched requests they might see 24. Dispatching 24 requests isn’t actually that useful. What is useful is when those responses are returned. And what you’ll notice is that you might dispatch 24 requests, so the left-hand side of your waterfall looks really nice and steep, but they all return in a fairly staggered, sequential manner because you need to limit the amount of bandwidth, so you can’t fulfill all responses at the same time.

Harry: Well, the other thing is if you were to fulfill all responses at the same time, you’d be interleaving responses. So you might get the first 10% of each file and the next 20%... 20% of a JavaScript file is useless. JavaScript isn’t usable until 100% of it has arrived. So what you’ll see is, in actual fact, the way an H2 waterfall manifests itself when you look at the responses... it looks a lot more like H1 anyway, it’s a lot more staggered. So, H2, I think it was oversold, or perhaps engineers weren’t led to believe that there are caps on how effective it could be. Because you’ll see people overly sharding their assets and they might have twenty... let’s keep the number 24. Instead of having two big JS files, you might have 24 little bundles. They’ll still return fairly sequentially. They won’t all arrive at the same time because you’ve not magic-ed yourself more bandwidth.

Harry: And the other problem is each request has a constant amount of latency. So let’s say you’re requesting two big files and it’s a hundred milliseconds roundtrip and 250 milliseconds downloading, that’s two times 350 milliseconds. If you multiply up to 24 requests, you’ve still got constant latency, which we’ve decided is 100 milliseconds, so now you’ve got 2400 milliseconds of latency, and 24 times... instead of 250 milliseconds download let’s say it’s 25 milliseconds download, so it’s actually taken longer because the latency stays constant and you just multiply that latency over more requests. So I’ll see clients who will have read that H2 is this magic bullet. They’ll shard... Oh! We can simplify the development process, we don’t need to do bundling or concatenation et cetera, et cetera. And ultimately it will end up slower because you’ve managed to spread your requests out, which was the promise, but your latency stays constant so you’ve actually just got n times more latency in the browser. Like I said, really hard, probably pointless trying to explain that without visuals, but it’s remarkable how H2 manifests itself compared to what people are hoping it might do.

Drew: Is there still benefit in that sharding process in that, okay, to get the whole lot still takes the same amount of time, but by the time you get 100% of the first of the 24 back, you can start working on it and executing it before the 24th is through?

Harry: Oh, man, another great question. So, absolutely, if things go correctly and it does manifest itself in a more H1-looking response, the idea would be file one returns first, then two, three, four, and then they can execute in the order they arrive. So you can actually shorten the aggregate time by ensuring that things arrive, and become usable, in sequence. If we have a look at a webpage’s waterfall and we notice that requests are interleaved, that’s bad news. Because like I said, 10% of a JavaScript file is useless.

Harry: If the server does its job properly and it sends, sends, sends, sends, sends, then it will get faster. And then you’ve got the knock-on benefit that your caching strategy can be more granular. So a really annoying case would be: you update the font size on your date picker widget. In the H1 world you’ve got to cache bust perhaps 200 kilobytes of your site-wide CSS. Whereas now, you just cache bust datepicker.css. So we’ve got offshoot benefits like that, which are definitely, definitely very valuable.

Drew: I guess, in the scenario where you magically did get all your requests back at once, that would obviously bog down the client potentially, wouldn’t it?

Harry: Yeah, potentially. And then what would actually happen is the client would have to do a load of resource scheduling, so what you’d end up with is a waterfall where all your responses return at the same time, then you’d have a fairly large gap between the last response arriving and its ability to execute. So ideally, when we’re talking about JavaScript, you’d want the browser to request them all in the request order, basically the order you defined them in, and the server to return them all in the correct order so that the browser can execute them in the correct order. Because, as you say, if they all returned at the same time, you’ve just got a massive amount of JavaScript to run at once, but it still needs to be scheduled. So you could have a delay of up to a second between a file arriving and it becoming useful. So, actually, H1... I guess, ideally, what you’re after is H2 request scheduling with H1-style responses, so that things can be made useful as they arrive.

Drew: So you’re basically looking for a response waterfall that looks like you could ski down it.

Harry: Yeah, exactly.

Drew: But you wouldn’t need a parachute.

Harry: Yeah. And it’s a really difficult... I think saying it out loud, it sounds really trivial, but given the way H2 was sold, I find it quite... not challenging, because that makes my clients sound... dumb... but it’s quite a thing to explain to them. If you think about how H1 worked, it wasn’t that bad. And when they see responses that look like that, it’s "Oh yeah, I can see it now." I’ve had to teach performance engineers this before. People who do what I do. I’ve had to teach performance engineers that we don’t mind too much when requests are made; we really care about when responses become useful.

Drew: One of the reasons things seem to move on quite quickly, especially over the last five years, is that performance is a big topic for Google. And when Google puts weight behind something like this then it gains traction. Essentially though, performance is an aspect of user experience, isn’t it?

Harry: Oh, I mean... this is a really good podcast; I was thinking about this half an hour ago, I promise you I was thinking about this half an hour ago. Performance is applied accessibility. You’re guaranteeing, or increasing the chances, that someone can access your content, and I think accessibility is always just... "Oh, it’s screen readers, right? It’s people without sight." The decision to build a website rather than an app... decisions like that are about accessing more of an audience. So yeah, performance is applied accessibility, which is therefore user experience. And that user experience could come down to "Could somebody even experience your site?" full stop. Or it could be "Was that experience delightful? When I clicked a button, did it respond in a timely manner?" So I 100% agree, and I think that’s a lot of the reason why Google are putting weight on it: it affects the user experience, and if someone’s going to be trusting search results, we want to try and give that person a site that they’re not going to hate.

Drew: And all the benefits you think about, user experience, things like increased engagement, it’s definitely true, isn’t it? There’s nothing that sends a user away from a site more quickly than a sluggish experience. It’s so frustrating, isn’t it? Using a site where maybe the navigation isn’t that clear, and you click through to a link thinking "Is this what I want? Is it not?" And just the cost of making that click, just waiting, and then you’ve got to click the back button, and then more waiting, and it’s just... you give up.

Harry: Yeah, and it makes sense. If you were to nip to the supermarket and you see that it’s absolutely rammed with people, you’ll do the bare minimum. You’re not going to spend a lot of money there, it’s like "Oh I just need milk", in and out. Whereas if it’s a nice experience, you’ve got "Oh, well, while I’m here I’ll see if... Oh, yeah they’ve got this... Oh, I’ll cook this tomorrow night" or whatever. I think still, three decades into the web, even people who build for the web struggle, because it’s intangible. They struggle to really think that what would annoy you in a real store would annoy you online, and it does, and the stats show that it has.

Drew: I think that in the very early days, I’m thinking late 90s, showing my age here, when we were building websites we very much thought about performance, but we thought about performance from the point of view that the connections people were using were so slow. We’re talking about dial-up, modems over phone lines, 28K, 56K modems. And there was a trend at one point of styling images so that every other line of the image was blanked out with a solid color, to give this... if you can imagine it, like looking through a venetian blind at the image. And we did that because it helped with the compression. Because every other line the compression algorithm could-

Harry: Collapse into one pointer.

Drew: Yeah. And so you’ve significantly reduced your image size while still being able to get... And it became a design element. Everybody was doing it. I think maybe Jeffrey Zeldman was one of the first who pioneered that approach. But what we were thinking about there was primarily how quickly could we get things down the wire. Not for user experience, because we weren’t thinking about... I mean I guess it was user experience because we didn’t want people to leave our sites, essentially. But we were thinking about not optimizing things to be really fast but trying to avoid them being really slow, if that makes sense.

Harry: Yeah, yeah.

Drew: And then, I think as faster connections like ADSL lines became more prevalent, we stopped thinking in those terms and started just not thinking about it at all. And now we’re in a situation where we’re using mobile devices with constrained connections and perhaps slower CPUs, and we’re having to think about it again, but this time in terms of getting an advantage. As well as the user experience side of things, it can have real business benefits in terms of costs and the ability to make profit, can’t it?

Harry: Yeah, tremendously. I mean, not sure how to word it. Not shooting myself in the foot here, but one thing I do try and stress to clients is that site speed will give you a competitive advantage, but it’s only one thing that could give you some competitive advantage. If you’ve got a product no one wants to buy, then it doesn’t matter how fast your site is. And equally, if someone genuinely wants the world’s fastest website, you have to delete your images, delete your CSS, delete your JavaScript, and then see how many products you sell, because I guarantee site speed wasn’t the deciding factor. But studies have shown that there are huge benefits to being fast, to the order of millions. I’m working with a client as we speak. We worked out for them that if they could render a given page one second faster, or rather their Largest Contentful Paint was one second faster, it’s worth 1.8 mil a year, which is... that’s a big number.

Drew: That would almost pay your fee.

Harry: Hey! Yeah, almost. I did say to them "Look, after two years this’ll be all paid off. You’ll be breaking even." I wish. But yeah, there’s the client-facing aspect... sorry, the customer-facing aspect: if you’ve got an E-Com site, they’re going to spend more money. If you’re a publisher, they’re going to read more of an article, or they’ll view more minutes of content, or whatever it is that’s the KPI you measure. On the Smashing site, it could be that they didn’t bounce, they actually clicked through a few more articles, because we made it really easy and fast. And then faster sites are cheaper to run. If you’ve got your caching strategy sorted, you’re going to keep people away from your servers. If you optimize your assets, anything that does have to come from your server is going to weigh a lot less. So much cheaper to run.

Harry: The thing is, there’s a cost to getting there. I think Scott Jehl probably said one of the most... and I heard it from him first, so I’m going to assume he came up with it, but the saying is "It’s easy to make a fast website but it’s difficult to make a website fast." And that is just so succinct. Because the reason web perf might get pushed down the list of things to do is because you might be able to say to a client "If I make your site a second faster, you’ll make an extra 1.8 mil a year," but it can also be "If you just added Apple Pay to your checkout, you’re going to make an extra five mil." So it’s not all about web perf, and it isn’t the deciding factor; it’s one part of a much bigger strategy, especially for E-Com. But the evidence is there: I’ve measured it firsthand with my retail clients, my E-Com clients. The case for it is right there, you’re absolutely right. It’s a competitive advantage, it will make you more money.

Drew: Back in the day, again, I’m harking back to a time past, but people like Steve Souders were some of the first to really start writing and speaking about web performance. And people like Steve were basically saying "Forget the backend infrastructure; all the gains to be had are in the browser, in the front end. That’s where everything slow happens." Is that still the case 15 years on?

Harry: Yeah, yeah. He reran the test between way back then and now, and the gap had actually widened, so it’s actually more costly over the wire. But there is a counter to that, which is that if you’ve got really bad backend performance, if you get out of the gate slowly, there’s only so much you can claw back on the front end. I’ve got a client at the moment whose time to first byte is 1.5 seconds. We can therefore never render faster than 1.5 seconds, so that’s going to be a cap. We can still claw time back on the front end, but if you’ve got a really, really bad time to first byte, if you have got backend slowdowns, there’s a limit on how much faster your front-end performance efforts can get you. But absolutely.

Harry: That is, however, changing because... well, no, it’s not changing, I guess, it’s getting worse. We’re pushing more onto the client. It used to be a case of "Your server is as fast as it is, but after that we’ve got a bunch of question marks," because I hear this all the time: "All our users run on WiFi. They’ve all got desktop machines because they all work from our office." Well, no, now they’re all working from home. You don’t get to choose. So that’s where all the question marks come in, which is where the slowdowns happen, where you can’t really control it. After that, there’s the fact that we now tend to put more on the client. By that I mean entire runtimes on the client. You’ve moved all your application logic off of a server anyway, so your time to first byte should be very, very minimal. It should be a case of sending some bundles from a CDN to my... but you’ve gone from being able to spec your own servers to hoping that somebody’s not got Netflix running on the same machine they’re trying to view your website on.

Drew: It’s a really good point about the way that we design sites. I think the traditional best practice has always been that you should try and cater for all sorts of browsers, all sorts of connection speeds, all sorts of screen sizes, because you don’t know what the user is going to be expecting. And, as you said, you have these scenarios where people say "Oh no, we know all our users are on their work-issued desktop machine, they’re running this browser, it’s the latest version, they’re hardwired into the LAN," but then things happen. One of the great benefits of having web apps is that we can do things like suddenly distribute our workforce back to their homes and they can keep working, but that only holds true if the quality of the engineering was such that somebody spinning up their home machine, which might have IE11 on it or whatever, can still use it. Whether the quality of the work is there determines whether the web fulfills its potential of being a truly accessible medium.

Drew: As you say, there’s this trend to shift more and more stuff into the browser, and, of course, then if the browser is slow, that’s where the slowness happens. You have to wonder "Is this a good trend? Should we be doing this?" There’s one site I particularly think of, which is almost 100% server-rendered. There’s very little JavaScript and it is lightning fast. Every time I go to it I think "Oh, this is fast, who wrote this?" And then I realize "Oh yeah, it was me."

Harry: That’s because you’re on localhost, no wonder it feels fast. It’s your dev site.

Drew: Then, my day job, we’re building out our single page application and shifting stuff away from the server because the server’s the bottleneck in that case. Can you just say that it’s more performant to be in the browser? Or more performant to be on the server? Is it just a case of measuring and taking it on a case-by-case basis?

Harry: I think you need to be very, very, very aware of your context and... genuinely I think an error is... narcissism where people think "Oh, my blog deserves to be rendered in someone’s browser. My blog with a bounce rate of 89% needs its own runtime in the browser, because I need subsequent navigations to be fast, I just want to fetch a... basically a diff of the data." No one’s clicking onto your next article anyway, mate, don’t push a runtime down the pipe. So you need to be very aware of your context.

Harry: And I know that... if Jeremy Keith’s listening to this, he’s probably going to put a hit out on me, but I would say there is a difference between a website and a web app, and the definition of that is very, very murky. But if you’ve got a heavily read-and-write application, something where you’re inputting data, manipulating data, et cetera... Basically, my site is not a web app, it’s a website, it’s read-only; that I would firmly put in the website camp. Something like my accountancy software is a web app, I would say, and I am prepared to suffer a bit of boot-time cost, because I know I’ll be there for 20 minutes, an hour, whatever. So you need a bit of context, and again, maybe narcissism’s a bit harsh, but you need to have a real "Do we need to make this newspaper a client-side application?" No, you don’t. No, you don’t. People have got ad-blockers on, people don’t like commuter newspaper sites anyway. They’re probably not even going to read the article, just rant about it on Facebook. Just don’t build something like that as a client-rendered application, it’s not suitable.

Harry: So I do think there is definitely a point at which moving more onto the client would help, and that’s when you’ve got less sensitivity to churn. Take any E-Com site, for example: I’m doing an audit at the moment for a site that... I think it’s an E-Com site, but it’s 100% on the client. You disable JavaScript and you see nothing, just an empty div id="app". E-Com is... you’re very sensitive to any issues. If your checkout flow is even subtly wrong, I’m off somewhere else. If it’s too slow, I’m off somewhere else. You don’t have the context where someone’s willing to bed in to that app for a while.

Harry: Photoshop. I pop open Photoshop and I’m quite happy to know that it’s going to give me 45 seconds of splash screen, because I’m going to be in there for ages... basically, the 45 seconds is worth the 45 minutes. And it’s so hard to define, which is why I really struggle to convince clients "Please don’t do this," because I can’t just say "How long do you think your user’s going to be there for?" You can proxy it from... if your bounce rate’s 89%, don’t optimize for a second page view. Get that bounce rate down first. I do think there’s definitely a split, but what I would say is that most people fall on the wrong side of that line. Most people put stuff in the client that shouldn’t be there. CNN, for example: you cannot read a single headline on the CNN website until it has fully booted a JavaScript application. The only thing server-rendered is the header and footer, which is the only thing people don’t care about.

Harry: And I feel like that is just... I don’t know how we arrive at that point. It’s never going to be the better option. You deliver a page that is effectively useless which then has to say "Cool, I’ll go fetch what would have been a web app but we’re going to run it in the browser, then I’ll go and ask for a headline, then you can start to... oh, you’re gone." That really, really irks me.

Harry: And it’s no one’s fault, I think it’s the infancy of this kind of JavaScript ecosystem, the hype around it, and also, this is going to sound really harsh but... It’s basically a lot of naïve implementation. Sure, Facebook have invented React and whatever, it works for them. Nine times out of 10 you’re not working at Facebook scale, 95 times out of 100 you’re probably not the smartest Facebook engineers, and that’s really, really cruel and it sounds horrible to say, but you can only get... None of these things are fast by default. You need a very, very elegant implementation of these things to make them correct.

Harry: I was having this discussion with my old... he was a lead engineer on the squad that I was on 10 years ago at Sky. I was talking to him the other day about this and he had to work very hard to make a client rendered app fast, whereas making a server rendered app fast, you don’t need to do anything. You just need to not make it slow again. And I feel like there’s a lot of rose tinted glasses, naivety, marketing... I sound so bleak, we need to move on before I start really losing people here.

Drew: Do you think we have the tendency, as an industry, to focus more on developer experience than user experience sometimes?

Harry: Not as a whole, but I think that problem crops up in the places you’d expect. I don’t know if you’ve seen this, but I’m going to presume you have, you seem to very much have your finger on the pulse: the disparity between HTTP Archive’s data about which frameworks and JavaScript libraries are used in the wild versus the State of JavaScript survey. The survey would say "Oh yes, 75% of developers are using React," whereas fewer than 5% of sites are using React. So, en masse, I don’t think it’s a problem, but in the areas you’d expect it, heavy loyalty to one framework for example, developer experience is... evangelized probably ahead of the user’s. I don’t think developer experience should be overlooked; I mean, everything has a maintenance cost. Your car: there was a decision when it was designed that "Well, if we hide this key functionality away from a mechanic, it’s going to take that mechanic a lot longer to fix it, therefore we don’t do things like that." So there does need to be a balance of ergonomics and usability, I think that is important. But focusing primarily on developer experience is just baffling to me. Don’t optimize for you, optimize for your customer. Your customer pays you; it’s not the other way around.

Drew: So the online echo chamber isn’t exactly representative of reality when you see everybody saying "Oh you should be using this, you should be doing that" then that’s actually only a very small percentage of people.

Harry: Correct, and that’s a good thing, that’s reassuring. The echo chamber... it’s not healthy to have that kind of monoculture perhaps, if you want to call it that. But also, I feel like... and I’ve seen it in a lot of my own work, a lot of developers... As a consultant, I work with a lot of different companies. A lot of people are doing amazing work in WordPress. And WordPress powers 24% of the web. And I feel like it could be quite easy for a developer like that working in something like WordPress or PHP on the backend, custom code, whatever it is, to feel a bit like "Oh, I guess everyone’s using React and we aren’t" but actually, no. Everyone’s talking about React but you’re still going with the flow, you’re still with the majority. It’s quite reassuring to find the silent majority.

Drew: The trend towards static site generators and then hosting sites entirely on a CDN, sort of JAMstack approach, I guess when we’re talking about those sorts of publishing type sites, rather than software type sites, I guess that’s a really healthy trend, would you think?

Harry: I love that, absolutely. You remember when we used to call SSGs "flat file," right?

Drew: Yeah.

Harry: So, I built CSS Wizardry on Jekyll back when Jekyll was called a flat-file website generator. But now, static site generators... huge, huge fan of those. There’s no disadvantage to it really: you pay maybe a slightly larger up-front compute cost of pre-compiling the site, but then your compute cost is... well, Cloudflare fronts it, right? It’s on a CDN, so your application servers are largely shielded from that.

Harry: Anything interactive that does need doing can be done on the client or, if you want to get fancy, one really nice approach, if you’re feeling ambitious, is to use Edge Side Includes, so you can keep your shopping cart server-rendered, but at the edge. You can do stuff like that. Tremendous performance benefits there. It’s not appropriate for a huge swathe of sites, but, like you say, if we’re thinking publishing... for an E-Com site it wouldn’t work: you need realtime stock levels, you need search that doesn’t just... I don’t know, you just need far more functionality. But yeah, I think the Smashing site is a great example, my site is an example, much smaller than Smashing, but yeah, SSGs, flat-filers, I’m really fond of them.
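
For anyone who hasn’t met Edge Side Includes: it’s a small markup language, supported by various CDNs and caches (Akamai and Varnish among them), that lets the edge stitch dynamic fragments into an otherwise static, cached page. A minimal sketch, with a hypothetical fragment URL:

```html
<!-- Static page shell, cached on the CDN -->
<header>Site header, same for everyone</header>

<!-- Assembled at the edge on every request, so the cart stays fresh -->
<esi:include src="/fragments/cart" />

<main>Statically generated article or product content</main>
```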

Drew: Could it work, going deeper into the JAMstack approach of shifting everything into the client, for building an E-Commerce site? I think the Smashing E-Commerce site is essentially using JavaScript in the client and server APIs to do the actual functionality, as serverless functions or what have you.

Harry: Yeah. I’ve got to admit, I haven’t done any stuff with serverless. But yeah, that hybrid approach works. Perhaps my E-Commerce example was a bit clunky, because you could get a hybrid by statically rendering a lot of the stuff, because most things on an E-Com site don’t really change that often. You do what filtering you can on the client. Search is a little more difficult, stock levels do need to go back to an API somewhere, but yeah, you could definitely do a hybrid for an E-Com site.

Drew: Okay, so then it’s just down to monitoring all those performance metrics again, really caring about the network, about latency, about all these sorts of things, because you’re then leaning on the network a lot more to fetch all those individual bits of data. It poses a whole new set of problems.

Harry: Yeah, I mean, you kind of... I wouldn’t say "robbing Peter to pay Paul," but you are going to have to keep an eye on other things elsewhere. I’ve not got fully to the bottom of it, before anyone tweets it at us, but an E-Commerce client of mine recently moved. I worked with them two years ago and that site was already pretty fast. It was built on... I can’t remember which E-Com platform, it was .net, hosted on IIS, server-rendered, obviously, and it was really fast because of that. It was great and we just wanted to maintain it, maybe find a couple of hundred milliseconds here and there, but really good. Halfway through last year, they moved to client-side React for key pages: PDP, the product details page; PLP, the product listing page. And stuff just got markedly slower, much slower. To the point they got back in touch needing help again.

Harry: And one of the interesting things I spotted when they were putting together a case for "We need to actually revert this"... I was thinking about all the... what’s slower; obviously it’s slower, how could doing more work ever be faster, blah blah blah. One of their own bullet points in the audit was: based on projections, their yearly hosting costs had gone up by a factor of 10. Because all of a sudden they’d gone from having one application server and a database to having loads of different gateways, loads of different APIs, loads of different microservices they’re calling on. It increased the surface area of their application massively. And the basic reason for this, I’ll tell you exactly why this happened: the developer, it was a very small team, the developer who decided "I’m going to use React because it seems like fun" didn’t do any business analysis. It was never expected of them to actually put forward a case of how much is it going to cost to do, how much is it going to return, what’s the maintenance cost of this?

Harry: And that’s a thing I come up against really frequently in my work, and it’s never the developer’s fault. It’s usually because the business keeps financials away from the engineering team. If your engineers don’t know the cost or value of their work, then they’re not informed enough to make those decisions, so this guy was never to know that that was going to be the outcome. But yeah, interestingly, moving to a more microservice-y approach... And this is an outlier, and I’m not going to say that that 10-times figure is typical, it definitely seems atypical, but it’s true that there is at least one incident I’m aware of where moving to this approach, because they just had to use more providers, 10x’ed their... there’s your 10x engineer: increased hosting costs by 10 times.

Drew: I mean, it’s an important point, isn’t it, before starting down any particular road with architectural changes: doing your research and asking the right questions. If you were going to embark on some big changes... say you’ve got a really old website and you’re going to restructure it, and you want it to be really fast, and you’re making all your technology choices... it pays, doesn’t it, to talk to different people in the business to find out what they want to be doing. What sort of questions should you be asking other people in the business, as a web developer or as a performance engineer? Who should you be talking to, and what should you be asking them?

Harry: I’ve got a really annoying answer to the "Who should you be talking to?" And the answer is: everyone should be available to you. It will depend on the kind of business, but you should be able to speak to marketing: "Hey, look, we’re using this A/B testing tool. How much does that cost a year, and how much do you think it nets us a year?" And that developer should feel comfortable doing that. I’m not saying developers need to change their attitude; what I mean is the company should make the developers able to ask those kinds of questions. How much does Optimizely cost a year? Right, well, that seems like a lot; does it make that much in return? Okay, whatever, we can make a decision based on that. That’s who you should be talking to, and then the questions you should ask should be things like...

Harry: The amount of companies I work with who won’t give their own developers access to Google Analytics... How are you meant to build a website if you don’t know who you’re building it for? So the questions should be... I work a lot with E-Com clients, so every developer should know things like "What is our average order value? What is our conversion rate? What is our revenue, how much do we make?" These things mean that you can at least understand that "Oh, people spend a lot of money on this website, and I’m responsible for a big chunk of that, and I need to take that responsibility."

Harry: Beyond that, other things are hard to put into context. For me, one of the things that I, as a consultant... and this is very different to an engineer in the business... I need to know how sensitive you are to performance. So if a client gives me their average order value, monthly traffic, and their conversion rate, I can work out how much 100 milliseconds, 500 milliseconds, a second will save them a year, or return them. Just based on those three numbers I can work out roughly "Well, a second’s worth 1.8 mil." It’s a lot harder for someone in the business to get at all that information, because as a performance engineer it’s second nature to me. But if you can work that kind of stuff out, it unlocks a load of doors. Okay, well, if a second’s worth this much to us, I need to make sure that I never lose a second and, if I can, gain a second back. And that will inform a lot of things going forward. A lot of these developers are kept quite siloed. "Oh well, you don’t need to know about business stuff, just shut up and type."
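
As a sketch of that back-of-envelope model: every input below, including the uplift factor, is a made-up illustration rather than a published coefficient; the real exercise is correlating your own speed and revenue data.

```html
<script>
  // Hypothetical inputs a client might give you:
  const monthlySessions = 1000000;  // traffic
  const conversionRate  = 0.02;     // 2% of sessions convert
  const averageOrder    = 75;       // average order value

  // Assumed, purely illustrative sensitivity: each 100 ms saved
  // lifts conversion by a relative 1%. Measure your own!
  const upliftPer100ms  = 0.01;

  const yearlyRevenue    = monthlySessions * conversionRate * averageOrder * 12;
  const worthOfOneSecond = yearlyRevenue * upliftPer100ms * 10; // 10 x 100 ms

  console.log(yearlyRevenue, worthOfOneSecond); // 18,000,000 and 1,800,000
</script>
```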

Drew: I’ve heard you say, it is quite a nice soundbite, that nobody wants a faster website.

Harry: Yeah.

Drew: What do you mean by that?

Harry: Well it kind of comes back to, I think I’ve mentioned it already in the podcast, that if my clients truly wanted the world’s fastest website, they would allow me to go in and delete all their JavaScript, all their CSS, all their images. Give that customer a Times New Roman stack.

Harry: But fast for fast’s sake is... not chasing the wrong thing, but you need to know what fast means to you, because I see it all the time with clients: there’s a point at which you can stop. You might find that your customers are only so sensitive to web perf: getting First Contentful Paint from four seconds to two seconds might give you a 10% increase in revenue, but getting from that two to one might only give you a 1% increase. It’s still twice as fast, but you get minimal gains. So what I need to do with my clients is work out "How sensitive are you? When can we take our foot off the gas?" And also, like I said towards the top of the show, you need to have a product that people want to buy.

Harry: If people don’t want to buy your product, it doesn’t matter how quickly you show it to them; it’ll just disgust them faster, I guess. Is your checkout flow really, really, really seamless on mobile, for example? So there are a number of factors. For me and my clients, it’ll be working out the sweet spot, and also working out "If getting from here to here is going to make you 1.8 mil a year, I can find you that second for a fraction of that cost. If you want me to get you an additional second on top of that, it’s going to get a lot harder." So my cost to you will probably go up, and the return won’t be an extra 1.8, because it’s not linear; you don’t get 1.8 mil for every one second.

Harry: It will tail off at some point. And clients will get to a point where... they’ll still be making gains but it might be a case of your engineering effort doubles, meaning your returns halve, you can still be in the green, hopefully it doesn’t get more expensive and you’re losing money on performance, but there’s a point where you need to slow down. And that’s usually things that I help clients find out because otherwise they will just keep chasing speed, speed, speed and get a bit blinkered.

Drew: Yeah, it is sort of diminishing returns, isn’t it?

Harry: That’s what I was looking for-

Drew: Yeah.

Harry: ... diminishing returns, that’s exactly it. Yeah, exactly.

Drew: And in terms of knowing where to focus your effort... Say you’ve got the bulk of your users, 80% of your users are getting a response within two, three seconds, and then you’ve got 20% who may be in the long-tail that might end up with responses five, ten seconds. Is it better to focus on that 80% where the work’s really hard, or is it better to focus on the 20% that’s super slow, where the work might be easier, but it’s only 20%. How do you balance those sorts of things?

Harry: Drew, can you write all the podcast questions for everyone else? This is so good. Well, a bit of a shout-out to Tim Kadlec; he’s done great talks on this very topic and he calls it "The Long Tail of Web Performance," so anyone listening who wants to look into that, Tim’s done a lot of good firsthand work there. The 80/20, let’s just take those as good example figures: by the time you’re dealing with the 80th percentile, you’re definitely into the edge cases. All your CrUX and Web Vitals data is based around the 75th percentile. I think there’s a lot of value in investing in that top 20th percentile, the worst 20%. Several reasons for this.

Harry: The first thing I’m going to start with is one of the most beautiful, succinct soundbites I’ve ever heard. And the guy who told me it, I can guarantee, did not mean it to be this impactful. I was 15 years old and I was studying product design at GCSE. Our final project was a bar stool, so it was a good sign of things to come. And we were talking about how you design furniture. And my teacher basically said... I don’t know if I should... I’m going to say his name: Mr. Brocklesby.

Harry: He commanded respect but he was one of the lads; we all really liked him. But he was massive in every dimension. Well over six foot tall, and just a big lad. Big, big, big, big man. And he said to us, "If you were to design a doorway, would you design it for the average person?" And our 15-year-old brains are going, "Well, yeah, if everyone’s roughly 5’9 then yeah." He was like, "Well, immediately, Harry can’t use that door." You don’t design for the average person, you design for the extremities, because you want it to be useful to the most people. If you designed a chair for the average person, Mr. Brocklesby wasn’t going to fit in it. So he taught me, from a really, really young age, to design to your extremities.

Harry: And where that becomes really interesting in web perf is... If you imagine a ladder, and you pick up the ladder by the bot... Okay I’ve just realized my metaphor might... I’ll stick with it and you can laugh at me afterwards. Imagine a ladder and you lift the ladder up by the bottom rungs. And that’s your worst experiences. You pick the bottom rung in the ladder to lift it up. The whole ladder comes with it, like a rising tide floats all boats. The reason that metaphor doesn’t work is if you pick a ladder up by the top rung, it all lifts as well, it’s a ladder. And the metaphor doesn’t even work if I turn it into a rope ladder, because a rope ladder then, you lift the bottom rung and nothing happens but... my point is, if you can improve experience for your 90th percentile, it’s got to get that up for your 10th percentile, right?

Harry: And this is why I tell clients... they’ll say to me things like "Oh, well, most of our users are on 4G on iPhones," so, all right, okay, and we start testing 3G on Android, and it’s "No, no, most of our users are on iPhones." Okay... that means your average user is going to have a better experience, but anyone who isn’t already in the 50th percentile just gets left even further behind. So set the bar pretty high for yourself by setting expectations pretty low.

Harry: Sorry, I’ve got a really bad habit of giving really long answers to short questions. But it was a fantastic question and, to try and wrap up, 100% I definitely agree with you that you want to look at that long tail, you want to look at that 80th percentile, because if you take all the experiences on the site and look at the median, and you improve the median, that means you’ve made it even better for people who were already quite satisfied. 50% of people being effectively ignored is not the right approach. And yeah, it always comes back to Mr. Brocklesby telling me, "Don’t design for the average person because then Harry can’t use the door." Oh, for anyone listening, I’m 193 centimeters, so I’m quite lanky, that’s what that is.

Drew: And all those arms and legs.

Harry: Yeah. Here’s another good one as well. My girlfriend recently discovered the accessibility settings in iOS... so everyone has their phone on silent, right? Nobody actually has a phone that actually rings, everyone’s got it on silent. She found that "Oh you know, you can set it so that when you get a message, the flash flashes. And if you tap the back of the phone twice, it’ll do a screenshot." And these are accessibility settings, these are designed for that 95th percentile. Yet she’s like "Oh, this is really useful".

Harry: Same with OXO Good Grips. OXO Good Grips, the kitchen utensils. I’ve got a load of them in the kitchen. They exist because the founder’s wife had arthritis and he wanted to make more comfortable utensils. He designed for the 99th percentile; most people don’t have arthritis. But by designing for the 99th percentile, inadvertently, everyone else is like, "Oh my God, why can’t all potato peelers be this comfortable?" It’s a feel-good anecdote that I like to wheel out in these sorts of scenarios. But yeah, if you optimize for the extremes... well, a rising tide floats all boats: you improve things for the tail end of people and you’re going to capture a lot of even happier customers above that.

Drew: Do you have the OXO Good Grips manual hand whisk?

Harry: I don’t. I don’t, is it good?

Drew: Look into it. It’s so good.

Harry: I do have the OXO Good Grips mandolin slicer which took the end of my finger off last week.

Drew: Yeah, I won’t get near one of those.

Harry: Yeah, it’s my own stupid fault.

Drew: Another example from my own experience of catering for that long tail is that, in the project I’m working on at the moment, that long tail is right at the end, people with the slowest performance; but it turns out, if you look at who those customers are, they’re the most valuable customers to the business-

Harry: Okay.

Drew: ... because they are the biggest organizations with the most amount of data.

Harry: Right.

Drew: And so they’re hitting bottlenecks because they have so much data to display on a page and those pages need to be refactored a little bit to help that use case. So they’re having the slowest experience and they’re, when it comes down to it, paying the most money and making so much more of a difference than all of the people having a really fast experience because they’re free users with a tiny amount of data and it all works nice and it is quick.

Harry: That’s a fascinating dimension, isn’t it? In fact, I had a similar... nowhere near the business impact of what you’ve just described, but I worked with a client a couple of years ago, and their CEO got in touch because their site was slow. Like, slow, slow, slow. Really nice guy as well, a really nice, down-to-earth guy, but he’s minted, like proper rich. And he’s got the latest iPhone, he can afford that. He’s a multimillionaire, and he spends a lot of his time flying between Australia, where he’s from, and Estonia, where he’s now based.

Harry: And he’s flying first class, of course he is. But it means most of his time on his nice, shiny iPhone 12 Pro Max or whatever is spent on airplane WiFi, which is terrible. And it was this really amazing juxtaposition, where he owns the site and he uses it a lot; it’s a site that he uses. Easily their richest customer was their CEO. And he’s in this weirdly privileged position where he’s on a worse connection than Joe Public, because he’s somewhere above Singapore on a Qantas flight getting champagne poured down his neck, and he’s struggling. And that was a really fascinating insight: your 95th percentile can basically go in either direction.

Drew: Yeah, it’s when you start optimizing for using a site with a glass of champagne in one hand that you think "Maybe we’re starting to lose the way a bit."

Harry: Yeah, exactly.

Drew: We talked a little bit about measurement of performance. In my own experience with performance work, it’s really essential to measure everything: A, so you can identify where problems are, but B, so that when you actually start tackling something, you can tell whether you’re making a difference and how much of a difference you’re making. How should we be going about measuring the performance of our sites? What tools can we use, and where should we start?

Harry: Oh man, another great question. So there’s a range of answers, depending on how much time, resource, and inclination there is towards fixing site speed. What I will try and do with a client is... certain off-the-shelf metrics are really good. Load time? I do not care about that anymore. I mean, it’s a decent proxy: if your load time’s 120 seconds, I’m going to guess you don’t have a very fast website. But it’s too obscure and it’s not really customer-facing. I actually think Web Vitals are a really good step in the right direction, because they do measure user experience, but they’re based on technical input. Largest Contentful Paint is a really nice thing to put in front of people, but the technical stuff there is: unblock your critical path, make sure hero images arrive quickly, and make sure your web font strategy is decent. There’s a technical undercurrent to these metrics. Those are really good off the shelf.

Harry: However, if clients have got the time... and it’s usually time, because you want to capture the data, but you need time to actually capture the data... what I try and do with clients is go, "Look, we can’t work together for the next three months because I’m fully booked. So what we can do is really quickly set you up with a free trial of Speedcurve and set up some custom metrics." That means that for a publisher client, a newspaper, I’d be measuring "How quickly was the headline of the article rendered? How quickly was the lead image for the article rendered?" You’re measuring things like start render passively anyway: as soon as you start using any performance monitoring software, you’re capturing your actual performance metrics for free, your First Contentful Paint, Largest Contentful Paint, etc. What I really want to capture is the things that matter to them as a business.

Harry: So, working with an E-Com client at the moment, we’re able to correlate: the faster your start render, what’s the probability of an add to cart? If you can show someone a product sooner, they’re more likely to buy it. And this is a lot of effort to set up; it’s kind of the stretch goal for clients who are really ambitious. But anything that you really want to measure... because, like I say, you don’t really want to measure what your Largest Contentful Paint is, you want to measure your revenue, and was that influenced by Largest Contentful Paint? So the stretch goal, the ultimate thing, would be anything you would see as a KPI for that business. On a newspaper it could be: how far down the article did someone scroll? And does that correlate in any way with First Input Delay? Did people read more articles if CLS was lower? But before we start doing custom, custom metrics, I honestly think Web Vitals is a really good place to start, and it’s also been quite well normalized. It becomes a... I don’t know what the word is. Lowest common denominator, I guess, where everyone in the industry can now discuss performance on this level playing field.

Harry: One problem I’ve got, and I actually need to set up a meeting with the Vitals team, is that I also really think Lighthouse is great, but CLS is 33% of Web Vitals. You’ve got LCP, FID, CLS. CLS is 33% of your Vitals. Vitals is what normally goes in front of your marketing team, your analytics department, because it pops up in Search Console and it’s mentioned in the context of search results pages. So where Vitals is concerned, you’ve got a heavy weighting: 33%, a third of Vitals, is CLS. Yet it’s only 5% of your Lighthouse score. So what you’re going to get is developers who build around Lighthouse, because it can be integrated into tooling; it’s a lab metric. Vitals is field data, it’s RUM.

Harry: So you’ve got this massive disconnect where you’ve got your marketing team saying "CLS is really bad," and developers thinking "Well, it’s 5% of the Lighthouse score that DevTools is giving me, it’s 5% of the score that Lighthouse CLI gives us in CircleCI," or whatever you’re using, yet for the marketing team it’s 33% of what they care about. So the problem there is a bit of a disconnect, because I do think Lighthouse is very valuable, but I don’t know how they reconcile that fairly massive difference where in Vitals, CLS is 33% of your score... well, not score, because you don’t really have one... and in Lighthouse it’s only 5%. It’s things like that that still need ironing out before we can make this discussion seamless.

Harry: But, again, a long answer to a short question. Vitals is really good. LCP is a good user experience metric which can be boiled down to technical solutions; same with CLS. So I think that’s a really good jumping-off point. Beyond that, it’s custom metrics. What I try and get my clients to is a point where they don’t really care how fast their site is, they just care that they made more money than yesterday. And if they did, is that because the site was running faster? If they made less, is that because it was running slower? I don’t want them to chase a mystical two-second LCP, I want them to chase the optimal LCP. And if that actually turns out to be slower than what you think, then whatever, that’s fine.

Drew: So, for the web developer who’s just interested in... who’s not got the budget to spend on tools like Speedcurve and things, they can obviously run tools like Lighthouse just within their browser to get some good measurement... Are things like Google Analytics useful at that level?

Harry: They are and they can be made more useful. Analytics, for many years now, has captured rudimentary performance information. And that is going to be DNS time, TCP and TLS, time to first byte, page download time, which is a proxy... well, whatever, just page download time and load time. So fairly clunky metrics. But it’s a good jump off point and normally every project I start with a client, if they don’t have New Relic or Speedcurve or whatever, I’ll just say "Well let me have a look at your analytics" because I can at least proxy the situation from there. And it’s never going to be anywhere near as good as something like Speedcurve or New Relic or Dynatrace or whatever. You can send custom metrics really, really, really easily off to analytics. If anyone listening wants to be able to send... my site for example. I’ve got metrics like "How quickly can you read the heading of one of my articles? At what point was the About page image rendered? At what point was the call to action that implores you to hire me? How soon is that rendered to screen?" Really trivial to capture this data and almost as trivial to send it to analytics. So if anyone wants to view source on my site, scroll down to the closing body tag and find the analytics snippet, you will see just how easy it is for me to capture custom data and send that off to analytics. And, in the analytics UI, you don’t need to do anything. Normally you’d have to set up custom reports and mine the data and make it presentable. These are a first class citizen in Google Analytics. So the moment you start capturing custom analytics, there’s a whole section of the dashboard dedicated to it. There’s no setup, no heavy lifting in GA itself, so it’s really trivial and, if clients are on a real budget or maybe I want to show them the power of custom monitoring, I don’t want to say "Oh yeah, I promise it’ll be really good, can I just have 24 grand for Speedcurve?" I can start by just saying "Look, this is rudimentary. Let’s see the possibilities here, now we can maybe convince you to upgrade to something like Speedcurve."

Drew: I’ve often found that my gut instinct on how fast something should be, or what impact a change should have, can be wrong. I’ll make a change and think I’m making things faster and then I measure it and actually I’ve made things slower. Is that just me being rubbish at web perf?

Harry: Not at all. I’ve got a really pertinent example of this. Preload... a real quick intro for anyone who’s not heard of preload: loading certain assets on the web is inherently very slow, and the two primary candidates here are background images in CSS and web fonts, because before you can download a background image, you have to download the HTML, which then downloads the CSS, and then the CSS says, "Oh, this div on the homepage needs this background image." So it’s inherently very slow, because you’ve got that entire chunk of CSS time in between. With preload, you can put one line in the HTML, in the head tag, that says, "Hey, you don’t know it yet but, trust me, you’ll need this image really, really, really soon." So you can put a preload in the HTML which preemptively fires off the download. By the time the CSS needs the background image, it’s like, "Oh cool, we’ve already got it, that’s fast." And this is touted as this web perf Messiah.

Harry: Here’s the thing, and I promise you, I tweeted this last week and I’ve been proved right twice since: people hear about preload, and the promise it gives, and it’s also very heavily pushed by Lighthouse; in theory, it makes your site faster. People get so married to the idea of preload that even when I can prove it isn’t working, they will not remove it again. Because "No, but Lighthouse said." Now, this is one of those things where the theory is sound: if you have to wait for your web font, versus downloading it earlier, you’re going to see stuff faster. The problem is, when you think about how the web actually works, on any page you first hit, any brand new domain you hit, you’ve got a finite amount of bandwidth, and the browser is very smart at spending that bandwidth correctly. It will look through your HTML really quickly and make a shopping list: most important thing is the CSS, then it’s this jQuery, then it’s this... and the next few things are these, these, and these, at a lower priority.

Harry: As soon as you start loading your HTML with preloads, you’re telling the browser, "No, no, no, this isn’t your shopping list anymore, buddy, this is mine. You need to go and get these." That finite amount of bandwidth is still finite, but it’s now spent across more assets, so everything gets marginally slower. And I’ve had to prove this twice in the past week, and still people are like, "Yeah, but no, it’s because it’s downloading sooner." No, it’s being requested sooner, but it’s stealing bandwidth from your CSS. You can literally see your web fonts stealing bandwidth from your CSS. So it’s one of those things where you have to, have to, have to follow the numbers.

Harry: I’ve done it before with a large-scale client. If you’re listening to this, you’ve heard of this client, and I was quite insistent that "No, no, your head tags are in the wrong order, and you need to have them in this order, because theoretically that’s how it should work..." Even as I was saying it to the client, I knew I was setting myself up for a fall, because, the way browsers work, it had to be faster. So we deploy this change... to many millions of people, and it got slower. It got slower. And me sitting there, indignantly insisting, "No, but browsers work like this," is useless, because it’s not working. And we reverted it, and I was like, "Sorry! Still going to invoice you for that!" So it’s not you at all.
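
For reference, this is the one-liner being discussed (file names hypothetical). The hint itself is cheap to write, which is exactly why it is so easy to over-use: every preload below competes for the same finite bandwidth as the stylesheet that follows it.

```html
<head>
  <!-- Preemptively fetch assets the browser can't discover until
       the CSS arrives; fonts need crossorigin even when self-hosted. -->
  <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
  <link rel="preload" href="/img/hero.jpg" as="image">

  <!-- ...both of which now share bandwidth with the CSS itself. -->
  <link rel="stylesheet" href="/css/app.css">
</head>
```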

Drew: Follow the numbers.

Harry: Yeah, exactly. "I actually have to charge you more, because I spent time reverting it, took me longer." But yeah, you’re absolutely right, it’s not you, it’s one of those things where... I have done it a bunch of times on a much smaller scale, where I’ll be like "Well this theoretically must work" and it doesn’t. You’ve just got to follow what happens in the real world. Which is why that monitoring is really important.

Drew: As the landscape changes and technology develops, and Google rolls out new technologies that help us make things faster, is there a good way for us to keep up with the changes? Are there any resources we should be looking at to keep our skills up to date when it comes to web perf?

Harry: To quickly address the whole "Google making..." thing, I know it’s slightly tongue-in-cheek, but I’m going to focus on this. I guess, right from the beginning: bet on the browser. Things like AMP, for example, are at best an afterthought of a solution. There’s no replacement for building a fast site, and the moment you start using things like AMP, you’re held to those non-standard standards, at the mercy of the AMP team changing their mind. I had a client spend a fortune licensing a font from an AMP allow-listed font provider; then, at some point, AMP decided, "Oh, actually, no, that font provider, we’re going to block-list them now." So I had a client who’d invested heavily in AMP and this font provider and had to choose: "Well, do we undo all the AMP work, or do we just waste this very big number a year on the web font?" Blah, blah, blah. So I’d be very wary of anyone... I’m a Google Developer Expert, but I don’t know of any gagging order... I can be critical, and I would say: avoid things that are hailed as a one-size-fits-all solution, things like AMP.

Harry: And to dump on someone else for a second: Cloudflare has a thing called Rocket Loader, which is AMP-esque in its endeavor. It’s pitched like, "Oh, just turn this thing on in your CDN, it’ll make your site faster." And actually it’s just a substitute for building your site properly in the first place. So... to address that aspect of it, try and remain as independent as possible, know how browsers work, which immediately means that with the Chrome monoculture you’re back in Google’s lap, but know how browsers work and stick to some fundamental ideologies. When you’re building a site, look at the page. Whether that’s in Figma, or Sketch, or wherever it is, look at the design and say, "Well, that is what a user wants to see first, so I’ll put nothing in the way of that. I won’t lazy-load this main image because that’s daft; why would I do that?" So just think about what you’d want the user to see first. On an E-Com site, it’s going to be that product image, probably the nav at the same time, but reviews of the product, Q&A of the product: lazy-load those. Tuck them behind JavaScript.
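
A sketch of that "prioritize what the user came for" idea on a hypothetical product page: the hero product image loads normally, with nothing in its way, while below-the-fold extras are deferred with the standard loading attribute:

```html
<!-- Above the fold: what the customer actually came to see.
     No lazy-loading on the hero, nothing blocking it. -->
<img src="/img/product.jpg" alt="The product" width="800" height="600">

<!-- Below the fold: reviews and Q&A can wait until the user scrolls. -->
<img src="/img/review-photo.jpg" alt="Customer photo" loading="lazy">
```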

Harry: There are certain fundamental ways of working that will serve you well no matter what technology you’re reading up on, and that’s "Prioritize what your customer prioritizes." Getting that content down sooner is what makes things faster, so don’t put anything in its way. Then there are more tactical things to be aware of, things to keep abreast of... and again, straight back to Google, but web.dev is proving to be a phenomenal resource for framework-agnostic, stack-agnostic insights. So if you want to learn about Vitals, if you want to learn about PWAs, web.dev is really great.

Harry: There are actually very few performance-centric publications. Calibre’s email... I think it’s fortnightly... their perf email is just phenomenal, a really good digest. Keep an eye on the web platform in general: there’s the Performance Working Group, who’ve got a load of proposals on GitHub. And again, back to Google, but no one knows about this website and it’s phenomenal: chromestatus.com. It tells you exactly what Chrome’s working on, what the signals are from other browsers, so if you want to see what the work is on priority hints, you can go and get links to all the relevant bug trackers. Chrome Status shows you milestones for each feature: "This is coming out in M88, this was released in 67," or whatever. That’s a really good thing for quite technical insights.

Harry: But I keep coming back to this thing, and I know I probably sound like "old man shouts at cloud," but stick to the basics. Nearly every single pound, or dollar, or euro I’ve ever earned has come from teaching clients, "You know the browser does this already, right?" or "You know this couldn’t possibly be faster?" And that sounds really righteous of me... I’ve never made a cent off of selling extra technology. Every bit of money I make is about removing, subtracting. If you find yourself adding things to make your site faster, you’re going in the wrong direction.

Harry: Case in point, I’m not going to name... the big advertising/search engine/browser company at all, not going to name them, and I’m not going to name the JavaScript framework, but I’m currently in discussions with a very, very big, very popular JavaScript framework about removing something that’s actively harming, or optionally removing something that would harm the performance of a massive number of websites. And they were like "Oh, we’re going to loop in..." someone from this big company, because they did some research... and it’s like "We need an option to remove this thing because you can see here, and here, and here it’s making this site slower." And their solution was to add more, like "Oh but if you do this as well, then you can sidestep that" and it’s like "No, no, adding more to make a site faster must be the wrong solution. Surely you can see that you’re heading in the wrong direction if it takes more code to end up with a faster site."

Harry: Because it was fast to start with, and everything you add is what makes it slower. And the idea of adding more to make it faster, although... it might manifest itself in a faster website, it’s the wrong way about it. It’s a race to the bottom. Sorry, I’m getting really het up, you can tell I’ve not ranted for a while. So that’s the other thing, if you find yourself adding features to make a site faster, you’re probably heading in the wrong direction, it’s far more effective to make a faster by removing things than it is to add them.

Drew: You’ve put together a video course called "Everything I Have Done to Make CSS Wizardry Fast".

Harry: Yeah!

Drew: It’s a bit different from traditional online video courses, isn’t it?

Harry: It is. I’ll be honest, it’s partly... I don’t want to say laziness on my part, but I didn’t want to design a curriculum that had to be very rigid and take you from zero to hero, because the time involved in doing that is enormous, and time I didn’t know if I would have. So what I wanted was ready-to-go material: just screencast myself talking through it. So it doesn’t start off with "Here is a browser and here’s how it works," so you do need to be at least aware of web perf fundamentals, but it’s hacks and pro tips and real-life examples.

Harry: And because I didn’t need to do a full curriculum, I was able to slam the price way down. So it’s not a big ten-hour course that will take you from zero to hero; it’s nip in and out as you see fit. It’s basically just looking at my site, which is an excellent playground for things that are unstable or... it’s very low risk for me to experiment there. So I’ve just done a video series. It was a ton of fun to record, just tearing down my own site and talking about "Well, this is how this works and here’s how you could use it".

Drew: I think it’s really great how it’s split up into solving different problems. If I want to find out more about optimizing images or whatever, I can think "Right, what does my mate Harry have to say about this?", dip into the video about images and off I go. It’s really accessible in that way; you don’t have to sit through hours and hours of stuff, you can just go to the bit you want, learn what you need to learn, and then get out.

Harry: I think I tried to keep it more... The benefit of not doing a rigid curriculum is you don’t need to watch a certain video first. There’s no intro; it’s just "Go and look around and see what you find interesting", which means that someone suffering with LCP issues can think "Oh well, I’ve got to dive into this folder here", or if they’re suffering with CSS problems they can go dive into that folder. Obviously I have no stats, but I imagine there’s a high abandonment rate on courses, purely because you have to trudge through three hours of intro in case you miss something, and it’s like "Oh, do you know what, I can’t keep doing this every day", and people might just abandon a lot of courses. So my thinking was: just dive in, you don’t need to have seen the preceding three hours, you can just go and find whatever you want. And feedback’s been really, really... In fact, what I’ll do is, it doesn’t exist yet, but I’ll do it straight after the call: anybody who uses the discount code SMASHING15 will get 15% off of it.

Drew: So it’s almost like you’ve performance optimized the course itself, because you can just go straight to the bit you want and you don’t have to do all the negotiation and-

Harry: Yeah, unintentional but I’ll take credit for that.

Drew: So, I’ve been learning all about web performance, what have you been learning about lately, Harry?

Harry: Technical stuff... not really. I’ve got a lot on my "to learn" list, so QUIC and HTTP/3 sort of stuff, I would like to get a bit more working knowledge of that. But I wrote an E-Book during the first lockdown in the UK, so I learned how to make E-Books, which was a ton of fun because they’re just HTML and CSS and I know my way around that. I also learnt very rudimentary video editing for the course, and what I liked about those is that none of it is conceptual work. Obviously, learning a programming language, you’ve got to wrestle concepts, whereas making an E-Book was just workflows and... stuff I’ve never tinkered with before, so it was interesting to learn, but it didn’t require a change of career, so that was quite nice.

Harry: And then, non-technical stuff... I ride a lot of bikes, I fall off a lot of bikes... and because I’ve not traveled at all since last March, nearly a year now, I’ve been doing a lot more cycling and focusing a lot more on... improving that. So I’ve been doing a load of research around power outputs and functional threshold power. I’m doing a training program at the moment, so constantly, constantly exhausted legs, but I’m learning a lot about the physiology around cycling. I don’t know why, because I’ve got no plans of doing anything with it other than keep riding, but it’s been really fascinating. I feel like I’ve been very fortunate during lockdowns, plural, because I’ve managed to stay active. A lot of people will miss out on simple things like a daily commute to the office, a good chance to stretch their legs. In the UK, as you’ll know, cycling has been very much championed, so I’ve been tinkering a lot more with learning about riding bikes from a more physiological aspect, which means... I don’t know, just being a nerd about something else for a change.

Drew: Is there perhaps not all that much difference between performance optimization on the web and performance optimization in cycling? It’s all marginal gains, right?

Harry: Yeah, exactly. And the amount of graphs I’ve been looking at on the bike... I’ve got power data from the bike, so I’ll go out on a ride and come back like "Oh, if I had five more watts here but then saved 10 watts there, I could do this, this, and this the fastest ever"... I’ve been a massive anorak about it. But yeah, you’re right. Do you know what, I think you’ve hit upon something really interesting there. I think that kind of thing is a good sport/pastime for somebody who is a bit obsessive, who does like chasing numbers. There are things on Strava, I mean you’ll know this, but you’ve got your KOMs. I bagged 19 of them last year, which is, for me, a phenomenal amount. And it’s nearly all from obsessing over available data and looking at "This guy that I’m trying to beat was doing 700 watts at this point, so if I could get up to 1,000 and then tail off" and blah, blah, blah... it’s being obsessive. Nerdy. But you’re right, I guess it’s a similar kind of thing, isn’t it? Learning where you can afford to tweak things, or squeeze the last little drops out...

Drew: And you’ve still got limited bandwidth in both cases. You’ve got limited energy and you’ve got limited network connection.

Harry: Exactly, you can’t just magic some more bandwidth there.

Drew: If you, the listener, would like to hear more from Harry, you can find him on Twitter, where he’s @csswizardry, or go to his website at csswizardry.com, where you’ll find some fascinating case studies of his work and find out how to hire him to help solve your performance problems. Harry’s E-Book that he mentioned, and his video course, we’ll link up from the show notes. Thanks for joining us today, Harry. Do you have any parting words?

Harry: I’m not one for soundbites and motivational quotes, but I heard something really, really, really insightful recently. Everyone keeps saying "Oh well, we’re all in the same boat", and we’re not. We’re all in the same storm, and some people have got better boats than others. Some people are in little dinghies, some people have got mega yachts. Oh, is that a bit dreary to end on... don’t worry about Corona, you’ll be dead soon anyway!

Drew: Keep hold of your oars and you’ll be all right.

Harry: Yeah. I was on a call last night with some web colleagues and we were talking about this, and missing each other a lot. The web is, by default, remote; that’s the whole point of the web. But... we’re missing a lot of human connection, so chatting to you for this hour and a bit has been wonderful, it’s been really nice. I don’t know what my parting words are really meant to be, I should have prepared something, but I just hope everyone’s well, hope everyone’s making what they can out of lockdown, and that people are keeping busy.