Collective #696





Crafting Component Libraries: The Elements

Jon Yablonski takes you through the process for crafting the foundational elements that make up a component library based on design fundamentals that can grow and evolve to fit the needs of any interface.


Pika

A free, open-source app to quickly generate beautiful screenshots.


Frontend Predictions for 2022

Jay Freestone’s thoughts on what we might see in the coming year, including the return of micro-frontends, functional JavaScript & the demise of the Jamstack as we know it.


clay.css

Adrian Bece made this CSS library for adding inflated fluffy 3D claymorphism styles to any HTML element.


Charm

Charming libraries and tools to make the command line glamorous.


A New Approach to Solve I/O Challenges in the Machine Learning Pipeline

Background

The drive for training accuracy leads companies to develop complicated training algorithms and collect large amounts of training data, with which single-machine training takes an intolerably long time. Distributed training seems promising in meeting the training speed requirements but faces challenges of data accessibility, performance, and storage system stability in dealing with I/O in the machine learning pipeline.

Solutions

The above challenges can be addressed in different ways. Traditionally, two solutions are commonly used to help resolve data access challenges in distributed training. Beyond that, Alluxio provides a different approach.

Context-Aware Web Components Are Easier Than You Think

Another aspect of web components that we haven’t talked about yet is that a JavaScript function is called whenever a web component is added or removed from a page. These lifecycle callbacks can be used for many things, including making an element aware of its context.

The four lifecycle callbacks of web components

There are four lifecycle callbacks that can be used with web components:

  • connectedCallback: This callback fires when the custom element is attached to the DOM.
  • disconnectedCallback: This callback fires when the element is removed from the document.
  • adoptedCallback: This callback fires when the element is added to a new document.
  • attributeChangedCallback: This callback fires when an attribute is changed, added or removed, as long as that attribute is being observed.
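
Put together, a bare-bones skeleton wiring up all four callbacks (using a hypothetical <demo-element>, with the bodies reduced to logging) looks something like this:

customElements.define(
  "demo-element",
  class extends HTMLElement {
    // Only attributes listed here trigger attributeChangedCallback.
    static get observedAttributes() {
      return ["data-example"];
    }
    connectedCallback() {
      console.log("connected to the document");
    }
    disconnectedCallback() {
      console.log("removed from the document");
    }
    adoptedCallback() {
      console.log("adopted into a new document");
    }
    attributeChangedCallback(name, oldValue, newValue) {
      console.log(`${name} changed from "${oldValue}" to "${newValue}"`);
    }
  }
);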

Let’s look at each of these in action.

Our post-apocalyptic person component

Two renderings of the web component side-by-side, the left is a human, and the right is a zombie.

We’ll start by creating a web component called <postapocalyptic-person>. Every person after the apocalypse is either a human or a zombie and we’ll know which one based on a class — either .human or .zombie — that’s applied to the parent element of the <postapocalyptic-person> component. We won’t do anything fancy with it (yet), but we’ll add a shadowRoot we can use to attach a corresponding image based on that classification.

customElements.define(
  "postapocalyptic-person",
  class extends HTMLElement {
    constructor() {
      super();
      // attachShadow() also sets this.shadowRoot, which the lifecycle callbacks below use.
      const shadowRoot = this.attachShadow({ mode: "open" });
    }
  }
);

Our HTML looks like this:

<div class="humans">
  <postapocalyptic-person></postapocalyptic-person>
</div>
<div class="zombies">
  <postapocalyptic-person></postapocalyptic-person>
</div>

Inserting people with connectedCallback

When a <postapocalyptic-person> is loaded on the page, the connectedCallback() function is called.

connectedCallback() {
  let image = document.createElement("img");
  if (this.parentNode.classList.contains("humans")) {
    image.src = "https://assets.codepen.io/1804713/lady.png";
    this.shadowRoot.appendChild(image);
  } else if (this.parentNode.classList.contains("zombies")) {
    image.src = "https://assets.codepen.io/1804713/ladyz.png";
    this.shadowRoot.appendChild(image);
  }
}

This makes sure that an image of a human is output when the <postapocalyptic-person> is a human, and a zombie image when the component is a zombie.

Be careful working with connectedCallback. It runs more often than you might realize, firing any time the element is moved and could (bafflingly) even run after the node is no longer connected — which can be an expensive performance cost. You can use this.isConnected to know whether the element is connected or not.
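
For example, a minimal sketch of a guard against those extra calls (the setup work itself is left as a placeholder) might look like this:

connectedCallback() {
  // connectedCallback can fire again whenever the element is moved, and may
  // even run when the node is no longer connected, so bail out early.
  if (!this.isConnected) {
    return;
  }
  // ...do the (potentially expensive) setup work here...
}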

Counting people with connectedCallback() when they are added

Let’s get a little more complex by adding a couple of buttons to the mix. One will add a <postapocalyptic-person>, using a “coin flip” approach to decide whether it’s a human or a zombie. The other button will do the opposite, removing a <postapocalyptic-person> at random. We’ll keep track of how many humans and zombies are in view while we’re at it.

<div class="btns">
  <button id="addbtn">Add Person</button>
  <button id="rmvbtn">Remove Person</button> 
  <span class="counts">
    Humans: <span id="human-count">0</span> 
    Zombies: <span id="zombie-count">0</span>
  </span>
</div>

Here’s what our buttons will do:

let zombienest = document.querySelector(".zombies"),
  humancamp = document.querySelector(".humans");

document.getElementById("addbtn").addEventListener("click", function () {
  // Flips a "coin" and adds either a zombie or a human
  if (Math.random() > 0.5) {
    zombienest.appendChild(document.createElement("postapocalyptic-person"));
  } else {
    humancamp.appendChild(document.createElement("postapocalyptic-person"));
  }
});
document.getElementById("rmvbtn").addEventListener("click", function () {
  // Flips a "coin" and removes either a zombie or a human
  // A console message is logged if no more are available to remove.
  if (Math.random() > 0.5) {
    if (zombienest.lastElementChild) {
      zombienest.lastElementChild.remove();
    } else {
      console.log("No more zombies to remove");
    }
  } else {
    if (humancamp.lastElementChild) {
      humancamp.lastElementChild.remove();
    } else {
      console.log("No more humans to remove");
    }
  }
});

Here’s the code in connectedCallback() that counts the humans and zombies as they are added:

connectedCallback() {
  let image = document.createElement("img");
  if (this.parentNode.classList.contains("humans")) {
    image.src = "https://assets.codepen.io/1804713/lady.png";
    this.shadowRoot.appendChild(image);
    // Get the existing human count.
    let humancount = document.getElementById("human-count");
    // Increment it
    humancount.innerHTML = parseInt(humancount.textContent) + 1;
  } else if (this.parentNode.classList.contains("zombies")) {
    image.src = "https://assets.codepen.io/1804713/ladyz.png";
    this.shadowRoot.appendChild(image);
    // Get the existing zombie count.
    let zombiecount = document.getElementById("zombie-count");
    // Increment it
    zombiecount.innerHTML = parseInt(zombiecount.textContent) + 1;
  }
}

Updating counts with disconnectedCallback

Next, we can use disconnectedCallback() to decrement the numbers as humans and zombies are removed. However, we are unable to check the class of the parent element because the parent element with the corresponding class is already gone by the time disconnectedCallback is called. We could set an attribute on the element, or add a property to the object, but since the image’s src attribute is already determined by its parent element, we can use that as a proxy for knowing whether the web component being removed is a human or zombie.

disconnectedCallback() {
  let image = this.shadowRoot.querySelector('img');
  // Test for the human image
  if (image.src == "https://assets.codepen.io/1804713/lady.png") {
    let humancount = document.getElementById("human-count");
    humancount.innerHTML = parseInt(humancount.textContent) - 1; // Decrement count
  // Test for the zombie image
  } else if (image.src == "https://assets.codepen.io/1804713/ladyz.png") {
    let zombiecount = document.getElementById("zombie-count");
    zombiecount.innerHTML = parseInt(zombiecount.textContent) - 1; // Decrement count
  }
}

Beware of clowns!

Now (and I’m speaking from experience here, of course) the only thing scarier than a horde of zombies bearing down on your position is a clown — all it takes is one! So, even though we’re already dealing with frightening post-apocalyptic zombies, let’s add the possibility of a clown entering the scene for even more horror. In fact, we’ll do it in such a way that there’s a possibility any human or zombie on the screen is secretly a clown in disguise!

I take back what I said earlier: a single zombie clown is scarier than even a group of “normal” clowns. Let’s say that if any sort of clown is found — be it human or zombie — we separate them from the human and zombie populations by sending them to a whole different document — an <iframe> jail, if you will. (I hear that “clowning” may be even more contagious than zombie contagion.)

And when we move a suspected clown from the current document to an <iframe>, it doesn’t destroy and recreate the original node; rather it adopts and connects said node, first calling adoptedCallback then connectedCallback.

We don’t need anything in the <iframe> document except a body with a .clowns class. As long as this document is in the iframe of the main document — not viewed separately — we don’t even need the <postapocalyptic-person> instantiation code. We’ll include one space for humans, another space for zombies, and yes, the clowns’ jail… errr… <iframe> of… fun.

<div class="btns">
  <button id="addbtn">Add Person</button>
  <button id="jailbtn">Jail Potential Clown</button>
</div>
<div class="humans">
  <postapocalyptic-person></postapocalyptic-person>
</div>
<div class="zombies">
  <postapocalyptic-person></postapocalyptic-person>
</div>
<iframe class="clowniframeoffun" src="adoptedCallback-iframe.html">
</iframe>

Our “Add Person” button works the same as it did in the last example: it flips a digital coin to randomly insert either a human or a zombie. When we hit the “Jail Potential Clown” button another coin is flipped and takes either a zombie or a human, handing them over to <iframe> jail.

document.getElementById("jailbtn").addEventListener("click", function () {
  if (Math.random() > 0.5) {
    let human = humancamp.querySelector('postapocalyptic-person');
    if (human) {
      clowncollege.contentDocument.querySelector('body').appendChild(document.adoptNode(human));
    } else {
      console.log("No more potential clowns at the human camp");
    }
  } else {
    let zombie = zombienest.querySelector('postapocalyptic-person');
    if (zombie) {
      clowncollege.contentDocument.querySelector('body').appendChild(document.adoptNode(zombie));
    } else {
      console.log("No more potential clowns at the zombie nest");
    }
  }
});

Revealing clowns with adoptedCallback

In the adoptedCallback we’ll determine whether the clown is of the zombie or human variety based on their corresponding image and then change the image accordingly. connectedCallback will be called after that, but we don’t have anything it needs to do, and what it does won’t interfere with our changes. So we can leave it as is.

adoptedCallback() {
  let image = this.shadowRoot.querySelector("img");
  if (this.parentNode.dataset.type == "clowns") {
    if (image.src.indexOf("lady.png") != -1) { 
      // Sometimes, the full URL path including the domain is saved in `image.src`.
      // Using `indexOf` allows us to skip the unnecessary bits. 
      image.src = "ladyc.png";
      this.shadowRoot.appendChild(image);
    } else if (image.src.indexOf("ladyz.png") != -1) {
      image.src = "ladyzc.png";
      this.shadowRoot.appendChild(image);
    }
  }
}

Detecting hidden clowns with attributeChangedCallback

Finally, we have the attributeChangedCallback. Unlike the other three lifecycle callbacks, we need to observe the attributes of our web component in order for the callback to fire. We can do this by adding a static observedAttributes getter to the custom element’s class and having it return an array of attribute names.

static get observedAttributes() {
  return ["attribute-name"];
}

Then, if that attribute changes — including being added or removed — the attributeChangedCallback fires.

Now, the thing you have to worry about with clowns is that some of the humans you know and love (or the ones that you knew and loved before they turned into zombies) could secretly be clowns in disguise. I’ve set up a clown detector that looks at a group of humans and zombies and, when you click the “Reveal Clowns” button, the detector will (through completely scientific and totally trustworthy means that are not based on random numbers choosing an index) apply data-clown="true" to the component. And when this attribute is applied, attributeChangedCallback fires and updates the component’s image to uncover their clownish colors.
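
A rough sketch of that button handler (the revealbtn id and the random selection shown here are assumptions, not the demo’s exact code) could look something like this:

document.getElementById("revealbtn").addEventListener("click", function () {
  let people = document.querySelectorAll("postapocalyptic-person");
  if (people.length) {
    // Pick a random person and mark them as a clown. Setting data-clown
    // triggers attributeChangedCallback, as long as "data-clown" is listed
    // in observedAttributes.
    let suspect = people[Math.floor(Math.random() * people.length)];
    suspect.dataset.clown = "true";
  }
});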

I should also note that the attributeChangedCallback takes three parameters:

  • the name of the attribute
  • the previous value of the attribute
  • the new value of the attribute

Further, the callback lets you make changes based on how much the attribute has changed, or based on the transition between two states.
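
As a tiny hedged illustration (separate from the demo code below), you could use those values to react only to a genuine transition between states:

attributeChangedCallback(name, oldValue, newValue) {
  // Only react when the observed attribute actually changed value.
  if (name === "data-clown" && oldValue !== newValue) {
    console.log(`data-clown went from "${oldValue}" to "${newValue}"`);
  }
}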

Here’s our attributeChangedCallback code:

attributeChangedCallback(name, oldValue, newValue) {
  let image = this.shadowRoot.querySelector("img");
  // Ensures that `data-clown` was the attribute that changed,
  // that its value is true, and that it had an image in its `shadowRoot`
  if (name === "data-clown" && this.dataset.clown && image) {
    // Setting and updating the counts of humans, zombies,
    // and clowns on the page
    let clowncount = document.getElementById("clown-count"),
    humancount = document.getElementById("human-count"),
    zombiecount = document.getElementById("zombie-count");
    if (image.src.indexOf("lady.png") != -1) {
      image.src = "https://assets.codepen.io/1804713/ladyc.png";
      this.shadowRoot.appendChild(image);
      // Update counts
      clowncount.innerHTML = parseInt(clowncount.textContent) + 1;
      humancount.innerHTML = parseInt(humancount.textContent) - 1;
    } else if (image.src.indexOf("ladyz.png") != -1) {
      image.src = "https://assets.codepen.io/1804713/ladyzc.png";
      this.shadowRoot.appendChild(image);
      // Update counts
      clowncount.innerHTML = parseInt(clowncount.textContent) + 1;
      zombiecount.innerHTML = parseInt(zombiecount.textContent) - 1;
    }
  }
}

And there you have it! Not only have we found out that web component callbacks and creating context-aware custom elements are easier than you may have thought, but detecting post-apocalyptic clowns, though terrifying, is also easier than you thought. What kind of devious, post-apocalyptic clowns can you detect with these web component callback functions?



The Search For a Fixed Background Effect With Inline Images

I was working on a client project a few days ago and wanted to create a certain effect on an <img>. See, background images can do the effect I was looking for somewhat easily with background-attachment: fixed;. With that in place, a background image stays in place—even when the page scrolls. It isn’t used all that often, so the effect can look unusual and striking, especially when used sparingly.

It took me some time to figure out how to achieve the same effect only with an inline image, rather than a CSS background image. This is a video of the effect in action:

The exact code for the above demo is available in this Git repo. Just note that it’s a Next.js project. We’ll get to a CodePen example with raw HTML in a bit.

Why use <img> instead of background-image?

There are a number of reasons I wanted this for my project:

  • It’s easier to lazy load (e.g. <img loading="lazy" …>).
  • It provides better SEO (not to mention accessibility), thanks to alt text.
  • It’s possible to use srcset/sizes to improve the loading performance.
  • It’s possible to use the <picture> tag to pick the best image size and format for the user’s browser.
  • It allows users to download and save the image (without resorting to DevTools).

Overall, it’s better to use the image tag where you can, particularly if the image could be considered content and not decoration. So, I wound up landing on a technique that uses CSS clip-path. We’ll get to that in a moment, right after we first look at the background-image method for a nice side-by-side comparison of both approaches.

1. Using CSS background-image

This is the “original” way to pull off a fixed scrolling effect. Here’s the CSS:

.hero-section {
  background-image: url("nice_bg_image.jpg");
  background-repeat: no-repeat;
  background-size: cover;
  background-position: center; 
  background-attachment: fixed;
}

But as we just saw, this approach isn’t ideal for some situations because it relies on the CSS background-image property to call and load the image. That means the image is technically not considered content—and thus unrecognized by screen readers. If we’re working with an image that is part of the content, then we really ought to make it accessible so it is consumed like content rather than decoration.

Otherwise, this technique works well, but only if the image spans the whole width of the viewport and/or is centered. If you have an image on the right or left side of the page like in the example, you’ll run into a number of positioning issues because background-position is relative to the center of the viewport.

Fixing it requires a few media queries to make sure it is positioned properly on all devices.

2. Using the clip-path trick on an inline image

Someone on StackOverflow shared this clip-path trick and it gets the job done well. You also get to keep using the <img> tag, which, as we covered above, might be advantageous in some circumstances, especially where an image is part of the content rather than pure decoration.

Here’s the trick:

.image-container {
  position: relative;
  height: 200px;
  clip-path: inset(0);
}

.image {
  object-fit: cover;
  position: fixed;
  left: 0;
  top: 0;
  width: 100%;
  height: 100%;
}

Check it out in action:

Now, before we rush out and plaster this snippet everywhere, it has its own set of downsides. For example, the code feels a bit lengthy to me for such a simple effect. But, even more important is the fact that working with clip-path comes with some implications as well. For one, I can’t just slap a border-radius: 10px; in there like I did in the earlier example to round the image’s corners. That won’t work—it requires making rounded corners from the clipping path itself.

Another example: I don’t know how to position the image within the clip-path. Again, this might be a matter of knowing clip-path really well and drawing it where you need to, or cropping the image itself ahead of time as needed.

Is there something better?

Personally, I gave up on using the fixed scrolling effect on inline images and am back to using a CSS background image—which I know is kind of limiting.

Have you ever tried pulling this off, particularly with an inline image, and managed it well? I’d love to hear!



How To Price Projects And Manage Scope Creep

I am sure you have read unrealistic articles that suggest there is some scientific approach to pricing that will magically allow you to create an accurate quote. You may also have been led to believe that scope creep should be avoided at all costs, but in the real world, it will always happen.

It is time for us to stop playing this ridiculous game and start running projects in a way that is less like gambling and more like following a robust and reliable process.

Am I exaggerating? Possibly, but let’s look at where things so often go wrong with digital projects.

The Problems With Our Project Process

In my experience, most organizations across any industry run projects in approximately the same way:

  1. Somebody in management requests a project be completed. Unfortunately, this request often lacks details in terms of deliverables and tends only to have vague goals.
  2. A committee of stakeholders is assembled to define the project in detail and decide on the scope.
  3. The detailed scope is then taken to the team that will build it, and they are asked to estimate how long it will take and how much it will cost.
  4. The project is delivered to the specification, emphasizing delivery to time and within budget. As a result, scope creep becomes the enemy.
  5. The project is delivered, and everybody moves on to the next project in their task list.

This approach is far from ideal, especially for digital projects. Digital provides us with unprecedented feedback on user behavior and makes it relatively easy to implement changes (compared to physical products). Yet, once the scope has been defined and a quote provided, the project becomes locked down, with everybody reluctant to make changes as the project progresses.

Yet inevitably, the scope does end up changing, mainly because stakeholders have varying interpretations of what will be built or because they realize mid-project that critical elements are wrong.

In truth, there is nothing wrong with scope creep. Remaining flexible and adapting as you learn more is fundamental to creating excellent digital services. The problem is not with scope creep but the way we run projects.

Unfortunately, because deadlines and costs have been agreed upon, we attempt to deliver these changes within those constraints, leading to corners being cut.

Not that the timelines and costs were accurate in the first place. Digital projects are complicated, often involving specialists and stakeholders working together. As a result, they are notoriously hard to estimate accurately.

I have read many articles that propose methodologies for estimating accurately. However, they are impractical in the real world in almost all cases, mainly because they are too time-consuming to apply. Estimating a project comes down to intuition, experience, and an educated guess!

As anybody who has worked in the field for any time will tell you, most estimates are a work of fiction. We usually don’t know enough upfront even to determine what the right solution is or how users might respond to it. It is therefore impossible to accurately estimate an entire project upfront.

Unfortunately, this ambiguity often leads to unfairly distributing the blame when the project inevitably misses its deadline and goes over budget.

Fortunately, there is a way of providing accurate estimates and managing scope creep, and it involves changing the way we run projects. The secret lies in splitting projects into smaller chunks, so that you never have to scope and price one large, complex project in a single go.

Break Projects Into a Series of Smaller Engagements

I need to be clear at this point. I am not suggesting that ambitious programs of work are wrong. I work for big clients on substantial websites and sprawling programs of work. However, I rarely treat these engagements as a single large project. Instead, I break them down into more manageable projects that I scope one at a time.

When a client approaches me wanting to undertake a digital project (be it large or small), I typically break it down into four stages that happen in the following order:

  1. Discovery,
  2. Alpha,
  3. Minimum viable product,
  4. Ongoing iteration and optimization.

Each stage is a separate engagement with clear deliverables. Therefore, I do not commit to the whole project but only to the first phase. That makes estimating and managing scope creep a lot easier.

For example, you only need to define the scope of the next stage. That allows you to manage scope creep better because you can accommodate it as you define the next stage, once the previous stage has been completed.

Instead of estimating an entire program of work, you are estimating the next project in that program. Also, you can use the previous stage to help you estimate more accurately.

Each stage helps to define and estimate the following stage, beginning with discovery.

1. Discovery

In the discovery phase, I work with stakeholders to validate the project. Depending on the project’s overall size, this could be as simple as a couple of meetings or could be an entire program of work.

Typically it includes elements such as:

  • carrying out user research;
  • analyzing the competition;
  • identifying key performance indicators;
  • defining what success looks like;
  • understanding constraints;
  • collating stakeholder opinions.

The idea is that the discovery phase will deliver a more informed definition of the project, including user needs, business objectives, and what needs to be created.

Most importantly, it will validate that the project will deliver the required value.

We can then use this deliverable to define and estimate the work required in an alpha phase. Doing so makes our estimates more accurate and also adjusts the scope based on what we have learned.

2. Alpha

The alpha phase is where we define how the digital service (whether that is a web app or site) will work and ensure that users have a positive experience using it.

That is typically done through the creation of a prototype. On smaller projects, this might be nothing more than some design mockups. On larger projects, it could be a functional prototype that people can try out.

Whatever the case, the idea is to visualize the digital service you are building.

We do this for three reasons.

  • First, a visualization will ensure all stakeholders have a shared vision of what you are creating. A document can be interpreted in many different ways, but that is much harder to do with a prototype.
  • Second, a prototype will make it much easier to identify anything that might have been overlooked, helping you avoid scope creep further down the line when it is more expensive to address.
  • Finally, if we have something tangible, we can test it with users to ensure that it will be fit for purpose before we go to the expense of building the real thing.

If it tests poorly, we still have room before the next phase to adapt without breaking the budget or messing up the timeline.

As with the discovery phase, we can use the alpha to estimate the work required in the next stage. Having a visualization of what needs building makes estimating the work required a lot easier for all of the stakeholders involved. They can see what they are being asked to build.

In addition, we can use the lessons learned from testing the alpha to adapt what we are going to create, once again creating space for changes in scope without derailing the project.

Once we have the alpha, we can then move confidently into the build, knowing that we will create the right thing and that users will respond positively to it.

3. Minimum Viable Product

I used to refer to this stage as the ‘build’. However, I found that stakeholders associated finishing the build with completing the project. In reality, digital services are never done, as they need to be constantly iterated upon to ensure they are as effective as possible.

To avoid this problem, I started to refer to this stage as the minimum viable product (MVP). In this stage, we build and launch the initial version of the digital service.

By referring to it as the minimum viable product, we emphasize that there will be a post-launch iteration. That provides us with a way of dealing with scope creep and unanticipated complexity by pushing it back until post-launch. That ensures the project stays on track and maintains its momentum.

Inevitably during the build, we will shelve some things until post-launch. These elements then become the basis for defining our final stage, enabling us to make an initial estimate for post-launch optimization.

4. Ongoing Iteration and Optimization

The post-launch phase deals with the functionality, complexity, and other issues that we did not address in the MVP. This list of improvements is relatively easy to scope by this point and can be estimated with reasonable accuracy.

However, in addition to this work, there should be an ongoing process of monitoring, iteration, and testing that further refines the digital services’ effectiveness.

Estimating how much of this work you undertake should be based on the size and complexity of the digital service. Your estimate should also be proportional to the investment in the rest of the project.

By breaking down your projects into these four phases and completing each separately, you will remove many of the challenges we face when using traditional project management approaches.

Why Breaking Projects Down Works

Four significant benefits come out of breaking projects down in this way:

  • Each phase is better defined.
    Because the deliverables of the previous phase define each stage, it means that there is a clear vision of direction. That helps stakeholders understand where things are going and avoids nasty surprises later.
  • Projects are more accurately estimated.
    For example, instead of having to guess how long it will take to deliver a significant, nebulous project with a substantial number of unknowns, you are only estimating the next phase and doing so based on the deliverables of the previous phase.
  • It results in better digital services.
    Because project ideas are validated and tested with users, you can be more confident that the final product will fit the purpose. It also allows room to adapt the scope and functionality between phases to ensure you deliver the best result possible.
  • It is a less risky approach.
    The company commissioning the digital service does not need to commit to the whole project upfront. If the discovery phase fails to validate the project’s viability, it can be dropped with minor loss. Equally, if the alpha prototype does not test well with users, it can be adapted before things become too expensive.

This final point is reassuring if an outside supplier is being used for the first time. Instead of signing up an agency for a substantial project without knowing if they can deliver, the client can engage them for a discovery phase to see what they are like. If they are good, they can continue working with them. If not, they can take the findings to another agency with nothing lost.

If you run an agency or are a freelancer, you might think this sounds like a bad idea. Understandably you would prefer to sign up a client for the whole project. However, I have avoided many competitive tenders with this approach because the client didn’t feel they were taking a risk on such a small initial investment. In addition, they did not feel the need to speak to various suppliers because if they did not like me, they could easily switch.

In addition, using this phased approach will make scoping and pricing your next fixed-price project a lot easier. Sure, it will not magically provide an estimate or prevent scope creep. However, it will make estimating more manageable because you are only scoping one part at a time. It will also allow you to work with scope creep, rather than trying to suppress it.

So my advice, whether you work in-house or externally, on big or small sites, is to stop trying to estimate and scope projects without breaking them down into phases. Instead, tackle one phase at a time and use what you learn to inform the next. If you do, you will estimate more accurately, have room for adapting based on what you learn, and find that project management is more straightforward.

Improving Core Web Vitals, A Smashing Magazine Case Study

“Why are my Core Web Vitals failing?” Many developers have been asking themselves that question lately. Sometimes it’s easy enough to find the answer, and the site just needs to invest in performance. Sometimes, though, it’s a little trickier: despite thinking your site is great on the performance front, for some reason it still fails. That’s what happened to our very own smashingmagazine.com, and figuring out, and fixing, the issue took a bit of digging.

A Cry For Help

It all started with a series of tweets last March with a cry for help:

Well, this piqued my interest! I’m a big fan of Smashing Magazine and am very interested in web performance and the Core Web Vitals. I’ve written a few articles here before on Core Web Vitals, and am always interested to see what’s in their annual Web Performance Checklist. So, Smashing Magazine knows about web performance, and if they were struggling, then this could be an interesting test case to look at!

A few of us made some suggestions on that thread as to what the problem might be after using some of our favorite web performance analysis tools like WebPageTest or PageSpeed Insights.

Investigating The LCP Issue

The issue was that LCP was too slow on mobile. LCP, or Largest Contentful Paint, is one of the three Core Web Vitals that you must “pass” to get the full search ranking boost from Google as part of their Page Experience Update. As its name suggests, LCP aims to measure when the largest content of the page is drawn (or “painted”) to the screen. Often this is the hero image or the title text. It is intended to measure what the site visitor likely came here to see.

Previous metrics measured variations of the first paint to screen (often this was a header or background color); incidental content that isn’t really what the user actually wants to get out of the page. The largest content is often a good indicator of what’s most important. And the “contentful” part of the name shows this metric is intended to ignore elements such as background colors; they may be the largest paint, but they are not “contentful”, so they don’t count for LCP and the algorithm instead tries to find something better.

LCP only looks at the initial viewport. As soon as you scroll down or otherwise interact with the page the LCP element is fixed and we can calculate how long it took to draw that element from when the page first started loading — and that’s your LCP!
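
If you want to see which element the browser has picked, a minimal sketch using the Performance API (run it early in the page, or paste it into the console and reload) looks roughly like this:

// Log each Largest Contentful Paint candidate as the browser reports it.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // The last candidate reported before the user interacts is the LCP element.
    console.log("LCP candidate:", entry.element, entry.startTime);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });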

There are many ways of measuring your Core Web Vitals, but the definitive way — even if it’s not the best way, as we’ll see soon — is in Google Search Console (GSC). From an SEO perspective, it doesn’t really matter what other tools tell you, GSC is what Google Search sees. Of course, it matters what your users experience rather than what some search engine crawler sees, but one of the great things about the Core Web Vitals initiative is that it does measure real user experience rather than what Google Bot sees! So, if GSC says you have bad experiences, then you do have bad experiences according to your users.

Search Console told Smashing Magazine that their LCP on mobile for most of their pages needed improving. A standard enough output of that part of GSC and pretty easily addressed: just make your LCP element draw faster! This shouldn’t take too long. Certainly not six months (or so we thought). So, first up is finding out what the LCP element is.

Running a failing article page through WebPageTest highlighted the LCP element:

Improving The LCP Image

OK, so the article author photo is the LCP element. The first instinct is to ask what could we do to make that faster? This involves delving into waterfalls, seeing when the image is requested, how long it takes to download, and then deciding how to optimize that. Here, the image was well optimized in terms of size and format (usually the first, and easiest option for improving the performance of images!). The image was served from a separate assets domain (often bad for performance), but it wasn’t going to be possible to change that in the short term, and Smashing Magazine had already added a preconnect resource hint to speed that up as best they could.

As I mentioned before, Smashing Magazine knows about web performance, had only recently worked on improving their performance, and had done everything right here, but still were failing. Interesting…

Other suggestions rolled in, including reducing load, delaying the service worker (to avoid contention), or investigating HTTP/2 priorities, but they didn’t have the necessary impact on the LCP timing. So we had to reach into our web performance toolbag for all the tips and tricks to see what else we could do here.

If a resource is critical to the page load, you can inline it, so it’s included in the HTML itself. That way, the page includes everything necessary to do the initial paint without delays. For example, Smashing Magazine already inlines critical CSS to allow a quick first paint but did not inline the author's image. Inlining is a double-edged sword and must be used with caution. It beefs up the page and means subsequent page views do not benefit from the fact that data is already downloaded. I’m not a fan of over-inlining because of this and think it must be used with caution.

So, it’s not normally recommended to inline images. However, here the image was causing us real problems, was reasonably small, and was directly linked to the page. Yes, if you read a lot of articles by that one author it’s a waste to redownload the same image multiple times instead of downloading the author's image once and reusing it, but in all likelihood, you’re here to read different articles by different authors.

There have been a few advances in image formats recently, but AVIF is causing a stir as it’s here already (at least in Chrome and Firefox), and it has impressive compression results over the old JPEG formats traditionally used for photographs. Vitaly didn’t want to inline the JPEG version of the author images, but investigated whether inlining the AVIF version would work. Compressing the author image using AVIF, and then base64-ing the image into the HTML, led to a 3 KB increase in the HTML page weight — which is tiny and so was acceptable.
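
For illustration, the base64 step can be done with a few lines of Node.js; the file path here is hypothetical and this is not Smashing’s actual build code:

// Read an AVIF file and turn it into a data URI that can be inlined in the HTML.
const fs = require("fs");

const avif = fs.readFileSync("./images/author.avif");
const dataUri = `data:image/avif;base64,${avif.toString("base64")}`;

// The data URI then becomes the value of the <img> src attribute in the template.
console.log(`Inlining this image adds roughly ${Math.round(dataUri.length / 1024)} KB to the page.`);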

Since AVIF was only supported in Chrome at the time (it came to Firefox after all this), and since Core Web Vitals is a Google initiative, it did feel slightly “icky” optimizing for a Google browser because of a Google edict. Chrome is often at the forefront of new feature support and that’s to be commended, but it always feels a little off when those two sides of its business impact each other. Still, this was a new standard image format rather than some proprietary Chrome-only format (even if it was only supported in Chrome initially), and was a progressive enhancement for performance (Safari users still get the same content, just not quite as fast), so with the addition of the AVIF twist Smashing took the suggestion and inlined the image and did indeed see impressive results in lab tools. Problem solved!

An Alternative LCP

So, we let that bed in and waited the usual 28 days or so for the Core Web Vitals numbers to all turn green… but they didn’t. They flitted between green and amber so we’d certainly improved things, but hadn’t solved the issue completely. After staying a long stretch in the amber “needs improvement” section, Vitaly reached out to see if there were any other ideas.

The image was drawing quickly. Not quite instantly (it still takes time to process an image after all) but as near as it could be. To be honest, I was running out of ideas but took another look with fresh eyes. And then an alternative idea struck me — were we optimizing the right LCP element? Authors are important of course, but is that really what the reader came here to see? Much as our egos would like to say yes, and that our beautiful shining mugs are much more important than the content we write, the readers probably don’t think that (readers, huh — what can you do!).

The reader came for the article, not the author. So the LCP element should reflect that, which might also solve the LCP image drawing issue. To do that we just put the headline above the author image, and increased the font size on mobile a bit. This may sound like a sneaky trick to fool the Core Web Vital SEO Gods at the expense of the users, but in this case, it helps both! Although many sites do try to go for the quick and easy hack or optimize for GoogleBot over real users, this was not a case of that and we were quite comfortable with the decision here. In fact, further tweaks remove the author image completely on mobile view, where there’s limited space and that article currently looks like this on mobile, with the LCP element highlighted:

Here we show the title, the key information about the article and the start of the summary — much more useful to the user, than taking up all the precious mobile screen space with a big photo!

And that’s one of the main things I like about the Core Web Vitals: they are measuring user experience.

To improve the metrics, you have to improve the experience.

And NOW we were finally done. Text draws much quicker than images so that should sort out the LCP issue. Thank you all very much and good night!

I Hate That CWV Graph In Google Search Console…

Again we were disappointed. That didn’t solve the issue and it wasn’t long before the Google Search Console graph returned to amber:

At this point, we should talk a little more about page groupings and Core Web Vitals. You might have noticed from the above graph that pretty much the whole graph swings at once. But there was also a core group of about 1,000 pages that stayed green most of the time. Why is that?

Well, Google Search Console categorizes pages into page groupings and measures the Core Web Vitals metrics of those page groupings. This is an attempt to fill in missing data for those pages that don’t get enough traffic to have meaningful user experience data. There’s a number of ways that they could have tackled this: they could have just not given any ranking boost to such pages, or maybe assumed the best and given a full boost to pages without any data. Or they could have fallen back to origin-level core web vitals data. Instead, they tried to do something more clever, which was an attempt to be helpful, but is in many ways also more confusing: Page groupings.

Basically, every page is assigned a page grouping. How they do this isn’t made clear, but URLs and technologies used on the page have been mentioned before. You also can’t see what groupings Google has chosen for each of your pages, and if their algorithm got it right, which is another frustrating thing for website owners, though they do give sample URLs for each different Core Web Vitals score below the graph in Google Search Console from which the grouping can sometimes be implied.

Page groupings can work well for sites like Smashing Magazine. For other sites, page groupings may be less clear, and many sites may just have one grouping. The Smashing site, however, has several different types of pages: articles, author pages, guides, and so on. If an article page is slow because the author image (the LCP element) is slow to load, then that will likely be the case for all article pages. And the fix will likely be the same for all article pages. So grouping them together makes sense (assuming Google can accurately figure out the page groupings).

However, where it can get confusing is when a page does get enough visitors to get its own Core Web Vitals score and it passes, but it’s lumped in with a failing group. You can call the CrUX API for all the pages in your site, see that most of them are passing, then be confused when those same pages are showing as failing in Search Console because they’ve been lumped into a group with failing URLs, and most of the traffic for that group is failing. I still wonder if Search Console should use page-level Core Web Vitals data when it has it, rather than always using the grouping data.

Anyway, that accounts for the large swings. Basically, all the articles (of which there are about 3,000) appear to be in the same page grouping (not unreasonably!) and that page grouping is either passing or failing. When it switches, the graph moves dramatically.

You can also get more detailed data on the Core Web Vitals through the CrUX API. This is available at an origin-level (i.e. for the whole site), or for individual URLs (where enough data exists), but annoyingly not at the page grouping level. I’d been tracking the origin level LCP using the CrUX API to get a more precise measure of the LCP and it showed a depressing story too:

We can see we’ve never really “solved” the issue and the amount of “Good” pages (the green line above) still hovered too close to the 75% pass rate. Additionally the p75 LCP score (the dotted line which uses the right-hand axis) never really moved far enough away from the 2500 milliseconds threshold. It was no wonder the pages passing and failing were flipping back and forth. A bit of a bad day, with a few more slow page loads, was enough to flip the whole page grouping into the “needs improvement” category. We needed something more to give us some headroom to be able to absorb these “bad days”.

At this point, it was tempting to optimize further. We know the article title was the LCP element so what could we do to further improve that? Well, it uses a font, and fonts have always been a bane of web performance so we could look into that.

But hold up a minute. Smashing Magazine WAS a fast site. Running it through web performance tools like Lighthouse and WebPageTest showed that — even on slower network speeds. And it was doing everything right! It was built as a static site generator so didn’t require any server-side generation to occur, it inlined everything for the initial paint so there were no resource loading constraints other than the HTML itself, it was hosted by Netlify on a CDN so should be near its users.

Sure, we could look at removing the font, but if Smashing Magazine couldn’t deliver a fast experience given all that, then how could anyone else? Passing Core Web Vitals shouldn’t be impossible, nor require you to only be on a plain site with no fonts or images. Something else was up here and it was time to find out a bit more about what was going on instead of just blindly attempting another round of optimizations.

Digging A Little Deeper Into The Metrics

Smashing Magazine didn’t have a RUM solution so instead we delved into the Chrome User Experience Report (CrUX) data that Google collects for the top 8 million or so websites and then makes that data available to query in various forms. It’s this CrUX data that drives the Google Search Console data and ultimately the ranking impact. We’d already been using the CrUX API above but decided to delve into other CrUX resources.
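
For reference, an origin-level query like the LCP tracking mentioned above can be made against the CrUX API roughly as follows (the API key is a placeholder):

// Fetch the p75 mobile LCP for an origin from the CrUX API.
async function fetchOriginLcp(origin, apiKey) {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        origin,
        formFactor: "PHONE",
        metrics: ["largest_contentful_paint"],
      }),
    }
  );
  const data = await response.json();
  // p75 LCP in milliseconds for mobile Chrome users of this origin.
  return data.record.metrics.largest_contentful_paint.percentiles.p75;
}

fetchOriginLcp("https://www.smashingmagazine.com", "YOUR_API_KEY").then(console.log);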

We used the sitemap and a Google Sheets script to look at all the CrUX data for the whole site where it was available (Fabian Krumbholz has since created a much more comprehensive tool to make this easier!) and it showed mixed results for pages. Some pages passed, while others, particularly older pages, were failing.

The CrUX dashboard didn’t really tell us much that we didn’t already know in this instance: the LCP was borderline, and unfortunately not trending down:

Digging into the other stats (TTFB, First Paint, Onload, DOMContentLoaded) didn’t give us any hints. There was, however, a noticeable increase in mobile usage:

Was this part of a general trend in mobile adoption? Could that be what was affecting the mobile LCP despite the improvements we’d done? We had questions but no answers or solutions.

One thing I wanted to look at was the global distribution of the traffic. We’d noticed in Google Analytics a lot of traffic from India to old articles — could that be an issue?

The India Connection

Country-level CrUX data isn’t available in the CrUX dashboard but is available in the BigQuery CrUX dataset, and running a query in there at the www.smashingmagazine.com origin level shows a wide disparity in LCP values (the SQL is included on the second tab of that link btw in case you want to try the same thing on your own domain). Based on the top 10 countries in Google Analytics we have the following data:

Country          Mobile LCP (% good)   % of traffic
United States    88.34%                23%
India            74.48%                16%
United Kingdom   92.07%                6%
Canada           93.75%                4%
Germany          93.01%                3%
Philippines      57.21%                3%
Australia        85.88%                3%
France           88.53%                2%
Pakistan         56.32%                2%
Russia           77.27%                2%

India traffic is a big proportion for Smashing Magazine (16%) and it is not meeting the target for LCP at an origin level. That could be the problem and certainly was worth investigating further. There was also the Philippines and Pakistan data with very bad scores but that was a relatively small amount of traffic.

At this point, I had an inkling of what might be going on here, and a potential solution, so got Smashing Magazine to install the web-vitals library to collect RUM data and post it back to Google Analytics for analysis. After a few days of collecting, we used the Web Vitals Report to give us a look at the data in ways we hadn’t been able to see before, in particular, the country-level breakdown:

And there it was. All the top countries in the analytics did have very good LCP scores, except one: India. Smashing Magazine uses Netlify which is a global CDN and it does have a Mumbai presence, so it should be as performant as other countries, but some countries are just slower than others (more on this later).

However, the mobile traffic for India was only just outside the 2500 limit, and it was only the second most visited country. Surely the good USA scores should have been enough to offset that? Well, the above two graphs show the countries ordered by traffic. But CrUX counts mobile and desktop traffic separately (and tablet btw, but no one ever seems to care about that!). What happens if we filter the traffic to just mobile traffic? And one step further — just mobile Chrome traffic (since only Chrome feeds CrUX and so only Chrome counts towards CWV)? Well then we get a much more interesting picture:

Country          Mobile LCP (% good)   % of mobile traffic
India            74.48%                31%
United States    88.34%                13%
Philippines      57.21%                8%
United Kingdom   92.07%                4%
Canada           93.75%                3%
Germany          93.01%                3%
Nigeria          37.45%                2%
Pakistan         56.32%                2%
Australia        85.88%                2%
Indonesia        75.34%                2%

India is actually the top mobile Chrome visitor, by quite some way — nearly triple the next highest visitor (USA)! The Philippines, with its poor score, has also shot up there to the number three spot, and Nigeria and Pakistan with their poor scores are also registering in the top 10. Now the bad overall LCP scores on mobile were starting to make sense.

While mobile has overtaken desktop as the most popular way to access the Internet in the so-called Western world, there is still a fair mix of mobile and desktop here — often tied to our working hours, where many of us are sat in front of a desktop. The next billion users may not be the same, and mobile plays a much bigger part in those countries. The above stats show this is even true for sites like Smashing Magazine that you might consider would get more traffic from designers and developers sitting in front of desktops while designing and developing!

Additionally, because CrUX only measures Chrome users, countries with more iPhones (like the USA) will have a much smaller proportion of their mobile users represented in CrUX, and so in Core Web Vitals, further amplifying the effect of those countries.

Core Web Vitals Are Global

Core Web Vitals don’t have a different threshold per country, and it doesn’t matter if your site is visited by different countries — it simply registers all Chrome users the same. Google has confirmed this before, so Smashing Magazine will not get the ranking boost for the good USA scores, and not get it for the India users. Instead, all users go into the melting pot, and if the scores for those page groupings do not meet the threshold, then the ranking signal for all users is affected.

Unfortunately, the world is not an even place. And web performance does vary hugely by country, and shows a clear divide between richer and poorer countries. Technology costs money, and many countries are more focused on getting their populations online at all, rather than on continually upgrading infrastructure to the latest and greatest tech.

The lack of other browsers (like Firefox or iPhones) in CrUX has always been known, but we’ve always considered it more of a blind spot for measuring Firefox or iPhone performance. This example shows the impact is much bigger, and for sites with global traffic, it skews the results significantly in favor of Chrome users, which often means poor countries, which often means worse connectivity.

Should Core Web Vitals Be Split By Country?

On the one hand, it seems unfair to hold websites to the same standard if the infrastructure varies so much. Why should Smashing Magazine be penalized or held to a higher standard than a similar website that is only read by designers and developers from the Western world? Should Smashing Magazine block Indian users to keep the Core Web Vitals happy? (I want to be quite clear here that this never came up in discussion, so please do take this as the author making the point and not a slight on Smashing!)

On the other hand, “giving up” on some countries by accepting their slowness risks permanently relegating them to the lower tier many of them are in. It’s hardly the average Indian reader of Smashing Magazine’s fault that their infrastructure is slower and in many ways, these are the people that deserve more highlighting and effort, rather than less!

And it’s not just a rich country versus poor country debate. Let’s take the example of a French website which is aimed at readers in France, funded by advertising or sales from France, and has a fast website in that country. However, if the site is read by a lot of French Canadians, but suffers because the company does not use a global CDN, then should that company suffer in French Google Search because it’s not as fast to those Canadian users? Should the company be “held to ransom” by the threat of Core Web Vitals and have to invest in the global CDN to keep those Canadian readers, and so Google happy?

Well, if a significant enough proportion of your viewers are suffering then that’s exactly what the Core Web Vitals initiative is supposed to surface. Still, it’s an interesting moral dilemma which is a side effect of the Core Web Vitals initiative being linked to an SEO ranking boost: money always changes things!

One idea could be to keep the limits the same but measure them per country. The French Google Search site could give a ranking boost to those users in France (because those users pass CWV for this site), while Google Search Canada might not (because they fail). That would level the playing field and measure sites per country, even if the targets are the same.

Similarly, Smashing Magazine could rank well in the USA and other countries where they pass, but be ranked against other Indian sites (where the fact they are in the “needs improvement” segment might actually still be better than a lot of sites there, assuming they all suffer the same performance constraints).

Sadly, I think that would have a negative effect, with some countries again being ignored while sites only justify web performance investment for more lucrative countries. Plus, as this example already illustrates, the Core Web Vitals are already complicated enough without bringing nearly 200 additional dimensions into play by having one for every country in the world!

So How To Fix It?

So we now finally knew why Smashing Magazine was struggling to pass Core Web Vitals, but what, if anything, could be done about it? The hosting provider (Netlify) already has the Mumbai CDN, which should therefore provide fast access for Indian users, so was this a Netlify problem to improve? We had optimized the site as much as possible, so was this just something they were going to have to live with? Well no, we now return to our idea from earlier: optimizing the web fonts a bit more.

We could take the drastic option of not delivering fonts at all. Or perhaps not delivering fonts to certain locations (though that would be more complicated, given the SSG nature of Smashing Magazine’s website). Alternatively, we could wait and load fonts in the front end, based on certain criteria, but that risked slowing down fonts for others while we assessed that criteria. If only there was some easy-to-use browser signal for when we should take this drastic action. Something like the SaveData header, which is intended exactly for this!

SaveData And prefers-reduced-data

SaveData is a setting that users can turn on in their browser when they really want to… well save data. This can be useful for people on restricted data plans, for those traveling with expensive roaming charges, or for those in countries where the infrastructure isn’t quite as fast as we’d like.

Users can turn on this setting in browsers that support it, and websites can then use this information to optimize their sites even more than usual. Perhaps returning lower-quality images (or turning images off completely!), or not using fonts. And the best thing about this setting is that you are acting upon the user’s request, and not arbitrarily making a decision for them (many Indian users might have fast access and not want a restricted version of the website!).

The Save Data information is available in two (soon to be three!) ways:

  1. A SaveData header is sent on each HTTP request. This allows dynamic backends to change the HTML returned.
  2. The NetworkInformation.saveData JavaScript API. This allows front-end scripts to check this and act accordingly.
  3. The upcoming prefers-reduced-data media query, allowing CSS to set different options depending on this setting. This is available behind a flag in Chrome, but not yet on by default while it finishes standardization.
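
As a quick illustration, here is a minimal front-end sketch (not Smashing’s actual code) of acting on the second signal, the JavaScript API; the save-data class name is just an assumption for this example. Server-side code would instead check the Save-Data request header (sent as Save-Data: on).

// Minimal sketch: check the Save Data preference on the front end.
// navigator.connection (the Network Information API) may be undefined
// in browsers that don't support it, so guard the access.
const prefersSaveData =
  navigator.connection && navigator.connection.saveData === true;

if (prefersSaveData) {
  // Hypothetical hook: let CSS and scripts opt out of heavy resources
  // (web fonts, high-resolution images, and so on).
  document.documentElement.classList.add('save-data');
}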

So the question is, do many of the Smashing Magazine readers (and particularly those in the countries struggling with Core Web Vitals) use this option, and is this something we can therefore use to serve them a faster site? Well, when we added the web-vitals script mentioned above, we also decided to measure that, as well as the Effective Connection Type. You can see the full script here. After a bit of time allowing it to collect data, we could display the results in a simple Google Analytics dashboard, along with the Chrome browser version:
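
For readers curious what such a collection script can look like, here is a rough, hypothetical sketch (the real script linked above does more) using the web-vitals library and the Network Information API, reporting each metric to Google Analytics with the Save Data and ECT values attached. The save_data and effective_connection_type fields are assumed custom parameters that you would need to map to dimensions in your own Analytics setup.

import { getCLS, getFID, getLCP } from 'web-vitals';

// Sketch: report each Core Web Vital to Google Analytics, tagged with
// the Save Data setting and Effective Connection Type.
// Assumes the gtag() global from Google Analytics is already on the page.
const connection = navigator.connection || {};

function sendToAnalytics({ name, value, id }) {
  gtag('event', name, {
    event_category: 'Web Vitals',
    event_label: id,
    // CLS is a small unitless score, so scale it up to keep an integer.
    value: Math.round(name === 'CLS' ? value * 1000 : value),
    save_data: connection.saveData ? 'on' : 'off',
    effective_connection_type: connection.effectiveType || 'unknown',
    non_interaction: true,
  });
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);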

So, the good news was that a large proportion of mobile Indian users (about two-thirds) did have this setting turned on. The ECT was less useful, with most users showing as 4g. I’ve argued before that this API has become less and less useful as most users are classified under this 4g setting. Plus, using this value effectively for initial loads is fraught with issues.

More good news: most users seem to be on an up-to-date Chrome, so they would benefit from newer features like the prefers-reduced-data media query when it becomes fully available.

Ilya from the Smashing team applied the JavaScript API version to their font-loader script so additional fonts are not loaded for these users. The Smashing folks also applied the prefers-reduced-data media query to their CSS so fallback fonts are used rather than custom web fonts for the initial render, but this will not take effect for most users until that setting moves out of the experimental stage.
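
As a rough illustration of the CSS side (this is not Smashing Magazine’s actual stylesheet, and the font names are assumptions), the idea looks something like this, bearing in mind the media query is still behind a flag in Chrome:

/* Sketch only: use the custom web font by default... */
body {
  font-family: "Custom Web Font", Georgia, serif; /* assumed names */
}

/* ...but fall back to system fonts when the user asks for reduced data. */
@media (prefers-reduced-data: reduce) {
  body {
    font-family: Georgia, "Times New Roman", serif;
  }
}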

I Love That Graph In Google Search Console

And did it work? Well, we’ll let Google Search Console tell that story, as it showed us the good news a couple of weeks later:

Additionally, since this was introduced in mid-November, the origin-level LCP score has steadily ticked downwards:

There’s still not nearly enough headroom to make me comfortable, but I’m hopeful that this will be enough for now, and will only improve when the prefers-reduced-data media query comes into play — hopefully soon.

Of course, a surge in traffic from mobile users with bad connectivity could easily be enough to flip the site back into the amber category, which is why you want that headroom. I’m sure the Smashing team will be keeping a close eye on their Google Search Console graph for a bit longer, but I feel we’ve made a best-effort attempt to improve the experience of users, so I am hopeful it will be enough.

Impact Of The User Experience Ranking Factor

The User Experience ranking factor is supposed to be only a small differentiator at the moment, so maybe we worried too much about a small issue that is, in many ways, outside of our control? If Smashing Magazine is borderline, and the impact is small, then maybe the team should worry about other issues instead of obsessing over this one? But, as I said, the Smashing team are knowledgeable about performance, so it’s understandable why they wanted to solve — or at the very least understand! — this issue.

So, was there any impact? Interestingly, we did see a large uptick in search impressions in the last week, at the same time as it flipped to green:

It’s since reverted to normal, so this may have been an unrelated blip, but it’s interesting nonetheless!

Conclusions

So, an interesting case study with a few important points to take away:

  • When RUM (including CrUX or Google Search Console) tells you there’s a problem, there probably is! It’s all too easy to try to compare your experiences and then blame the metric.
  • Implementing your own RUM solution gives you access to much more valuable data than the high-level data CrUX is intended to provide, which can help you drill down into issues and potentially give you more information about the devices your visitors are using.
  • Core Web Vitals are global, and that causes some interesting challenges for global sites like Smashing Magazine. This can make it difficult to understand CrUX numbers unless you have a RUM solution; perhaps Google Search Console or CrUX could help surface this information more.
  • Chrome usage also varies throughout the world, and on mobile is biased towards poorer countries where more expensive iPhones are less prevalent.
  • Core Web Vitals are getting much better at measuring User Experience. But that doesn’t mean every user has to get the same user experience — especially if they are telling you (through things like the Save Data option) that they would actually prefer a different experience.

I hope that this case study helps others in a similar situation, who are struggling to understand their Core Web Vitals. And I hope you can use the information here to make the experience better for your website visitors.

Happy optimizing!

Note: It should be noted that Vitaly, Ilya and others at the Smashing team did all the work here, and a lot more performance improvements were made that are not covered in the above article. I just answered a few queries for them on this specific problem over the last six months and then suggested this article might make an interesting case study for other readers to learn from.

A Guide To Shared Hosting: What Are The Benefits and is it Right For You?

Shared hosting is a popular choice when it comes to hosting websites. This article examines everything you need to know to determine if shared hosting is right for you.

We’ll take a look at what exactly shared hosting is, how it works, its advantages and disadvantages, and check out some shared hosting providers.

illustration showing advantages and disadvantages of shared hosting.
You’ll learn all about the advantages — and some disadvantages — of shared hosting.

This article isn’t meant to persuade you to switch to shared hosting; however, it will offer you insight about it that you can use to make a logical decision when it comes to hosting. That way, you can know all the benefits, limitations, costs, and more, so you’ll have a clear-headed idea of if shared hosting is right for you.

We’ll be going over:

And with that, let’s kick things off with…

What is Shared Hosting

Shared hosting is website hosting in which multiple domains share a single physical web server and its resources. It can also be referred to as virtual shared hosting.

Users can host a website alongside other domains and share the server resources, which comes at a lower cost and means sharing the same IP address as domains from other users.

They (users) get a section of a server shared with hundreds of users. Everyone using the shared hosting has access to features, such as disk space, FTP accounts, monthly traffic, and additional add-ons offered by the host.

The system resources are shared on-demand by users that are on the server.

Who Should Use Shared Hosting?

Shared hosting is best for small websites, blogs, and low-traffic websites that do not require advanced configurations or high bandwidth. It’s also good for websites that do not need a great degree of reliability and can manage with some downtime as a result of things happening on the server (we’ll be going over that later).

Packages for shared hosting are typically minimal in features and support, but much of the time users can upgrade for an additional cost.

Often, shared hosting is all that is required for small websites. However, if a website grows and starts developing more traffic, an upgrade to managed or dedicated server hosting is often needed.

As we touched on, shared hosting can work perfectly for budget-conscious new site owners and anyone with small, low-traffic sites due to all the benefits (that we’ll be discussing in detail later in this article).

How Does it Work?

We also mentioned this earlier, but shared hosting is where a single server hosts numerous sites. As for how many sites, that can range from several hundred to thousands, depending on the available hard drive space, processing speed, and RAM.

The hosting is provided by the same kind of machine as a dedicated server; however, numerous clients share its resources.

What shared hosting looks like
This is shared hosting in a nutshell.

Separate portions of the server contain each user account’s files and applications. Each has its own file directory tree, and users don’t have access to the root or to other users’ files.

As you’ll see, there are many…

Advantages of Shared Hosting

Image of a pair of people holding a hosting sign.
There’s a lot of goodies that shared hosting offers.

You’ve probably already compiled some thoughts and ideas on why shared hosting may be beneficial. That being said, there are a lot of advantages to taking a look at in further detail.

So, let’s check them out.

Cost-Effectiveness

A significant reason to use shared hosting is that it’s very affordable. It’s the most cost-effective solution since many people contribute to the server’s costs, and the hosting company’s costs are distributed amongst them.

You can get a site hosted for as cheap as a couple of dollars per month, depending on your hosting provider and terms.

Flexibility

If you have a new or small website, you can start with shared hosting and upgrade without major obstacles as your site grows.

No Bandwidth Limitations

Whichever web hosting provider you use, they will typically provide unlimited monthly bandwidth for your website. However, be sure not to overload the server, or your account can be suspended.

There are also situations where, if you use a lot of bandwidth and it affects the other domains, your hosting provider may tell you that you have to upgrade your account. However, that’s pretty uncommon.

Easy to Self-Manage without Technical Expertise

It’s straightforward to set up shared hosting. There are a lot of providers that offer a control panel for website management. Unlike VPS or dedicated servers, it’s simple to add FTP users, compress folders, change passwords, and more.

There are no expensive tools or complicated configurations to figure out.

Host Multiple Domains

For the option of hosting multiple domains, shared hosting has you covered. Most web hosts allow hosting multiple websites. That being said, there can be limitations, such as just allowing several domains per account. But, some may not have any restrictions unless your account gets tons of traffic.

Professionally Managed

When it comes to low maintenance, shared hosting fits the bill. Your hosting provider takes care of your server by ensuring basic server administrative tasks are functioning correctly. You should expect to have professional technical support for everything, including DDoS attacks, network outages, maintenance, and more.

Ability to Host Dynamic Websites

When you have a website that looks different according to who is browsing (e.g. Facebook), that is known as dynamic. CMSs and dynamic sites use programming languages (e.g. PHP), which can all be run on a shared server.

Built-in cPanel

Thanks to cPanel, you’re able to manage your web hosting tasks. With shared hosting, a built-in cPanel makes management easier, simplifying setting up emails, databases, addon domains, and more.

Easy Email Hosting and Setup

It’s vital to have an email associated with your domain these days. With shared hosting, having a cPanel on an affordable hosting plan allows you to add email addresses easily. Plus, you can forward emails to other services (e.g. Gmail). Many shared hosting services offer unlimited email accounts.

Disadvantages of Shared Hosting

A couple grappling at a hosting sign.
Anytime you share something, there can be some disadvantages…

Like anything, there are advantages – and disadvantages. That goes for shared hosting, too.

Here’s a look at some less desirable parts of shared hosting.

Security Issues

Unfortunately, shared hosting can be the most vulnerable type of hosting due to the fact hackers can use one domain to access the whole server – along with all of the other sites hosted on it.

Web hosts want the maximum number of domains hosted on a single server and often overlook security measures.

However, some web hosts mention that they implement domain isolation to prevent other domains from being affected if a particular site is hacked. And a lot of shared hosting companies step up their security more than others.

Speed

One of the major issues of shared hosting is speed. It’s due to many users sharing the server resources, RAM, and CPU – which can slow things down. Plus, if there’s a popular website on the server, it will affect all of the other sites due to the server resources and singular IP address.

This will vary with web hosts but be prepared for a slower site. However, this may not be a huge factor if you don’t get a lot of visitors or have a personal website where you don’t care much about speed.

Server Crashes

A common issue with shared hosting is server crashes. This occurs when sites use excessive CPU and RAM.

The more significant point is that hosting companies often aren’t quick to resolve the issue. It’s a common occurrence for them and not a high priority.

Of course, some good hosting providers do fix the problem and even ban a website that uses all the server resources. If a hosting provider has good reviews or good customer support for issues like this, it may be worth using them over a hosting provider that doesn’t resolve the issue quickly.

Fixing Problems Can Take Longer

This corresponds with server crashes because, as mentioned above, problems sometimes aren’t diagnosed quickly, which is frustrating for anyone who cares about uptime.

Things can take a while to settle, even if a hosting company has the staff to fix downtime or any issues.

Again, be sure to look at reviews for the hosting company and determine for yourself if the company is good at fixing problems promptly. (And, the hosting companies we will be mentioning in this article all have good support, so keep that in mind.)

When Not to Use Shared Hosting

As discussed, shared hosting uses a common server amongst many websites. The result is that shared hosting is set up to let popular frameworks (e.g. WordPress) run flawlessly with basic configurations.

So, if you want to use a custom site framework that’s not currently installed on the server, or are looking to optimize server resources for specific website tasks, shared hosting might not be suitable for you. You’d be better off with dedicated or VPS hosting since they’ll allow for more customization.

If you need root access to install or configure software, it’s probably best not to use shared hosting, since you’re very limited beyond the basics.

Shared Hosting Providers

Now that you know all about the benefits, disadvantages, and how shared hosting works, you may decide it’s right for you. Here are some recommendations for shared hosting providers.

These companies come with a solid reputation, support and are well established. There are links to each plan, so please refer to those for more information and pricing.

Also, please note that we have no relationship with these companies beyond thinking they offer a solid product. We do not — and will not — post affiliate links here at WPMU DEV (which makes us kinda unique!).

If you’d like to share an opinion or ask a question about either our view on shared hosting or these providers, please do so in the comments at the end of the article.

HostGator

HostGator logo
HostGator is one of the most popular shared hosting providers out there.

HostGator is a popular and well-known shared hosting company.

Along with shared hosting, it offers WordPress, VPS, and dedicated hosting – which is great if your site becomes too big. Additionally, there are Linux-based shared hosting packages.

Its shared hosting has month-to-month plans and also has longer terms available. Their rich shared hosting packages include unlimited storage, databases with plenty of room to grow, and email.

A big benefit of hosting with HostGator is their 24/7 customer support and reliability. If issues arise, they’re quick to fix the problem.

HostGator is a top-notch option for shared hosting, with a solid reputation for quality and reliable service at affordable prices. It’s highly recommended for shared hosting by numerous reviews and websites.

GoDaddy

GoDaddy logo
GoDaddy is no stranger to shared hosting.

GoDaddy is one of the cheapest options when it comes to shared hosting. It has a decent range of Linux-based shared web servers, plus it will include a free domain name if you sign up for a commitment that surpasses 12 months.

Also, you can always upgrade to VPS or dedicated hosting if your site starts to surpass what’s possible with shared hosting.

GoDaddy includes great uptime (rarely goes down), useful website-building tools, and options for servers. Plus, their 24/7 support is beneficial if anything goes wrong.

They also have a solid reputation and have been in business for a while. It’s a good option for starting with shared hosting and upgrading in the future.

Domain.com

domain.com logo.
Domain.com offers more than domains with their shared hosting packages.

Domain.com is known more for their domains. They have shared hosting as well for an affordable price. Their various plans are determined by how many sites you have and all feature unlimited storage.

They include some free perks, such as an SSL Certificate, and a free domain. Along with that, they have 24/7 network monitoring and DDoS protection.

Their customer-friendly control panel makes it easy to use. And if you ever have questions or issues, you can always contact their support.

Hostinger

Hostinger logo.
Hostinger is a great low-cost option for shared hosting.

Hostinger offers some low-cost plans, 24/7 customer service, and their uptime is fantastic. They have three Linux-powered shared web hosting plans, with low cost – but a high commitment for those low costs (one-year to four-year).

They also have WordPress, VPS, and Linux Servers hosting, so you can always upgrade if needed.

A few cons are the lack of dedicated hosting, and not every plan has a Windows option. Plus, though there is 24/7 support, that doesn’t include phone support, which might inconvenience some users.

All this being said, Hostinger has a solid reputation for shared hosting. It can be extremely affordable, just be prepared to make a commitment.

DreamHost

DreamHost logo.
DreamHost can be a dream come true (sorry!) with their shared hosting.

DreamHost is another great option when it comes to affordable shared hosting. It’s geared more towards beginners, considering its support, its one-click installation feature, and the tools that make it extremely easy to get up and running.

They have two shared hosting plans (Shared Starter and Shared Unlimited). They feature unlimited monthly data transfers and storage. That being said, you’ll need to upgrade to Shared Unlimited for email.

As mentioned, it’s easy to use DreamHost with their domain-management tools. The custom control panel gives you admin access to all of your Dreamhost products. From there, it’s easy to update domain information, adjust settings, add users, and more.

Plus, they have a 100% uptime guarantee. They have multiple data centers, redundant cooling, emergency generators, and are monitoring things constantly to ensure everything is running smoothly.

Bluehost

bluehost logo.
Bluehost’s ease of use makes setting up a shared hosting site a breeze.

Bluehost is another company that includes a very easy-to-use website builder. It also features resource protection, so your website’s performance stays protected and unaffected even with other sites on the shared server.

They also have terms of use, but you can upgrade to VPS or dedicated hosting if needed. There is also 24/7 support, custom themes, WordPress integration, and a free domain for a year.

As one of PC Editors’ Choice picks for hosting, they mention, “The ever-evolving Bluehost is a dependable web host that makes site creation a breeze, especially for WordPress hosting.” That being said, it’s worth trying out Bluehost for shared hosting – especially if you’re using WordPress.

A2 Hosting

A2 Hosting logo.
We give A2 Hosting an ‘A’ for their shared hosting.

A2 Hosting stands out for its various packages, uptime performance, and great customer service.

It has four tiers of Linux-based shared web hosting. The price varies by storage, emails, and domains.

Each of their servers is optimized for speed and they limit how many clients can operate on each server. They mention that they have 20-times faster page load times than most competitors.

They have a team of experts to help with any account migration, tout a 99.9% uptime commitment, and offer 24/7 support.

InMotion

inmotion logo.
InMotion is another easy-to-use platform for shared hosting.

InMotion has a reputation for excellent uptime, ease of use, and flexibility. It also offers many hosting types, so if you ever need to upgrade out of shared hosting, you can.

It has four Linux-based shared hosting plans ranging from 10GB of SSD storage up to 200GB. The plans are determined by websites, email accounts, and data transfers. They all include free SSL and a free domain.

They have a good reputation for uptime and have 24/7 technical support. Plus, it comes with an easy-to-manage cPanel, which any beginner can appreciate.

iPage

iPage logo.
iPage is cheap, yet offers excellent shared hosting features.

iPage is a shared hosting service with extremely affordable pricing plans that vary by term. They power over one million websites and have been in business since 1998.

Some of their benefits include unlimited email addresses, unlimited domains, and 24/7 support. They also include free SSL certificates, a free domain for a year, and a free website builder.

You can also upgrade to VPS and dedicated server hosting if your website becomes too big for shared hosting.

Hosting Shouldn’t Be Spared When Shared

As you can see, there are many benefits to shared hosting. It boils down to this: shared hosting is best for beginners, smaller websites, and anyone on a budget.

You shouldn’t spare your hosting quality if using shared hosting. Use a suitable shared hosting company (like the ones we mentioned), and you should expect quality – even if there may be some occasional hiccups (e.g. downtime). Plus, use a company that makes it easy to upgrade, for when your website grows.

You’re welcome to use this article to reference shared hosting on your website. Or, ahem, with all of this talk about sharing – feel free to share it.

PostgreSQL versus MySQL performance

When you need a database, you typically choose one according to its performance and/or feature set. However, a "highly scientific performance test" might not say much about your application's resulting end performance. For these reasons, I chose to create a "fully fledged app" using Magic and Hyperlambda, and I chose to create it twice: once using Oracle's Sakila database and then once more using PostgreSQL's Pagila database.

The reason this is interesting is that the Pagila database is more or less an exact "port" of the Sakila database, including every row in its tables. This allows us to measure the performance differences between these two databases and end up with a result resembling the performance difference we might expect in our own application.

New Hummingbird Update Ushers In Unified Notifications, a New Wizard, and More!

Our well-loved page speed optimization suite just got even suite-er with a new version release!

Hummingbird is here to make your holiday brighter. We’ve added a slew of new features and improvements, from a browser cache wizard (which automates the process), to a brand new Notifications dashboard widget―with all of your reports and notifiers combined.

Keep reading to check out all the newness, or jump ahead using these links:

Enough suspense… let’s tear into this package. 😁

True Blue and New

Hummingbird has had Notifications, Performance Reports and state of the art Caching features for some time now, but we’ve upped the ante on all of them.

We’re gonna dive into the details, starting with…

New, More Useful and Unified Notifications Module

Notifications have been revamped, and are more functional than ever.

You can use notifications to automate your workflow, sending reports and notification messages directly to any inbox, and at the schedule of your choosing.

There are four types of Notifications:

  • Performance Test – schedule and send regular performance tests of your site
  • Uptime Reports – send reports of any of your sites’ downtime in a given timeframe
  • Database Cleanup – schedule and send reports for regular cleanups of your database {NEW}
  • Uptime Notifications – instant notification of any downtime to allow for timely action

It used to be that you needed to visit each specific feature to set up or view notifiers. Now they are all grouped in one section, which saves time and effort. And looks spiffy, too!

You can access Notifications two different ways, from the WordPress dashboard main menu:

  • Hummingbird, then scroll down (last module on right)
  • Hummingbird > Notifications
Hummingbird notification module, two ways.
Two options for getting to the Notifications module.

Click on the Manage Notifications button to access settings.

To create a new Notification for any report type, click the plus + icon from the desired notification type row. This will open the scheduling wizard.

Enabling notifications in Hummingbird
Enabling notifications in Hummingbird.

Configuring is fast and easy through the Notifications wizard.

Scheduling options for Performance Tests, Uptime Reports, and Database Cleanup types are identical. You can set the Frequency (daily, weekly, or monthly), the Day, and the Time you’d like the report to run.

Scheduling for Uptime Notifications has a single setting only: Threshold. This enables you to trigger email notifications based on the amount of time your site is down (5, 10, or 30 minutes).

Threshold uptime notifications
Choose what Threshold time will trigger email notifications when your site is down.

Note that any recipients for Uptime Notifications will receive an email invitation to subscribe, and must confirm that subscription (via the clickable link) in order to receive them.

Notification confirm subscription
Subscribers who’ve not yet accepted your Uptime invite (pending) will have a stopwatch icon.

For notifications you’ve already set up, you will see a blue Enabled status and a cog icon instead of a plus sign. Click the cog icon to Reconfigure―or Disable―any notification, and simply revise at will using the same wizard you originally set it up through.

Notification configure
Reconfiguring is as easy as a few clicks.

Performance Reports Get a Greenlight for Subsites*

Performance reports have been a part of prior Hummingbird versions. However, now this option is also available in every subsite in multisite installs.

You can schedule regular performance tests of your site and send reports of the data to any recipients you desire. These can be for desktop only, mobile only, or both. You can also specify the test Results you want included in your reports: Score Metrics, Audits, and Historic Field Data.

Sample Performance report delivered via email.

Schedule Database Cleanups, & Get Reports*

This feature will allow you to schedule regular cleanups of your database, and send reports with this information to recipients of your choosing.

The Database Cleanup settings enable you to specify which Tables should be included in your scheduled cleanups and corresponding reports. The options are:

  • Post Revisions
  • Draft Posts
  • Trashed Posts
  • Spam Comments
  • Trashed Comments
  • Expired Transients
  • All Transients

To set up, navigate to the Hummingbird Notifications module. Then from the Database Cleanup row, click on the plus + sign.

From the Scheduling window, select Frequency, Day, and Time, then click the Continue button.

Database cleanup schedule
Choose your desired options in the Scheduling window.

Next is the Recipients window. Add people one at a time―from site users, or invite by email―then click the Add Recipient button after each. Once your recipient list is complete, click the Continue button.

Database cleanup recipients
Add Recipients from site users or email addresses.

Finally we have the Customize window. Check the boxes for the Tables you’d like to include (likewise, uncheck any to exclude), then click the Activate button.

Database cleanup customize
You get to decide which Tables you’d like to be in the Database Cleanup report.

That’s it! You can view or revise your settings anytime from the Notifications section of the dashboard module in Hummingbird.

Click here for full documentation on Notifications in Hummingbird.

*Reports are available in the Pro version of Hummingbird only.

Browser Caching Setup is a Wiz with This Automation

The new Browser Caching wizard will get you set up properly, and is a cinch to use.

If your site is hosted with WPMU DEV, Browser Caching has already been configured and no further action is needed.

To access the wizard, navigate to Hummingbird > Caching > Browser Caching, then click on the Configure button.

Browser caching wizard
Hummingbird’s Browser Caching wizard makes setup a breeze.

The first step is Choosing Server Type.

Hummingbird will automatically detect the server type used by your site; however, you can manually select another option if need be. (If you have CloudFlare integration enabled, the wizard will skip this step.)

Browser cache server type
Server type is automatically detected in the browser caching wizard.

Click the Next button to proceed to step two, Set Expiry Time.

Here you can select All file types or Individual file types, along with more than a dozen incremental time frames ranging from one hour to one year.

Browser cache expiry time
It’s ideal to Set Expiry Time to the longest value possible.

Click the Next button to proceed to step three: Add Rules for your server type.

Browser cache applying rules
If you have enabled CloudFlare integration, the wizard will apply the rules directly to your CF account for this site, and no further action will be required.

On Apache servers, the wizard will attempt to apply the browser caching rules automatically to the .htaccess file.

However, if your .htaccess file is not writable or if the wizard encounters unexpected issues, you’ll be prompted to copy & paste the code generated by the wizard. Once done, click the Check Status button.

If your site is running on any of the below servers (and not hosted by WPMU DEV), Hummingbird cannot automatically configure browser cache, so you will need to set it up manually. Contact your hosting provider if you’re not sure what server your site is running on.

Guidance for setting up other server options follows:

  • OpenLitespeed – see this OpenLiteSpeed guide
  • Nginx – manually add the required rules to your nginx.conf file, usually located at /etc/nginx/nginx.conf or /usr/local/nginx/nginx.conf
  • IIS7 (or above) – manually add the required rule following this Microsoft guide

At any time during this process, you can go back a step by clicking the Previous button, or click the Quit Setup button to exit the wizard.

Click here for full documentation on Adding Rules in Browser Caching.

WP Ultimo Now Has Page Cache Integration

Hoorah for WP Ultimo users! This new feature is automatic, so you won’t see the settings for it, but rest assured we’ve got you covered.

Hummingbird automatically works behind the scenes to ensure cache is cleared on domain-mapped subsites, as it should be.

One Last Enhancement (Hint: AO)

There is one last enhancement in this version release…

You can now change the file location for Asset Optimization in multisite.

This allows you to choose where Hummingbird should store your modified assets. This is helpful for those who prefer to store assets in custom directories, perhaps due to internal company policies, or in order to change the hummingbird-assets path for whitelabel purposes.

To access this setting, navigate to:

Hummingbird > Asset Optimization > Settings > File Location

Asset optimizations file location
Choose where you want your modified assets to be stored.

If you’re using WPMU DEV’s CDN, this feature is inactive.

Keep Your Site Humming

The new features and improvements in this latest version of Hummingbird continue to make it a must-have tool. 

And, we already have the next set of new features in the works, which as of right now includes:

  • Adding font optimization to automatic mode in asset optimization module
  • Improved UI for manual mode in asset optimization
  • New onboarding wizard
  • Safe Mode in asset optimization
  • Many more improvements to asset optimization

We always keep a running list of upcoming features in our roadmap so you can take a peek anytime and see what’s in the pipeline.

The free version of Hummingbird is feature-packed, and definitely worth implementing on your sites. But if you want to really blow your speed out of the water and switch on more premium features, check out Hummingbird Pro.

You will see a vast improvement in load times once you install and activate either version of Hummingbird, and can continue to tweak your settings for optimal performance.

Prepare to fly!

Test Your Product on a Crappy Laptop

There is a huge and ever-widening gap between the devices we use to make the web and the devices most people use to consume it. It’s also no secret that the average size of a website is huge, and it’s only going to get larger.

What can you do about this? Get your hands on a craptop and try to use your website or web app.

Craptops are cheap devices with lower-power internals. They oftentimes come with all sorts of third-party apps preinstalled as a way to offset their cost—apps like virus scanners that are resource-intensive and difficult to remove. They’re everywhere, and they’re not going away anytime soon.

As you work your way through your website or web app, take note of:

  • what loads slowly,
  • what loads so slowly that it’s unusable, and
  • what doesn’t even bother to load at all.

After that, formulate a plan about what to do about it.

The industry average

At the time of this post, the most common devices used to read CSS-Tricks are powerful, modern desktops, laptops, tablets, and phones with up-to-date operating systems and plenty of computational power.

Granted, not everyone who makes websites and web apps reads CSS-Tricks, but it is a very popular industry website, and I’m willing to bet its visitors are indicative of the greater whole.

In terms of performance, the qualities we can note from these devices are:

  • powerful processors,
  • generous amounts of RAM,
  • lots of storage space,
  • high-quality displays, and most likely a
  • high-speed internet connection

Unfortunately, these qualities are not always found in the devices people use to access your content.

Survivor bias

British soldiers in World War I were equipped with a Brodie helmet, a steel hat designed to protect its wearer from overhead blasts and shrapnel while conducting trench warfare. After its deployment, field hospitals saw an uptick in soldiers with severe head injuries.

A grizzled British soldier smiling back at the camera, holding a Brodie helmet with a large hole punched in it. Black and white photograph.
Source: History Daily

Because of the rise in injuries, British command considered going back to the drawing board with the helmet’s design. Fortunately, a statistician pointed out that the dramatic rise in hospital cases was because people were surviving injuries that previously would have killed them—before the introduction of steel the British Army used felt or leather as headwear material.

Survivor bias is the logical error that focuses on those who made it past a selection process. In the case of the helmet, it’s whether you’re alive or not. In the case of websites and web apps, it’s if a person can load and use your content.

Lies, damned lies, and statistics

People who can’t load your website or web app don’t show up as visitors in your analytics suite. This is straightforward enough.

However, the “use” part of “load and use your content” is the important bit here. A certain percentage of devices that try to access your product will be able to load enough of it to register a hit, but then bounce because the experience is so terrible it is effectively unusable.

Yes, I know analytics can be more sophisticated than this. But through the lens of survivor bias, is this behavior something your data is accommodating?

Blame

It’s easy to go out and get a cheap craptop and feel bad about a slow website you have no control over. The two real problems here are:

  1. Third-party assets, such as the very analytics and CRM packages you use to determine who is using your product and how they go about it. There’s no real control over the quality or amount of code they add to your site, and setting up the logic to block them from loading their own third-party resources is difficult to do.
  2. The people who tell you to add these third-party assets. These people typically aren’t aware of the performance issues caused by the ask, or don’t care because it’s not part of the results they’re judged by.

What can we do about these two issues? Tie abstract, one-off business requests into something more holistic and personal.

Bear witness

I know of organizations who do things like “Testing Tuesdays,” where moderated usability testing is conducted every Tuesday. You could do the same for performance, even thread this idea into existing usability testing plans—slow websites aren’t usable, after all.

The point is to construct a regular cadence of seeing how real people actually use your website or web app, using real world devices. And when I say real world, make sure it’s not just the average version of whatever your analytics reports says.

Then make sure everyone is aware of these sessions. It’s a powerful thing to show a manager someone trying to get what they need, but can’t because of the choices your organization has made.

Craptop duty

There are roughly 260 work days in a year. That’s 260 chances to build some empathy by having someone on your development, design, marketing, or leadership team use the craptop for a day.

You can use the Windows Subsystem for Linux to run most development tooling. Most other apps I’m aware of in the web-making space have a Windows installer, or can run from a browser. That should be enough to do what you need to do. And if you can’t, or it’s too slow to get done at the pace you’re accustomed to, well, that’s sort of the point.

Craptop duty, combined with usability testing with a low power device, should hopefully be enough to have those difficult conversations about what your website or web app really needs to load and why.

Don’t tokenize

The final thing I’d like to say is that it’s easy to think that the presence of a lower power device equals the presence of an economically disadvantaged person. That’s not true. Powerful devices can become circumstantially slowed by multiple factors. Wealthy individuals can, and do, use lower-power technology.

Perhaps the most important takeaway is poor people don’t deserve an inferior experience, regardless of what they are trying to do. Performant, intuitive, accessible experiences on the web are for everyone, regardless of device, ability, or circumstance.

3D CSS Flippy Snaps With React And GreenSock

Naming things is hard, right? Well, “Flippy Snaps” was the best thing I could come up with. 😂 I saw an effect like this on TV one evening and made a note to myself to make something similar.

Although this isn’t something I’d look to drop on a website any time soon, it’s a neat little challenge to make. It fits in with my whole stance on “Playfulness in Code” to learn. Anyway, a few days later, I sat down at the keyboard, and a couple of hours later I had this:

3D CSS Flippy Snaps ✨

Tap to flip for another image 👇

⚒️ @reactjs && @greensock
👉 https://t.co/Na14z40tHE via @CodePen pic.twitter.com/nz6pdQGpmd

— Jhey 🐻🛠️✨ (@jh3yy) November 8, 2021

My final demo is a React app, but we don’t need to dig into using React to explain the mechanics of making this work. We will create the React app once we’ve established how to make things work.

Note: Before we get started. It’s worth noting that the performance of this demo is affected by the grid size and the demos are best viewed in Chromium-based browsers.

Let’s start by creating a grid. Let’s say we want a 10 by 10 grid. That’s 100 cells (This is why React is handy for something like this). Each cell is going to consist of an element that contains the front and back for a flippable card.

<div class="flippy-snap">
  <!-- 100 of these -->
  <div class="flippy-snap__card flippy-card">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
</div>

The styles for our grid are quite straightforward. We can use display: grid and use a custom property for the grid size. Here we are defaulting to 10.

.flippy-snap {
  display: grid;
  grid-gap: 1px;
  grid-template-columns: repeat(var(--grid-size, 10), 1fr);
  grid-template-rows: repeat(var(--grid-size, 10), 1fr);
}

We won’t use grid-gap in the final demo, but it’s handy for seeing the cells more easily whilst developing.

See the Pen 1. Creating a Grid by JHEY

Next, we need to style the sides of our cards and display images. We can do this by leveraging inline CSS custom properties. Let’s start by updating the markup. We need each card to know its x and y position in the grid.

<div class="flippy-snap">
  <div class="flippy-snap__card flippy-card" style="--x: 0; --y: 0;">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
  <div class="flippy-snap__card flippy-card" style="--x: 1; --y: 0;">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
  <!-- Other cards -->
</div>

For the demo, I'm using Pug to generate this for me. You can see the compiled HTML by clicking “View Compiled HTML” in the demo.

- const GRID_SIZE = 10
- const COUNT = Math.pow(GRID_SIZE, 2)
.flippy-snap
  - for(let f = 0; f < COUNT; f++)
    - const x = f % GRID_SIZE  
    - const y = Math.floor(f / GRID_SIZE)
    .flippy-snap__card.flippy-card(style=`--x: ${x}; --y: ${y};`)
      .flippy-card__front
      .flippy-card__rear

Then we need some styles.

.flippy-card {
  --current-image: url("https://random-image.com/768");
  --next-image: url("https://random-image.com/124");
  height: 100%;
  width: 100%;
  position: relative;
}
.flippy-card__front,
.flippy-card__rear {
  position: absolute;
  height: 100%;
  width: 100%;
  backface-visibility: hidden;
  background-image: var(--current-image);
  background-position: calc(var(--x, 0) * -100%) calc(var(--y, 0) * -100%);
  background-size: calc(var(--grid-size, 10) * 100%);
}
.flippy-card__rear {
  background-image: var(--next-image);
  transform: rotateY(180deg) rotate(180deg);
}

The rear of the card gets its position using a combination of rotations via transform. But, the interesting part is how we show the image part for each card. In this demo, we are using a custom property to define the URLs for two images. And then we set those as the background-image for each card face.

But the trick is how we define the background-size and background-position. Using the custom properties --x and --y, we multiply the value by -100%. And then we set the background-size to --grid-size multiplied by 100%. This displays the correct part of the image for a given card.
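
To make that math concrete, here is a small, purely illustrative helper that logs the values a card would receive in a 10×10 grid. The backgroundFor function is not part of the demo; it only mirrors the calc() expressions above.

// Hypothetical helper mirroring the CSS calc() expressions above
const gridSize = 10
const backgroundFor = (x, y) => ({
  backgroundPosition: `${x * -100}% ${y * -100}%`,
  backgroundSize: `${gridSize * 100}%`,
})

console.log(backgroundFor(3, 2))
// { backgroundPosition: '-300% -200%', backgroundSize: '1000%' }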

See the Pen 2. Adding an Image by JHEY

You may have noticed that we had --current-image and --next-image. But, currently, there is no way to see the next image. For that, we need a way to flip our cards. We can use another custom property for this.

Let’s introduce a --count property and set a transform for our cards:

.flippy-snap {
  --count: 0;
  perspective: 50vmin;
}
.flippy-card {
  transform: rotateX(calc(var(--count) * -180deg));
  transition: transform 0.25s;
  transform-style: preserve-3d;
}

We can set the --count property on the containing element. Scoping means all the cards can pick up that value and use it to transform their rotation on the x-axis. We also need to set transform-style: preserve-3d so that we see the back of the cards. Setting a perspective gives us that 3D perspective.

This demo lets you update the --count property value so you can see the effect it has.

See the Pen 3. Turning Cards by JHEY

At this point, you could wrap it up there and set a simple click handler that increments --count by one on each click.

const SNAP = document.querySelector('.flippy-snap')
let count = 0
// Pre-increment so the first click bumps --count to 1 and actually flips the cards
const UPDATE = () => SNAP.style.setProperty('--count', ++count)
SNAP.addEventListener('click', UPDATE)

Remove the grid-gap and you’d get this. Click the snap to flip it.

See the Pen 4. Boring Flips by JHEY

Now that we have the basic mechanics worked out, it’s time to turn this into a React app. There’s a bit to break down here.

const App = () => {
  const [snaps, setSnaps] = useState([])
  const [disabled, setDisabled] = useState(true)
  const [gridSize, setGridSize] = useState(9)
  const snapRef = useRef(null)

  const grabPic = async () => {
    const pic = await fetch('https://source.unsplash.com/random/1000x1000')
    return pic.url
  }

  useEffect(() => {
    const setup = async () => {
      const url = await grabPic()
      const nextUrl = await grabPic()
      setSnaps([url, nextUrl])
      setDisabled(false)
    }
    setup()
  }, [])

  const setNewImage = async count => {
    const newSnap = await grabPic()
    setSnaps(
      count.current % 2 !== 0 ? [newSnap, snaps[1]] : [snaps[0], newSnap]
    )
    setDisabled(false)
  }

  const onFlip = async count => {
    setDisabled(true)
    setNewImage(count)
  }

  if (snaps.length !== 2) return <h1 className="loader">Loading...</h1>

  return (
    <FlippySnap
      gridSize={gridSize}
      disabled={disabled}
      snaps={snaps}
      onFlip={onFlip}
      snapRef={snapRef}
    />
  )
}

Our App component handles grabbing images and passing them to our FlippySnap component. That’s the bulk of what’s happening here. For this demo, we’re grabbing images from Unsplash.

const grabPic = async () => {
  const pic = await fetch('https://source.unsplash.com/random/1000x1000')
  return pic.url
}

// Initial effect grabs two snaps to be used by FlippySnap
useEffect(() => {
  const setup = async () => {
    const url = await grabPic()
    const nextUrl = await grabPic()
    setSnaps([url, nextUrl])
    setDisabled(false)
  }
  setup()
}, [])

If there aren’t two snaps to show, then we show a “Loading...” message.

if (snaps.length !== 2) return <h1 className="loader">Loading...</h1>

If we are grabbing a new image, we need to disable FlippySnap so we can’t spam-click it.

<FlippySnap
  gridSize={gridSize}
  disabled={disabled} // Toggle a "disabled" prop to stop spam clicks
  snaps={snaps}
  onFlip={onFlip}
  snapRef={snapRef}
/>

We’re letting App dictate the snaps that get displayed by FlippySnap and in which order. On each flip, we grab a new image, and depending on how many times we’ve flipped, we set the correct snaps. The alternative would be to set the snaps and let the component figure out the order.

const setNewImage = async count => {
  const newSnap = await grabPic() // Grab the snap
  setSnaps(
    count.current % 2 !== 0 ? [newSnap, snaps[1]] : [snaps[0], newSnap]
  ) // Set the snaps based on the current "count" which we get from FlippySnap
  setDisabled(false) // Enable clicks again
}

const onFlip = async count => {
  setDisabled(true) // Disable so we can't spam click
  setNewImage(count) // Grab a new snap to display
}

How might FlippySnap look? There isn’t much to it at all!

const FlippySnap = ({ disabled, gridSize, onFlip, snaps }) => {
  const CELL_COUNT = Math.pow(gridSize, 2)
  const count = useRef(0)
  const containerRef = useRef(null) // ref for the container button, used here and by GSAP later

  const flip = e => {
    if (disabled) return
    count.current = count.current + 1
    if (onFlip) onFlip(count)
  }

  return (
    <button
      className="flippy-snap"
      ref={containerRef}
      style={{
        '--grid-size': gridSize,
        '--count': count.current,
        '--current-image': `url('${snaps[0]}')`,
        '--next-image': `url('${snaps[1]}')`,
      }}
      onClick={flip}>
      {new Array(CELL_COUNT).fill().map((cell, index) => {
        const x = index % gridSize
        const y = Math.floor(index / gridSize)
        return (
          <span
            key={index}
            className="flippy-card"
            style={{
              '--x': x,
              '--y': y,
            }}>
            <span className="flippy-card__front"></span>
            <span className="flippy-card__rear"></span>
          </span>
        )
      })}
    </button>
  )
}

The component handles rendering all the cards and setting the inline custom properties. The onClick handler for the container increments the count and triggers the onFlip callback; if the state is currently disabled, it does nothing. Toggling the disabled state and grabbing a new snap then causes a re-render with the updated count and images, which is what produces the visible flip.

See the Pen 5. React Foundation by JHEY

We have a React component that will now flip through images for as long as we want to keep requesting new ones. But, that flip transition is a bit boring. To spice it up, we’re going to make use of GreenSock and its utilities. In particular, the “distribute” utility. This will allow us to distribute the delay of flipping our cards in a grid-like burst from wherever we click. To do this, we’re going to use GreenSock to animate the --count value on each card.

It’s worth noting that we have a choice here. Instead of animating the --count custom property value, we could let GreenSock apply the styles directly and animate rotateX based on the count ref we have. The same goes for anything else we choose to animate with GreenSock in this article; it comes down to preference and use case. Updating the custom property value has the benefit that we don’t need to touch any JavaScript to get differently styled behavior. For example, we could change the CSS to use rotateY instead.
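
If we did go down the GreenSock-only route, a rough sketch of that alternative (reusing the count ref and letting GSAP target the cards by selector) might look like this:

// Sketch of the alternative: animate the rotation itself rather than --count
gsap.to('.flippy-card', {
  rotationX: count.current * -180, // the same math the CSS does with --count
  duration: 0.2,
})

With that approach, the .flippy-card rule wouldn’t need to read --count at all.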

Our updated flip function could look like this:

const flip = e => {
  if (disabled) return
  const x = parseInt(e.target.parentNode.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.target.parentNode.getAttribute('data-snap-y'), 10)
  count.current = count.current + 1
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--count': count.current,
    delay: gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      amount: gridSize / 20,
      base: 0,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut',
    }),
    duration: 0.2,
    onComplete: () => {
      // At this point update the images
      if (onFlip) onFlip(count)
    },
  })
}

Note how we’re getting an x and y value by reading attributes of the clicked card. For this demo, we’ve opted for adding some data attributes to each card. These attributes communicate a card's position in the grid. We’re also using a new ref called containerRef. This is so we reference only the cards for a FlippySnap instance when using GreenSock.

{new Array(CELL_COUNT).fill().map((cell, index) => {
  const x = index % gridSize
  const y = Math.floor(index / gridSize)
  return (
    <span
      key={index}
      className="flippy-card"
      data-snap-x={x}
      data-snap-y={y}
      style={{
        '--x': x,
        '--y': y,
      }}>
      <span className="flippy-card__front"></span>
      <span className="flippy-card__rear"></span>
    </span>
  )
})}

Once we get those x and y values, we can make use of them in our animation. Using gsap.to we want to animate the --count custom property for every .flippy-card that’s a child of containerRef.

To distribute the delay from where we click, we set the value of delay to use gsap.utils.distribute. The from value of the distribute function takes an Array containing ratios along the x and y axis. To get this, we divide x and y by gridSize. The base value is the initial value. For this, we want 0 delay on the card we click. The amount is the largest value. We've gone for gridSize / 20 but you could experiment with different values. Something based on the gridSize is a good idea though. The grid value tells GreenSock the grid size to use when calculating distribution. Last but not least, the ease defines the ease of the delay distribution.

gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
  '--count': count.current,
  delay: gsap.utils.distribute({
    from: [x / gridSize, y / gridSize],
    amount: gridSize / 20,
    base: 0,
    grid: [gridSize, gridSize],
    ease: 'power1.inOut',
  }),
  duration: 0.2,
  onComplete: () => {
    // At this point update the images
    if (onFlip) onFlip(count)
  },
})

As for the rest of the animation, we are using a flip duration of 0.2 seconds. And we make use of onComplete to invoke our callback. We pass the flip count to the callback so it can use this to determine snap order. Things like the duration of the flip could get configured by passing in different props if we wished.
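
For example, a hypothetical duration prop (not part of the demo as written) could be passed to FlippySnap and used in place of the hard-coded 0.2 second value:

<FlippySnap
  gridSize={gridSize}
  disabled={disabled}
  snaps={snaps}
  onFlip={onFlip}
  duration={0.35} // hypothetical prop for a slightly slower, more dramatic flip
/>

Inside the component, gsap.to would then read duration from props instead of the literal value.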

Putting it all together gives us this:

See the Pen 6. Distributed Flips with GSAP by JHEY

Those who like to push things a bit might have noticed that we can still “spam” click the snap. That’s because we don’t disable FlippySnap until GreenSock has completed. To fix this, we can use an internal ref that we toggle when GreenSock starts and finishes.

const flipping = useRef(false) // New ref to track the flipping state

const flip = e => {
  if (disabled || flipping.current) return
  const x = parseInt(e.target.parentNode.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.target.parentNode.getAttribute('data-snap-y'), 10)
  count.current = count.current + 1
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--count': count.current,
    delay: gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      amount: gridSize / 20,
      base: 0,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut',
    }),
    duration: 0.2,
    onStart: () => {
      flipping.current = true
    },
    onComplete: () => {
      // At this point update the images
      flipping.current = false
      if (onFlip) onFlip(count)
    },
  })
}

And now we can no longer spam click our FlippySnap!

See the Pen 7. No Spam Clicks by JHEY

Now it’s time for some extra touches. At the moment, there’s no visual sign that we can click our FlippySnap. What if, when we hover, the cards rise towards us? We could use onPointerOver and reach for the “distribute” utility again.

const indicate = e => {
  const x = parseInt(e.currentTarget.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.currentTarget.getAttribute('data-snap-y'), 10)
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--hovered': gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      base: 0,
      amount: 1,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut'
    }),
    duration: 0.1,
  })
}

Here, we are setting a new custom property on each card named --hovered. This is set to a value from 0 to 1. Then within our CSS, we are going to update our card styles to watch for the value.

.flippy-card {
  transform: translate3d(0, 0, calc((1 - (var(--hovered, 1))) * 5vmin))
             rotateX(calc(var(--count) * -180deg));
}

Here we are saying that a card will move on the z-axis at most 5vmin.

We then apply this to each card using the onPointerOver prop.

{new Array(CELL_COUNT).fill().map((cell, index) => {
  const x = index % gridSize
  const y = Math.floor(index / gridSize)
  return (
    <span
      key={index}
      onPointerOver={indicate}
      className="flippy-card"
      data-snap-x={x}
      data-snap-y={y}
      style={{
        '--x': x,
        '--y': y,
      }}>
      <span className="flippy-card__front"></span>
      <span className="flippy-card__rear"></span>
    </span>
  )
})}

And when our pointer leaves our FlippySnap we want to reset our card positions.


const reset = () => {
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--hovered': 1,
    duration: 0.1,
  })
}

And we can apply this with the onPointerLeave prop.

<button
  className="flippy-snap"
  ref={containerRef}
  onPointerLeave={reset}
  style={{
    '--grid-size': gridSize,
    '--count': count.current,
    '--current-image': `url('${snaps[0]}')`,
    '--next-image': `url('${snaps[1]}')`,
  }}
  onClick={flip}>

Put that all together and we get something like this. Try moving your pointer over it.

See the Pen 8. Visual Indication with Raised Cards by JHEY

What next? How about a loading indicator so we know when our App is grabbing the next image? We can render a loading spinner when our FlippySnap is disabled.

{disabled && <span className='flippy-snap__loader'></span>}

The styles for it make a rotating circle.

.flippy-snap__loader {
  border-radius: 50%;
  border: 6px solid #fff;
  border-left-color: #000;
  border-right-color: #000;
  position: absolute;
  right: 10%;
  bottom: 10%;
  height: 8%;
  width: 8%;
  transform: translate3d(0, 0, 5vmin) rotate(0deg);
  animation: spin 1s infinite;
}
@keyframes spin {
  to {
    transform: translate3d(0, 0, 5vmin) rotate(360deg);
  }
}

And this gives us a loading indicator when grabbing a new image.

See the Pen 9. Add Loading Indicator by JHEY

That’s it!

That’s how we can create a FlippySnap with React and GreenSock. It’s fun to make things that we may not create on a day-to-day basis. Demos like this can pose different challenges and can level up your problem-solving game.

I took it a little further and added a slight parallax effect along with some audio. You can also configure the grid size! (Big grids affect performance though.)

See the Pen 3D CSS Flippy Snaps v2 (React && GSAP) by JHEY

It’s worth noting that this demo works best in Chromium-based browsers.

So, where would you take it next? I’d like to see if I can recreate it with Three.js next. That would address the performance. 😅

Stay Awesome! ʕ•ᴥ•ʔ

Collective #689



Codrops Collective 687 image

Our Sponsor

Black Friday is Coming

Over $1,000,000 worth of free prizes, free bonus gifts, dozens of exclusive discounts and perks from our partners, and our biggest discount ever on Divi memberships and upgrades (plus tons of discounts in the Divi Marketplace)!

Check it out





Codrops Collective 689 item image

Tiny UI Toggle

Toggle the state of a UI element to easily create components e.g. collapse, accordion, tabs, dropdown, dialog/modal.

Check it out







Codrops Collective 689 item image

AppFlowy.IO

AppFlowy is an open source alternative to Notion where you are in charge of your data and customizations.

Check it out




Codrops Collective 689 item image

State of CSS 2021

Philip Jägenstedt gives an overview of the State of CSS 2021 survey results and how they will influence priorities in 2022.

Check it out


Codrops Collective 689 item image

Tamagui

In case you didn’t know about it: Universal React design systems that optimize for native and web.

Check it out


Codrops Collective 689 item image

Caffeine

A very basic REST service for JSON data – enough for prototyping and MVPs.

Check it out



Codrops Collective 689 item image

Backgrounds

In this “Learn CSS!” module you’ll learn how you can style backgrounds of boxes using CSS.

Check it out







The post Collective #689 appeared first on Codrops.

Improving The Performance Of Wix Websites (Case Study)

A website’s performance can make or break its success, yet in August 2020, despite many improvements we had previously made, such as implementing Server-Side Rendering (SSR), the ratio of Wix websites with good Google Core Web Vitals (CWV) scores was only 4%. It was at this point that we realized we needed to make a significant change in our approach towards performance, and that we must embrace performance as part of our culture.

Implementing this change enabled us to take major steps such as updating our infrastructure along with completely rewriting our core functionality from the ground up. We deployed these enhancements gradually over time to ensure that our users didn’t experience any disruptions, but instead only a consistent improvement of their site speed.

Since implementing these changes, we have seen a dramatic improvement in the performance of websites built and hosted on our platform. In particular, the worldwide ratio of Wix websites that receive a good (green) CWV score has increased from 4% to over 33%, which means an increase of over 750%. We also expect this upwards trend to continue as we roll out additional improvements to our platform.

You can see the impact of these efforts in the Core Web Vitals Technology Report from Google Chrome User Experience Report (CrUX) / HTTP Archive:

These performance improvements provide a lot of value to our users because sites that have good Google CWV scores are eligible for the maximum performance ranking boost in the Google search results (SERP). They also likely have increased conversion rates and lower bounce rates due to the improved visitor experience.

Now, let’s take a deeper look into the actions and processes we put in place in order to achieve these significant results.

The Wix Challenge

Let’s begin by describing who we are, what are our use-cases, and our challenges.

Wix is a SaaS platform providing products and services for any type of user to create an online presence. This includes building websites, hosting websites, managing campaigns, SEO, analytics, CRM, and much more. It was founded in 2006 and has since grown to have over 210 million users in 190 countries, and hosts over five million domains. In addition to content websites, Wix also supports e-commerce, blogs, forums, bookings and events, and membership and authentication. And Wix has its own app store with apps and themes for restaurants, fitness, hotels, and much more. To support all this, we have over 5,000 employees spread around the globe.

This high rate of growth, coupled with the current scale and diversity of offerings presents a huge challenge when setting out to improve performance. It’s one thing to identify bottlenecks and implement optimizations for a specific website or a few similar websites, and quite another when dealing with many millions of websites, having such a wide variety of functionality, and an almost total freedom of design. As a result, we cannot optimize for a specific layout or set of features that are known in advance. Instead, we have to accommodate all of this variability, mostly on-demand. On the positive side, since there are so many users and websites on Wix, improvements that we make benefit millions of websites, and can have a positive impact on the Web as a whole.

There are more challenges for us in addition to scale and diversity:

  • Retaining existing design and behavior
    A key requirement we set for ourselves was to improve the performance of all existing websites built on Wix without altering any aspect of their look and feel. So essentially, they need to continue to look and work exactly the same, only operate faster.
  • Maintaining development velocity
    Improving performance requires a significant amount of resources and effort. And the last thing we want is to negatively impact our developers' momentum, or our ability to release new features at a high rate. So once a certain level of performance is achieved, we want to be able to preserve it without being constantly required to invest additional effort, or slow down the development process. In other words, we needed to find a way to automate the process of preventing performance degradations.
  • Education
    In order to create change across our entire organization, we needed to get all the relevant employees, partners, and even customers up to speed about performance quickly and efficiently. This required a lot of planning and forethought, and quite a bit of trial and error.

Creating A Performance Culture

Initially, at Wix, performance was a task assigned to a relatively small dedicated group within the company. This team was tasked with identifying and addressing specific performance bottlenecks, while others throughout the organization were only brought in on a case-by-case basis. While some noticeable progress was made, it was challenging to implement significant changes just for the sake of speed.

This was because the amount of effort required often exceeded the capacity of the performance team, and also because ongoing work on various features and capabilities often got in the way. Another limiting factor was the lack of data and insight into exactly what the bottlenecks were so that we could know exactly where to focus our efforts for maximum effect.

About two years ago, we came to the conclusion that we cannot continue with this approach. That in order to provide the level of performance that our users require and expect we need to operate at the organizational level. And that if we do not provide this level of performance it will be detrimental to our business and future success. There were several catalysts for this understanding, some due to changes in the Web ecosystem in general, and others to our own market segment in particular:

  • Changes in device landscape
    Six years ago, over 70% of sessions for Wix websites originated from desktops, with under 30% coming from mobile devices. Since then the situation has flipped, and now over 70% of sessions originate on mobile. While mobile devices have come a long way in terms of network and CPU speed, many of them are still significantly underpowered when compared to desktops, especially in countries where mobile connectivity is still poor. As a result, unless performance improves, many visitors experience a decline in the quality of experience they receive over time.
  • Customer expectations
    Over the past few years, we’ve seen a significant shift in customer expectations regarding performance. Thanks to activities by Google and others, website owners now understand that having good loading speed is a major factor in the success of their sites. As a result, customers prefer platforms that provide good performance — and avoid or leave those that don’t.
  • Google search ranking
    Back in 2018 Google announced that sites with especially slow pages on mobile would be penalized. But starting in 2021, Google shifted its approach to instead boost the ranking of mobile sites that have good performance. This has increased the motivation of site owners and SEOs to use platforms that can create fast sites.
  • Heavier websites
    As the demand for faster websites increases, so does the expectation that websites provide a richer and more engaging experience. This includes features like videos and animations, sophisticated interactions, and greater customization. As websites become heavier and more complex, the task of maintaining performance becomes ever more challenging.
  • Better tooling and metrics standardization
    Measuring website performance used to be challenging and required specific expertise. But in recent years the ability to gauge the speed and responsiveness of websites has improved significantly and has become much simpler, thanks to tools like Google Lighthouse and PageSpeed Insights. Moreover, the industry has primarily standardized on Google’s Core Web Vitals (CWV) performance metrics, and monitoring them is now integrated into services such as the Google Search Console.

These changes dramatically shifted our perception of website performance from being just a part of our offerings to becoming an imperative company focus and a strategic priority. Achieving this strategy required implementing a culture of performance throughout the organization. To accomplish this, we took a two-pronged approach. First, at an “all hands” company update, our CEO announced that, going forward, ensuring good performance for websites built on our platform would be a strategic priority for the company as a whole, and that the various units within the company would be measured on their ability to deliver on this goal.

At the same time, the performance team underwent a huge transformation in order to support the company-wide prioritization of performance. It went from working on specific speed enhancements to interfacing with all levels of the organization, in order to support their performance efforts. The first task was providing education on what website performance actually means, and how it can be measured. Once the teams started working from that knowledge, this meant organizing performance-focused design and code reviews, training and education, plus providing tools and assets to support these ongoing efforts.

To this end, the team built on the expertise that it had already gained while working on specific performance projects. And it also engaged with the performance community as a whole, for example by attending conferences, bringing in domain experts, and studying up on modern architectures such as the Jamstack.

Measuring And Monitoring

Peter Drucker, one of the best-known management consultants, famously stated:

“If you can’t measure it, you can’t improve it.”

This statement is true for management, and it’s undoubtedly true for website performance.

But which metrics should be measured in order to determine website performance? Over the years many metrics have been proposed and used, which made it difficult to compare results taken from different tools. In other words, the field lacked standardization. This changed approximately two years ago when Google introduced three primary metrics for measuring website performance, known collectively as Google Core Web Vitals (CWV).

The three metrics are:

  1. LCP: Largest Contentful Paint (measures visibility)
  2. FID: First Input Delay (measures response time)
  3. CLS: Cumulative Layout Shift (measures visual stability)

CWV have enabled the industry to focus on a small number of metrics that cover the main aspects of the website loading experience. And the fact that Google is now using CWV as a search ranking signal provides additional motivation for people to improve them.

Recommended Reading: “An In-Depth Guide To Measuring Core Web Vitals” by Barry Pollard

At Wix, we focus on CWV when analyzing field data, but also use lab measurements during the development process. In particular, lab tests are critical for implementing performance budgets in order to prevent performance degradations. The best implementations of performance budgets integrate their enforcement into the CI/CD process, so they are applied automatically, and prevent deployment to production when a regression is detected. When such a regression does occur it breaks the build, forcing the team to fix it before deployment can proceed.

There are various performance budgeting products and open-source tools available, but we decided to create our own custom budgeting service called Perfer. This is because we operate at a much larger scale than most web development operations, and at any given moment hundreds of different components are being developed at Wix and are used in thousands of different combinations in millions of different websites.

This requires the ability to test a very large number of configurations. Moreover, in order to avoid breaking builds with random fluctuations, tests that measure performance metrics or scores are run multiple times and an aggregate of the results is used for the budget. In order to accommodate such a high number of test runs without negatively impacting build time, Perfer executes the performance measurements in parallel on a cluster of dedicated servers called WatchTower. Currently, WatchTower is able to execute up to 1,000 Lighthouse tests per minute.
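
Perfer and WatchTower are internal to Wix, but the core idea of a budget gate can be sketched with Lighthouse’s Node API. The URL and the 90-point threshold below are made-up examples, and a real setup would aggregate several runs as described above:

// budget-check.js: a minimal sketch of a CI performance budget, not Wix's actual Perfer service
const lighthouse = require('lighthouse')
const chromeLauncher = require('chrome-launcher')

const BUDGET = 90                            // hypothetical minimum performance score
const TARGET = 'https://staging.example.com' // hypothetical staging URL

const run = async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
  const { lhr } = await lighthouse(TARGET, { port: chrome.port, onlyCategories: ['performance'] })
  await chrome.kill()

  const score = Math.round(lhr.categories.performance.score * 100)
  if (score < BUDGET) {
    console.error(`Performance score ${score} is below the budget of ${BUDGET}`)
    process.exit(1) // break the build so the regression is fixed before deployment
  }
  console.log(`Performance score ${score} is within budget`)
}

run()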

After deployment, performance data is collected anonymously from all Wix sessions in the field. This is especially important in our case because the huge variety of Wix websites makes it effectively impossible to test all relevant configurations and scenarios “in the lab.” By collecting and analyzing RUM data, we ensure that we have the best possible insight into the experiences of actual visitors to the websites. If we identify that a certain deployment degrades performance and harms that experience, even though this degradation was not identified by our lab tests, we can quickly roll it back.

Another advantage of field measurements is that they match the approach taken by Google in order to collect performance data into the CrUX database. Since it is the CrUX data that is used as an input for Google’s performance ranking signal, utilizing the same approach for performance analysis is very important.

All Wix sessions contain custom instrumentation code that gathers performance metrics and transmits this information anonymously back to our telemetry servers. In addition to the three CWV, this code also reports Time To First Byte (TTFB), First Contentful Paint (FCP), Total Blocking Time (TBT), and Time To Interactive (TTI), as well as low-level metrics such as DNS lookup time and SSL handshake time. Collecting all this information makes it possible for us to not only quickly identify performance issues in production, but also to analyze the root causes of such issues. For example, we can determine whether an issue was caused by changes in our own software, by changes in our infrastructure configuration, or even by issues affecting third-party services that we utilize (such as CDNs).
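
Wix uses its own instrumentation code, but the general shape of such field collection can be sketched with the open-source web-vitals library; the /telemetry endpoint below is a placeholder:

import { getCLS, getFID, getLCP } from 'web-vitals'

// Beacon each metric to a (placeholder) telemetry endpoint as soon as it is available
const report = metric => {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id })
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/telemetry', body) // survives page unloads
  } else {
    fetch('/telemetry', { method: 'POST', body, keepalive: true })
  }
}

getCLS(report)
getFID(report)
getLCP(report)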

Upgrading Our Services And Infrastructure

Back when I joined Wix seven years ago, we only had a single data center (along with a fallback data center) in the USA which was used to serve users from all around the world. Since then we’ve expanded the number of data centers significantly, and have multiple such centers spread around the globe. This ensures that wherever our users connect from, they’ll be serviced both quickly and reliably. In addition, we use CDNs from multiple providers to ensure rapid content delivery regardless of location. This is especially important given that we now have users in 190 countries.

In order to make the best possible use of this enhanced infrastructure, we completely redesigned and rewrote significant portions of our front-end code. The goal was to shift as much of the computation as possible off of the browsers and onto fast servers. This is especially beneficial in the case of mobile devices, which are often less powerful and slower. In addition, this significantly reduced the amount of JavaScript code that needs to be downloaded by the browser.

Reducing JavaScript size almost always benefits performance because it decreases the overhead of the actual download as well as parsing and execution. Our measurements showed a direct correlation between the JavaScript size reduction and performance improvements:

Another benefit of moving computations from browsers to servers is that the results of these computations can often be cached and reused between sessions even for unrelated visitors, thus reducing per-session execution time dramatically. In particular, when a visitor navigates to a Wix site for the first time, the HTML of the landing page is generated on the server by Server-Side Rendering (SSR) and the resulting HTML can then be propagated to a CDN.

Navigations to the same site — even by unrelated visitors — can then be served directly from the CDN, without even accessing our servers. If this workflow sounds familiar that’s because it’s essentially the same as the on-demand mechanism provided by some advanced Jamstack services.

Note: “On-demand” means that instead of Static Site Generation performed at build time, the HTML is generated in response to the first visitor request, and propagated to a CDN at runtime.

Similarly to Jamstack, client-side code can enhance the user interface, making it more dynamic by invoking backend services using APIs. The results of some of these APIs are also cached in a CDN as appropriate. For example, in the case of a shopping cart checkout icon, the HTML for the button is generated on the server, but the actual number of items in the cart is determined on the client-side and then rendered into that icon. This way, the page HTML can be cached even though each visitor is able to see a different item count value. If the HTML of the page does need to change, for example, if the site owner publishes a new version, then the copy in the CDN is immediately purged.
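
As a simplified sketch of that pattern (the selector and endpoint are hypothetical, not Wix’s actual markup or API), the cached HTML ships a placeholder badge and the client fills in the per-visitor count:

// Hypothetical client-side hydration of a value that must not be cached in the CDN
const hydrateCartCount = async () => {
  const badge = document.querySelector('.cart-icon__count') // placeholder element in the cached HTML
  const res = await fetch('/api/cart/count')                // per-visitor data, fetched at runtime
  const { count } = await res.json()
  badge.textContent = count
}

hydrateCartCount()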

In order to reduce the impact of computations on end-point devices, we moved business logic that does need to run in the browsers into Web Workers. For example, business logic that is invoked in response to user interactions. The code that runs in the browser’s main thread is mostly dedicated to the actual rendering operations. Because Web Workers execute their JavaScript code off of the main thread, they don’t block event handling, enabling the browser to quickly respond to user interactions and other events.

Examples of code that runs in Web Workers include the business logic of various vertical solutions such as e-commerce and bookings. Sending requests to backend services is mostly done from Web Workers, and the responses are parsed, stored, and managed in the Web Workers as well. As a result, using Web Workers can reduce blocking and improve the FID metric significantly, providing better responsiveness in general. In lab tests, this showed up as improved TBT measurements.
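
A minimal sketch of that split (file names, the endpoint, and the renderProductList helper are illustrative, not Wix code) might look like this:

// products-worker.js: request, parse and filter a large response off the main thread
self.onmessage = async ({ data }) => {
  const res = await fetch(data.url)
  const products = await res.json()
  const inStock = products.filter(p => p.inventory > 0) // heavier work stays in the worker
  self.postMessage(inStock)
}

// main thread: only rendering work happens here
const worker = new Worker('products-worker.js')
worker.postMessage({ url: '/api/products' })
worker.onmessage = ({ data }) => renderProductList(data) // renderProductList is a hypothetical render helper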

Enhanced Media Delivery

Modern websites often provide a richer user experience by downloading and presenting much more media resources, such as images and videos, than ever before. Over the past decade the median amount of bytes of images downloaded by websites, according to the Google CrUX database, has increased more than eightfold! This is more than the median improvement in network speeds during the same period, which results in slower loading times. Additionally, our RUM data (field measurements) shows that for almost ¾ of Wix sessions the LCP element is an image. All of this highlights the need to deliver images to the browsers as efficiently as possible and to quickly display the images that are in a webpage’s initially visible viewport area.

At the same time, it is crucial to deliver the highest quality of images possible in order to provide an engaging and delightful user experience. This means that improving performance by noticeably degrading visual experience is almost always out of the question. The performance enhancements we implement need to preserve the original quality of images used, unless explicitly specified otherwise by the user.

One technique for improving media-related performance is optimizing the delivery process. This means downloading required media resources as quickly as possible. In order to achieve this for Wix websites, we use a CDN to deliver the media content, as we do with other resources such as the HTML itself. And by specifying a lengthy caching duration in the HTTP response header, we allow images to be cached by browsers as well. This can improve the loading speed for repeat visits to the same page significantly by completely avoiding downloading the images over the network again.
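
The CDN side of this is configuration-specific, but the browser-caching part comes down to a long-lived Cache-Control header on the image responses. A hedged Node/Express-style sketch, assuming image URLs change whenever their content changes:

// Hypothetical static image handler: cache for a year in browsers and CDNs
const express = require('express')
const app = express()

app.use('/media', express.static('media', {
  maxAge: '365d',  // results in Cache-Control: public, max-age=31536000, immutable
  immutable: true, // versioned URLs never need revalidation
}))

app.listen(3000)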

Another technique for improving performance is to deliver the required image information more efficiently by reducing the number of bytes that need to be downloaded while preserving the desired image quality. One method to achieve this is to use a modern image format such as WebP. Images encoded as WebP are generally 25% to 35% smaller than equivalent images encoded as PNG or JPG. Images uploaded to Wix are automatically converted to WebP before being delivered to browsers that support this format.

Very often images need to be resized, cropped, or otherwise manipulated when displayed within a webpage. This manipulation can be performed inside the browser using CSS, but this usually means that more data needs to be downloaded than is actually used. For example, all the pixels of an image that have been cropped out aren’t actually needed but are still delivered. We also take viewport size, resolution, and display pixel depth into account to optimize the image size. For Wix sites, we perform these manipulations on the server side before the images are downloaded; this way, we can ensure that only the pixels that are actually required are transmitted over the network. On the servers, we employ AI and ML models to generate resized images at the best quality possible.

Yet another technique that is used for reducing the amount of image data that needs to be downloaded upfront is lazy loading images. This means not loading images that are wholly outside the visible viewport until they are about to scroll in. Deferring image download in this way, and even avoiding it completely (if a visitor never scrolls to that part of the page), reduces network contention for resources that are already required as soon as the page loads, such as an LCP image. Wix websites automatically utilize lazy loading for images, and for various other types of resources as well.
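
Wix applies this automatically, but a minimal hand-rolled sketch using IntersectionObserver (the data-src attribute is a common convention here, not a Wix API) looks like this:

// Swap in the real source just before an image scrolls into view
const lazyImages = document.querySelectorAll('img[data-src]')

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return
    const img = entry.target
    img.src = img.dataset.src
    obs.unobserve(img) // load once, then stop watching
  })
}, { rootMargin: '200px 0px' }) // start downloading a little before the image becomes visible

lazyImages.forEach(img => observer.observe(img))

Modern browsers also support the native loading="lazy" attribute on images for the same purpose.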

Looking Forward

Over the past two years, we have deployed numerous enhancements to our platform intended to improve performance. The result of all these enhancements is a dramatic increase in the percentage of Wix websites that get a good score for all three CWVs compared to a year ago. But performance is a journey, not a destination, and we still have many more action items and future plans for improving websites’ speed. To that end, we are investigating new browser capabilities as well as additional changes to our own infrastructure. The performance budgets and monitoring that we have implemented act as safeguards, ensuring that these changes deliver actual benefits.

New media formats are being introduced that have the potential to reduce download sizes even more while retaining image quality. We are currently investigating AVIF, which looks to be especially promising for photographic images that can use lossy compression. In such scenarios, AVIF can provide significantly reduced download sizes even compared to WebP, while retaining image quality. AVIF also supports progressive rendering which may improve perceived performance and user experience, especially on slower connections, but currently won’t provide any benefits for CWV.

Another promising browser innovation that we are researching is the content-visibility CSS property. This property enables the browser to skip the effort of rendering an HTML element until it’s actually needed. In particular, when the content-visibility: auto setting is applied to an element that is off-screen, its descendants are not rendered. This enables the browser to skip most of the rendering work, such as styling and layout of the element’s subtree. This is especially desirable for many Wix pages, which tend to be lengthy and content-rich. In particular, Wix’s new EditorX responsive sites editor supports sophisticated grid and flexbox layouts that can be expensive for the browser to render, so avoiding unnecessary rendering operations is especially desirable. Unfortunately, this property is currently only supported in Chromium-based browsers. It’s also challenging to implement this functionality in such a way that no Wix website is ever adversely affected in terms of its visual appearance or behavior.

Priority Hints is an upcoming browser feature that we are also investigating, which promises to improve performance by providing greater control over when and how browsers download resources. This feature will inform browsers about which resources are more urgent and should be downloaded ahead of other resources. For example, a foreground image could be assigned a higher priority than a background image since it’s more likely to contain significant content. On the other hand, if applied incorrectly, priority hints can actually degrade download speed, and hence also CWV scores. Priority hints are currently undergoing Origin Trial in Chrome.

In addition to enhancing Wix’s own infrastructure, we’re also working on providing better tooling for our users so that they can design and implement faster websites. Since Wix is highly customizable, users have the freedom and flexibility to create both fast and slow websites on our platform, depending on the decisions they make while building these sites. Our goal is to inform users about the performance of their decisions so that they can make appropriate choices. This is similar to the SEO Wiz tool that we already provide.

Summary

Implementing a performance culture at Wix enabled us to apply performance enhancements to almost every part of our technological stack — from infrastructure to software architecture and media formats. While some of these enhancements have had a greater impact than others, it’s the cumulative effect that provides the overall benefits. And these benefits aren’t just measurable at a large scale; they’re also apparent to our users, thanks to tools like WebPageTest and Google PageSpeed Insights and actual feedback that they receive from their own users.

The feedback we ourselves receive, from our users and the industry at large, and the tangible benefits we experience, drive us forward to continue improving our speed. The performance culture that we’ve implemented is here to stay.


Collective #685









Next.js 12

Next.js 12 introduces a brand-new Rust compiler, Middleware (beta), React 18 Support, Native ESM Support, URL Imports, React Server Components (alpha), and more!

Check it out



CookLang

CookLang is a markup language for recipes. Create a recipe file, where each line is a step in the recipe.

Check it out





Obsidian

Obsidian is a powerful knowledge base on top of a local folder of plain text Markdown files. Free for personal use.

Check it out







Localstack

A fully functional local AWS cloud stack. Develop and test your cloud and Serverless apps offline.

Check it out



Compatlib

With these Python utilities you can easily write cross-version compatible libraries.

Check it out





The post Collective #685 appeared first on Codrops.

Collective #682






Collective 682 item image

Atropos

Atropos is a lightweight, free and open-source JavaScript library to create touch-friendly 3D parallax hover effects.

Check it out











Collective 682 item image

Tidy Viewer

Tidy Viewer is a cross-platform CLI CSV pretty printer that uses column styling to maximize viewer enjoyment.

Check it out



Collective 682 item image

AESON

Welcome to AESON, a futuristic (and creepy) chatroom in WebGL. The project was made at Gobelins Paris during a workshop. By Thoma Lecornu. Read more about it in this tweet.

Check it out


Collective 682 item image

Medusa

In case you didn’t know about it: Medusa is a headless open-source commerce platform.

Check it out




Collective 682 item image

From Our Blog

Creating 3D Characters in Three.js

Are you looking to get started with 3D on the web? In this tutorial we’ll walk through creating a three-dimensional character using Three.js, adding some simple but effective animation, and a generative color palette.

Read it


The post Collective #682 appeared first on Codrops.

6 Best Reseller Hosting Plans of 2021 (Best Value + Quality)

Are you looking for the best reseller hosting?

Reseller hosting lets you sell hosting services just like a web hosting company. Web designers, developers, and agencies can offer reseller hosting as an addon service for clients and customers.

In this article, we’ll share our favorite reseller hosting so that you can choose the right hosting company for your business.

Best reseller hosting of 2021 (compared)

What is Reseller Hosting and Who is it For?

With reseller hosting, you purchase web hosting services and then sell the server space and features to other customers.

Think of it like running your own web hosting business, but without all of the expensive costs like hardware, servers, maintenance, support, and more. Every technical task is managed behind the scenes by the web host.

It’s important to choose a high quality hosting provider, since their service will be the foundation for your business.

If you’re a developer, agency, or manage WordPress websites for clients in any way, then reselling hosting can be a great way to make money online.

With that said, let’s take a look at some of the best reseller hosting options available on the market today.

1. SiteGround

SiteGround Reseller

SiteGround is a popular hosting provider that’s known for its high quality support and fast loading speeds. It’s also one of the hosts officially recommended by WordPress.

The reseller hosting plans let you pass on all of SiteGround’s great hosting features to your clients.

All reseller packages support an unlimited number of hosting accounts, have free WordPress installation and updates, daily backups, and more.

Free SSL certificates, email accounts, and a CDN are included with your customers’ hosting accounts too.

You can also offer your clients access to the site staging features, datacenter selection, and free migration for those coming from other web hosts.

Pricing: SiteGround reseller plans start at $7.99 and include 20GB of storage and support for unlimited websites. If you want custom branding for your account, then you’ll need one of the higher priced plans.

For more details, see our SiteGround review to learn more about the features, performance, and more.

2. HostGator

HostGator Reseller

HostGator is one of the top WordPress hosting providers in the world. They’ve been around since 2002 and have grown to become one of the biggest and most beginner friendly hosts in the market.

The reseller hosting offers great features like unlimited domains, a free SSL certificate, dedicated IP addresses, FTP accounts, private name servers, automated backups, and more.

Every reseller plan comes with the WHM control panel for easier client management and server control. You can monitor and control the server bandwidth and disk space for every customer server.

It also includes WHMCS billing software to easily automate your billing.

Finally, HostGator includes 24/7 support via live chat and phone.

Pricing: HostGator reseller plans start at $19.95 per month and have 60GB of disk space and support for unlimited websites.

To learn more about HostGator, see our in depth HostGator review where we evaluate their speed, performance, and support.

3. GreenGeeks

GreenGeeks Reseller

GreenGeeks is well known in the hosting industry for being an environmentally friendly host. They offer very fast loading speeds, 24/7 US-based support, and power over 600,000 websites.

Every reseller plan includes unlimited disk space and bandwidth, a free CDN, and automated daily backups. Plus, high level plans can support eCommerce stores across different platforms like WooCommerce.

If your clients are more advanced users, then you can offer support for multiple versions of PHP, FTP access, WP-CLI, Git, and more.

White label services are also available, so you can sell hosting under your branding instead of GreenGeeks.

Pricing: GreenGeeks reseller plans start at $19.95 per month with 60GB of disk space and support for 25 cPanel accounts.

To learn more about GreenGeeks, see our in depth GreenGeeks review where we cover the pros and cons in depth.

4. WP Engine

WP Engine Managed Hosting

WP Engine is known for its managed WordPress hosting plans, rock solid support team, and fast speeds.

It’s very popular with WordPress developers since it offers support for up to 30 websites on managed hosting when you choose the Managed Hosting Scale plan.

You get access to 24/7 support, automated migrations, daily backups, SSH access, and a free SSL certificate with managed hosting.

Plus, with WP Engine, you get access to 10 different StudioPress WordPress themes that you can use on client websites.

Pricing: WP Engine starts at $241.67 per month when billed yearly and supports up to 30 websites.

For more details, see our in depth WP Engine review where we highlight the pros, cons, performance, and more.

5. A2 Hosting

A2 Reseller Hosting

A2 Hosting is a web host known for its speed, performance, and reliability. There’s also 24/7 tech support to assist with any website issues.

All of the reseller hosting plans are managed with the Web Host Manager (WHM) tool, which makes it easy to keep track of your client websites.

Plus, you can white label the hosting to create a branded experience for your customers.

The reseller plans also include free account migration, SSL certificates, automated backups, and a CDN.

Pricing: A2 Hosting reseller plans start at $24.99 per month when paid yearly, and include 60 GB of disk space, a money-back guarantee, and more.

For more details, see our detailed A2 Hosting review for an in depth look at the hosting features, performance, and plans.

6. InMotion

InMotion Reseller

InMotion is a popular host that offers reliable performance for business websites. The technical support team is very helpful, plus there’s 99.99% guaranteed uptime.

Every plan has high bandwidth and disk space to support more websites and traffic at an affordable price.

All plans come with a free cPanel or WHM control panel, root server access, and built-in DDoS and malware protection. This offers your customers flexibility and improved website security.

White label services and billing software are included for free. So, you can easily manage payments while offering customers a hosting experience with your own branding.

If you want to sell domain names too, then you can use the domain reseller account also included in the reseller program.

Pricing: InMotion Hosting reseller plans start at $29.99 per month when paid yearly and offer 80GB storage and support for 25 websites, along with a 90-day money-back guarantee.

To learn more about InMotion, see our in depth InMotion Hosting review where we highlight the performance, speed, and pros and cons.

What is the Best Reseller Hosting? (Expert Pick)

All of the reseller hosting services above are great choices. The best reseller host for your business will depend on your goals and the kind of websites you’ll be hosting.

If website speed and high quality customer support are important, then SiteGround is the best option.

If you want a beginner-friendly host that your customers can grow and scale their websites with, then HostGator is perfect.

We also looked into other reseller web hosting providers like Liquid Web, GoDaddy, Bluehost, and others, but we decided not to list them here to help you avoid choice paralysis.

We hope this article helped you find the best reseller hosting to help you start your own reseller business. You may also want to see our guide on choosing the best website builder and our expert picks on the best business phone services for small businesses.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post 6 Best Reseller Hosting Plans of 2021 (Best Value + Quality) appeared first on WPBeginner.

Slow Query Basics: Why Are Queries Slow?

Introduction

If you've ever worked with databases, you have probably encountered some queries that seem to take longer to execute than they should. Queries can become slow for various reasons ranging from improper index usage to bugs in the storage engine itself. However, in most cases, queries become slow because developers or MySQL database administrators neglect to monitor them and keep an eye on their performance. In this article, we will figure out how to avoid that.

What Is a Query?

To understand what makes queries slow and improve their performance, we have to start from the very bottom. First, we have to understand what a query fundamentally is. A simple question, huh? Yet many developers, and even very experienced database administrators, could fail to answer it.