How To Quickly Set Up, Use & Resell Webmail: A Guide For Agencies And Resellers

Webmail is a robust IMAP-based email service and the latest exciting addition to WPMU DEV’s all-in-one WordPress management platform product suite.

In this comprehensive guide, we show you how to get started with Webmail, how to use its features, and how to resell professional business email to clients. We also provide information on the benefits of offering IMAP-based email services for WPMU DEV platform users and resellers.

Read the full article to learn all about Webmail or click on one of the links below to jump to any section:

Overview of Webmail

In addition to our current email hosting offerings, Webmail is a standalone service for Agency plan members that allows for greater flexibility in email account creation.

WPMU DEV’s Webmail:

  • Is affordably priced.
  • Offers a superior email service with high standards of quality and reliability.
  • Does not require a third-party app to work.
  • Lets you set up email accounts on any domain you own or manage, whether it’s a root domain like mydomain.com or a subdomain such as store.mydomain.com.
  • Lets you provide clients with professional business email no matter where their domain is hosted (or whether the domain is associated with a site in your Hub or not).
  • Can be accessed from any device, even directly from your web browser.
  • Can be white labeled and resold under your own brand with Reseller.

Read more about the benefits of using Webmail.

Now let’s show you how to set your clients up with email accounts and a fully functional mailbox in just a few clicks, using any domain, no matter where that domain is hosted.

Getting Started With Webmail

Webmail is very quick and easy to set up.

If you’re an Agency member, just head on over to The Hub.

Now, all you need to do is get acquainted with the latest powerful tool in your complete WordPress site management toolbox…

Webmail Manager

The Hub lets you create, manage, and access IMAP email accounts for any domain you own from one central location, even domains that are not directly associated with a site in your Hub.

Click on Webmail on the main menu at the top of the screen…

The Hub - Webmail
Click Webmail to set up and manage your emails.

This will bring you to the Webmail Overview screen.

If you haven’t set up an email account yet, you’ll see the screen below. Click on the “Create New Email” button to get started.

Webmail screen with no email accounts set up yet!
Click the button to create a new email account in Webmail.

As mentioned earlier, Webmail gives you the choice of creating an email account from a domain you manage in The Hub, or a domain managed elsewhere.

For this tutorial, we’ll select a domain being managed in The Hub.

Select the domain you want to associate your email account with from the dropdown menu and click the arrow to continue.

Create New Email screen - Step 1 of 2
Select a domain managed in The Hub or elsewhere.

Next, create your email address, choose a strong password, and click on the blue arrow button to continue.

Create New Email screen - Step 2 of 2
Add your username and password to create your email address.

You will see a payment screen displaying the cost of your new email address and billing start date. Click the button to make the payment and create your new email account.

Email account payment screen.
Make the payment to complete setting up your email account.

Your new email account will be automatically created after payment has been successfully processed.

New user email has been created successfully.
Our new email has been created successfully…we’re in business!

The last step to make your email work correctly is to add the correct DNS records.

Fortunately, if your site or domain is hosted with WPMU DEV, Webmail Manager can easily and automatically do this for you too!

Note: If your domain is managed elsewhere, you will need to copy and manually add the DNS records at your registrar or DNS manager (e.g. Cloudflare).

Click on the View DNS Records button to continue.

This will bring up the DNS Records screen.

As our example site is hosted with WPMU DEV, all you need to do is click on the Add DNS Records button and the records will be automatically created and added to your email account.

DNS Records screen - Add DNS Records button selected.
If your domain is hosted with WPMU DEV, click the button to automatically add the correct DNS records and make your email work.

After completing this step, wait for the DNS records to propagate successfully before verifying the DNS.

You can use an online tool like https://dnschecker.org to check the DNS propagation status.

Note: DNS changes can take 24-48 hours to propagate across the internet, so allow some time for DNS propagation to occur, especially if the domain is hosted elsewhere.
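If you prefer the command line, you can also spot-check individual records with dig (a quick illustration; mydomain.com is a placeholder for your own domain):

dig +short MX mydomain.com
dig +short TXT mydomain.com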

Click the Verify DNS button to check if the DNS records have propagated.

DNS Records screen with Verify DNS button selected.
Click the Verify DNS button to check if your DNS records have propagated.

If your DNS records have propagated successfully, you will see green ticks for all records under the DNS Status column.

DNS Records screen showing green ticks in DNS Status for all records.
Your emails won’t be delivered correctly until all those ticks are green.

Your email account is now fully set up and ready to use.

Repeat the above process to create and add more emails.

Webmail overview screen showing an active domain.
Click on the + Create New Email button to add more emails.

Now that you know how to create a new email account, let’s look at how to manage your emails effectively.

Managing Your Emails

If you have set up one or more email accounts, navigate to the Webmail Manager screen any time to view a list of all connected domains, their status, number of email accounts associated with each domain, and additional options.

Webmail screen with added domain email accounts.
Manage all of your email accounts in the Webmail overview screen.

To manage your email accounts, click on a domain name or select Manage Domain Email from the Options dropdown menu (the vertical ellipsis icon).

Webmail screen - Manage Domain Email option selected.
Click on the vertical ellipsis and select Manage Domain Email to manage your email accounts.

This opens up the email management section for the selected domain.

The Email Accounts tab lists all the existing email accounts for that domain, status and creation date information, plus additional email management options that we’ll explore in a moment.

Webmail - Email Accounts tab
Email Accounts lists all the email accounts you have created for your domain.

Email accounts can have the following statuses: active, suspended, or disabled.

Active accounts can send and receive emails, provided DNS records have been set up and propagated correctly.

An account is suspended if its email activity violates our webmail provider’s email sending policy.

Disabling an account (see further below) only turns off the sending and receiving of emails and webmail access for that email account. It does not affect billing.

Note: Unless you delete the account, you will still be charged for a disabled email account.

Email accounts tab listing email accounts with different statuses.
Email accounts can display an active, suspended, or disabled status.

Before we discuss managing individual email accounts, let’s look at other main features of Webmail Manager.

Email Forwarding

Email forwarding automatically redirects emails sent to one email address to another designated email address. It allows users to receive emails sent to a specific address without having to check multiple accounts. For example, emails sent to info@yourcompany.tld can be automatically forwarded to john@yourcompany.tld.

Every email account includes 10 email forwarders. This allows you to automatically forward emails to multiple addresses simultaneously (e.g. john@yourcompany.tld, accounts@yourcompany.tld, etc.).

To activate email forwarding, hover over the arrow icon and turn its status to On, then click on Manage Email Forwarding to set up email forwarders.

Webmail - Email Accounts - Email Forwarding with status turned on and Manage Email Forwarding selected.
Turn Email Forwarding on and click on Manage Email Forwarding to set up forwarders for an email account.

This will bring up the Email Forwarding tab. Here, you can easily add, delete, and edit email forwarders.

If no email forwarders exist for your email account, click the Create Email Forwarder button to create the first one.

Email Forwarding screen with no forwarders set up yet.
Let’s create an email forwarder for this email account.

In the Add Email Forwarder screen, enter the forwarding email address to which you would like incoming email messages redirected, and click Save.

Webmail - Add Email Forwarder
You can create up to 10 email forwarders per email account.

As stated, you can add multiple forwarding email addresses to each email account (up to 10).

Webmail email forwarders.
Webmail’s Email Forwarding lets you easily add, delete, and edit email forwarders.

Webmail Login

With Webmail, all emails are stored on our servers. So, in addition to being able to access and view emails on any device, every webmail account includes a mailbox that can be accessed online directly via Webmail’s web browser interface.

There are several ways to log in and view emails.

Access Webmail From The Hub

To log into webmail directly via The Hub, you can go to the Email Account Management > Email Accounts screen of your domain, click the envelope icon next to the email account, and click on the Webmail Login link…

Webmail - Email Accounts - Webmail Login
Click on the envelope icon in Email Accounts to access Webmail login.

Or, if you are working inside an individual email account, just click on the Webmail Login link displayed in all of the account’s management screens…

Webmail - Email Accounts - Email Information - Webmail Login
Click on the Webmail Login link of any email account management screen to access emails for that account.

This will log you directly into the webmail interface for that email account.

Webmail interface
Webmail’s intuitive and easy-to-use interface.

The Webmail interface should look familiar and feel intuitive to most users. If help using any of Webmail’s features is required, click the Help icon on the menu sidebar to access detailed help documentation.

Let’s look at other ways to access Webmail.

Access Webmail From The Hub Client

If you have set up your own branded client portal using The Hub Client plugin, your team members and clients can access and manage emails via Webmail, provided their team user roles are configured with the appropriate access permissions and SSO (Single Sign-On) options are enabled.

This allows users to seamlessly log into an email account from your client portal without having to enter login credentials.

Webmail menu link on a branded client portal.
Team members and clients can access Webmail directly from your own branded client portal.

Direct Access URL

Another way to log into Webmail is via Direct Access URL.

To access webmail directly from your web browser for any email account, enter the following URL into your browser exactly as shown here: https://webmail.yourwpsite.email/, then enter the email address and password, and click “Login.”

Webmail direct login
Log into webmail directly from your web browser.

Note: The above example uses our white labeled URL address webmail.yourwpsite.email to log into Webmail via a web browser. However, you can also brand your webmail accounts with your own domain so users can access their email from a URL like webmail.your-own-domain.tld.

For more details on how to set up your own branded domain URL, see our Webmail documentation.

Email Aliases

An email alias is a virtual email address that redirects emails to a primary email account. It serves as an alternative name for a single mailbox, enabling users to create multiple email addresses that all direct messages to the same inbox.

For instance, the following could all be aliases for the primary email address john@mysite.tld:

  • sales@mysite.tld
  • support@mysite.tld
  • info@mysite.tld

Webmail lets you create up to 10 email aliases per email account.

To create an alias for an email account, click on the vertical ellipsis icon and select Add Alias.

Webmail - Add Alias
Let’s add an alias to our email account.

Enter the alias username(s) you would like to create in the Add Alias modal and click Save.

Webmail - Add Alias screen with three aliases set up.
You can create up to 10 aliases for each email account.

Emails sent to any of these aliases will be delivered to your current email account.

Additional Email Management Features

In addition to the features and options found in the Email Accounts tab that we have just discussed, Webmail lets you manage various options and settings for each individual email account.

Let’s take a brief look at some of these options and settings.

Email Information

To manage an individual email account:

  1. Click on The Hub > Webmail to access the Email Accounts tab
  2. Click on the domain you have set up to use Webmail
  3. Click on the specific email account (i.e. the email address) you wish to manage.

Click on the Webmail management screens to access and manage individual email accounts.

The Email Information tab lets you edit your current email account and password and displays important information, such as status, creation date (this is the date your billing starts for this email account), storage used, and current email send limit.

Webmail - Email Accounts - Email Information tab.
Edit and view information about an individual email account in the Email Information tab.

In addition to the Email Information tab, you can click on the Email Forwarding tab to manage your email forwarders and the Email Aliases tab to manage your email aliases for your email account.

Note: Newly created accounts have send limits set up to prevent potential spamming and account suspension. These limits gradually increase over a two-week period, allowing email accounts to send up to 500 emails every 24 hours.

Email Information - Email limit increase.
Each email account’s send limits increase over two weeks and can send up to 500 emails per 24 hours.

Coming soon, you will also be able to add more storage to your email accounts if additional space is required.

Upgrade Storage modal
Upgrade your email account storage space (coming soon!)

Now that we have drilled down and looked at all the management tabs for an individual email account, let’s explore some additional features of the Webmail Manager.

Go back to The Hub > Webmail and click on one of the email accounts you have set up.

DNS Records

Click on the DNS Records tab to view the DNS Records of your email domain.

DNS Records Tab
Set up and verify your email DNS records in the DNS Records tab.

Note: The DNS Records tab is available to team member and client custom roles, so team members and clients can access these records if you give them permission.

Configurations

Click on the Configurations tab to view and download configuration settings that allow you to set up email accounts in applications other than Webmail.

Webmail - Domain Email - Configurations
Download and use the configurations shown in this section to set up email accounts in other applications.

The Configurations tab is also available for both team member and client custom roles.
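To give you a rough idea, here is what such configuration settings usually cover in a typical email client (the hostnames below are placeholders only; always use the exact values shown in your Configurations tab):

  • Incoming mail (IMAP): imap.example-provider.tld, port 993, SSL/TLS
  • Outgoing mail (SMTP): smtp.example-provider.tld, port 465 (SSL/TLS) or 587 (STARTTLS)
  • Username: the full email address (e.g. john@yourcompany.tld)
  • Password: the mailbox password set when the account was created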

Client Association

If you want to allow clients to manage their own email accounts, you will need to set up your client account first, assign permissions to allow the client to view Webmail, then link the client account with the email domain in the Client Association tab.

After setting up your client in The Hub, navigate to the Client Association tab (The Hub > Webmail > Email Domain) and click on Add Client.

Webmail - Domain Email - Client Association
You can let clients manage their own email accounts by linking the email domain with their client account.

Select the client from the dropdown menu and click Add.

Webmail - Associate email with a client modal.
Linking the email domain with a client allows them to manage their email accounts.

Notes:

  • When you associate a client with an email domain, SSO for the email domain is disabled in The Hub. However, your client will be able to access Webmail login via The Hub Client plugin.
  • The Client Association tab is only made available for team member custom roles.

Reseller Integration

We’re currently working on bringing full auto-provisioning of emails to our Reseller platform. Until this feature is released, you can manually resell emails to clients and bill them using the Clients & Billing tool.

Once Webmail has been fully integrated with our Reseller platform, you will be able to rebrand Webmail as your own and resell everything under one roof: hosting, domains, templates, plugins, expert support…and now business emails!

Reseller price table example.
Resell professional business emails under your own brand!

If you need help with Reseller, check out our Reseller documentation.

Congratulations! Now you know how to set up, manage, and resell Webmail in your business as part of your digital services.

Email Protocols – Quick Primer

WPMU DEV offers the convenience of using both IMAP and POP3 email.

Not sure what IMAP is, how it works, or how IMAP differs from POP3? Then read below for a quick primer on these email protocols.

What is IMAP?

IMAP (Internet Message Access Protocol) is a standard protocol used to retrieve emails from a mail server. It allows users to access their emails from multiple devices like a phone, laptop, or tablet, because it stores emails on the server, rather than downloading them to a single device.

Since emails are managed and stored on the server, this reduces the need for extensive local storage and allows for easy backup and recovery.

Additional points about IMAP:

  • Users can organize emails into folders, flag them for priority, and save drafts on the server.
  • It supports multiple email clients syncing with the server, ensuring consistent message status across devices.
  • IMAP operates as an intermediary between the email server and client, enabling remote access from any device.
  • When users read emails via IMAP, they’re viewing them directly from the server without downloading them locally.
  • IMAP downloads messages only upon user request, enhancing efficiency compared to other protocols like POP3.
  • Messages persist on the server unless deleted by the user.
  • IMAP uses port 143, while IMAP over SSL/TLS uses port 993 for secure communication.
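To make the protocol a little more concrete, here is what a bare-bones IMAP session over SSL/TLS looks like when you talk to a server by hand, using openssl as the client (a hedged illustration; the server address and credentials are placeholders):

openssl s_client -quiet -crlf -connect imap.example.com:993
a1 LOGIN john@example.com secret-password
a2 SELECT INBOX
a3 SEARCH UNSEEN
a4 LOGOUT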

The advantages of using IMAP include the following:

  • Multi-Device Access: IMAP supports multiple logins, allowing users to connect to the email server from various devices simultaneously.
  • Flexibility: Unlike POP3, IMAP enables users to access their emails from different devices, making it ideal for users who travel frequently or need access from multiple locations.
  • Shared Mailbox: A single IMAP mailbox can be shared by multiple users, facilitating collaboration and communication within teams.
  • Organizational Tools: Users can organize emails on the server by creating folders and subfolders, enhancing their efficiency in managing email correspondence.
  • Email Functions Support: IMAP supports advanced email functions such as search and sort, improving user experience and productivity.
  • Offline Access: IMAP can be used offline, allowing users to access previously downloaded emails even without an internet connection.

There are some challenges to setting up and running your own IMAP service, which is why using a solution like WPMU DEV’s Webmail is highly recommended:

  • Hosting an IMAP service can be resource-intensive, requiring more server storage and bandwidth to manage multiple connections and the storage of emails.
  • IMAP requires implementing SSL encryption to ensure secure email communication.
  • Smaller businesses might find it challenging to allocate the necessary IT resources for managing an IMAP server efficiently.

IMAP vs POP3: What’s The Difference?

IMAP and POP3 are both client-server email retrieval protocols, but they are two different methods for accessing email messages from a server.

IMAP is designed for modern email users. It allows users to access their email from multiple devices because it keeps their emails on the server. When users read, delete, or organize their emails, these changes are synchronized across all devices.

For example, if you read an email on your phone, it will show as being read on your laptop as well.

POP3, on the other hand, is simpler and downloads emails from the server to a single device, then usually deletes them from the server. This means if users access their emails from a different device, they won’t see the emails that were downloaded to the first device.

For instance, if you download an email via POP3 on your computer, that email may not be accessible on your phone later.

Here are some of the key differences between IMAP and POP3:

Storage Approach

  • IMAP: Users can store emails on the server and access them from any device. It functions more like a remote file server.
  • POP3: Emails are saved in a single mailbox on the server and downloaded to the user’s device when accessed.

Access Flexibility

  • IMAP: Allows access from multiple devices, enabling users to view and manage emails consistently across various platforms.
  • POP3: Emails are typically downloaded to one device and removed from the server.

Handling of Emails

  • IMAP: Maintains emails on the server, allowing users to organize, flag, and manage them remotely.
  • POP3: Operates as a “store-and-forward” service, where emails are retrieved and then removed from the server.

In practice, IMAP is better suited for users who want to manage their emails from multiple devices or locations, offering greater flexibility and synchronization. POP3 could be considered for situations where email access is primarily from a single device, or where there is a need to keep local copies of emails while removing them from the server to save space.

Essentially, IMAP prioritizes remote access and centralized email management on the server, while POP3 focuses on downloading and storing emails locally.

Professional Business Email For Your Clients

Integrating email hosting, particularly IMAP, with web hosting to create a seamless platform for managing client websites and emails under one roof is challenging, costly, and complex.

With WPMU DEV’s Webmail, you can enhance your email management capabilities and provide clients with affordable, professional business email that is easy to use and does not require a third-party app, no matter where their domain is hosted.

Note: If you don’t require the full features of IMAP email for a site hosted with WPMU DEV, we also offer the option to create POP3 email accounts with our hosted email. These accounts can be linked to any email client of your choice, ensuring flexibility and convenience.

If you’re yet to set up a WPMU DEV account, we encourage you to become an Agency member. It’s 100% risk-free and includes everything you need to manage your clients and resell services like hosting, domains, emails, and more, all under your own brand.

If you’re already an Agency member, then head over to your Hub and click on Webmail to get started. If you need any help, our support team is available 24×7 (or ask our AI assistant) and you can also check out our extensive webmail documentation.

The View Transitions API And Delightful UI Animations (Part 2)

Last time we met, I introduced you to the View Transitions API. We started with a simple default crossfade transition and applied it to different use cases involving elements on a page transitioning between two states. One of those examples took the basic idea of adding products to a shopping cart on an e-commerce site and created a visual transition that indicates an item has been added to the cart.

The View Transitions API is still considered an experimental feature that’s currently supported only in Chrome at the time I’m writing this, but I’m providing that demo below as well as a video if your browser is unable to support the API.

Those diagrams illustrate (1) the origin page, (2) the destination page, (3) the type of transition, and (4) the transition elements. The following is a closer look at the transition elements, i.e., the elements that receive the transition and are tracked by the API.

So, what we’re working with are two transition elements: a header and a card component. We will configure those together one at a time.

Header Transition Elements

The default crossfade transition between the pages has already been set, so let’s start by registering the header as a transition element by assigning it a view-transition-name. First, let’s take a peek at the HTML:

<div class="header__wrapper">
  <!-- Link back arrow -->
  <a class="header__link header__link--dynamic" href="/">
    <svg ...><!-- ... --></svg>
  </a>
  <!-- Page title -->
  <h1 class="header__title">
    <a href="/" class="header__link-logo">
      <span class="header__logo--deco">Vinyl</span>Emporium </a>
  </h1>
  <!-- ... -->
</div>

When the user navigates between the homepage and an item details page, the arrow in the header appears and disappears — depending on which direction we’re moving — while the title moves slightly to the right. We can use display: none to handle the visibility.

/* Hide back arrow on the homepage */
.home .header__link--dynamic {
    display: none;
}

We’re actually registering two transition elements within the header: the arrow (.header__link--dynamic) and the title (.header__title). We use the view-transition-name property on both of them to define the names we want to call those elements in the transition:

@supports (view-transition-name: none) {
  .header__link--dynamic {
    view-transition-name: header-link;
  }
  .header__title {
    view-transition-name: header-title;
  }
}

Note how we’re wrapping all of this in a CSS @supports query so it is scoped to browsers that actually support the View Transitions API. So far, so good!

To do that, let’s start by defining our transition elements and assigning transition names to the two elements we’re transitioning: the product image (.product__image--deco) and the product disc behind the image (.product__media::before).

@supports (view-transition-name: none) {
  .product__image--deco {
    view-transition-name: product-lp;
  }
 .product__media::before {
    view-transition-name: flap;
  }
  ::view-transition-group(product-lp) {
    animation-duration: 0.25s;
    animation-timing-function: ease-in;
  }
  ::view-transition-old(product-lp),
  ::view-transition-new(product-lp) {
    /* Removed the crossfade animation */
    mix-blend-mode: normal;
    animation: none;
  }
}

Notice how we had to remove the crossfade animation from the product image’s old (::view-transition-old(product-lp)) and new (::view-transition-new(product-lp)) states. So, for now, at least, the album disc changes instantly the moment it’s positioned back behind the album image.

But doing this messed up the transition between our global header navigation and product details pages. Navigating from the item details page back to the homepage results in the album disc remaining visible until the view transition finishes rather than running when we need it to.

Let’s configure the router to match that structure. Each route gets a loader function to handle page data.

import { createBrowserRouter, RouterProvider } from "react-router-dom";
import Category, { loader as categoryLoader } from "./pages/Category";
import Details, { loader as detailsLoader } from "./pages/Details";
import Layout from "./components/Layout";

/* Other imports */

const router = createBrowserRouter([
  {
    /* Shared layout for all routes */
    element: <Layout />,
    children: [
      {
        /* Homepage is going to load a default (first) category */
        path: "/",
        element: <Category />,
        loader: categoryLoader,
      },
      {
      /* Other categories */
        path: "/:category",
        element: <Category />,
        loader: categoryLoader,
      },
      {
        /* Item details page */
        path: "/:category/product/:slug",
        element: <Details />,
        loader: detailsLoader,
      },
    ],
  },
]);

const root = ReactDOM.createRoot(document.getElementById("root"));
root.render(
  <React.StrictMode>
    <RouterProvider router={router} />
  </React.StrictMode>
);

With this, we have established the routing structure for the app:

  • Homepage (/);
  • Category page (/:category);
  • Product details page (/:category/product/:slug).

And depending on which route we are on, the app renders a Layout component. That’s all we need as far as setting up the routes that we’ll use to transition between views. Now, we can start working on our first transition: between two category pages.

Transition Between Category Pages

We’ll start by implementing the transition between category pages. The transition performs a crossfade animation between views. The only part of the UI that does not participate in the crossfade is the bottom border of the category filter menu, which provides a visual indication of the active category filter and moves between the formerly active filter and the currently active one. We will eventually register it as its own transition element.

Since we’re using react-router, we get its web-based routing solution, react-router-dom, baked right in, giving us access to the DOM bindings — or router components we need to keep the UI in sync with the current route as well as a component for navigational links. That’s also where we gain access to the View Transitions API implementation.

Specifically, we will use the component for navigation links (Link) with the unstable_viewTransition prop that tells the react-router to run the View Transitions API when switching page contents.

import { Link, useLocation } from "react-router-dom";
/* Other imports */

const NavLink = ({ slug, title, id }) => {
  const { pathname } = useLocation();
  /* Check if the current nav link is active */
  const isMatch = slug === "/" ? pathname === "/" : pathname.includes(slug);
  return (
    <li key={id}>
      <Link
        className={isMatch ? "nav__link nav__link--current" : "nav__link"}
        to={slug}
        unstable_viewTransition
      >
        {title}
      </Link>
    </li>
  );
};

const Nav = () => {
  return (
    <nav className={"nav"}>
      <ul className="nav__list">
        {categories.items.map((item) => (
          <NavLink key={item.id} {...item} />
        ))}
      </ul>
    </nav>
  );
};

That is literally all we need to register and run the default crossfading view transition! That’s again because react-router-dom is giving us access to the View Transitions API and does the heavy lifting to abstract the process of setting transitions on elements and views.

Creating The Transition Elements

We only have one UI element that gets its own transition and a name for it, and that’s the visual indicator for the actively selected product category filter in the app’s navigation. While the app transitions between category views, it runs another transition on the active indicator that moves its position from the origin category to the destination category.

I know that I had earlier described that visual indicator as a bottom border, but we’re actually going to establish it as a standard HTML horizontal rule (<hr>) element and conditionally render it depending on the current route. So, basically, the <hr> element is fully removed from the DOM when a view transition is triggered, and we re-render it in the DOM under whatever NavLink component represents the current route.

We want this transition only to run if the navigation is visible, so we’ll use the react-intersection-observer helper to check if the element is visible and, if it is, assign it a viewTransitionName in an inline style.

import { useInView } from "react-intersection-observer";
/* Other imports */

const NavLink = ({ slug, title, id }) => {
  const { pathname } = useLocation();
  /* Track whether the nav link is in view so the marker only gets a transition name when visible */
  const { ref, inView } = useInView();
  const isMatch = slug === "/" ? pathname === "/" : pathname.includes(slug);
  return (
    <li key={id}>
      <Link
        ref={ref}
        className={isMatch ? "nav__link nav__link--current" : "nav__link"}
        to={slug}
        unstable_viewTransition
      >
        {title}
      </Link>
      {isMatch && (
        <hr
          style={{
            viewTransitionName: inView ? "marker" : "",
          }}
          className="nav__marker"
        />
      )}
    </li>
  );
};
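If you want more control over how the marker glides between categories, you could also target its transition group in CSS, mirroring the pattern we used earlier for the product image (a hedged sketch; the timing values are arbitrary):

::view-transition-group(marker) {
  animation-duration: 0.25s;
  animation-timing-function: ease-out;
}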

First, let’s take a look at our Card component used in the category views. Once again, react-router-dom makes our job relatively easy, thanks to the unstable_useViewTransitionState hook. The hook accepts a URL string and returns true if there is an active page transition to the target URL, as well as if the transition is using the View Transitions API.

That’s how we’ll make sure that our active image remains a transition element when navigating between a category view and a product view.

import { Link, unstable_useViewTransitionState } from "react-router-dom";
/* Other imports */

const Card = ({ author, category, slug, id, title }) => {
  /* We'll use the same URL value for the Link and the hook */
  const url = `/${category}/product/${slug}`;

  /* Check if the transition is running for the item details pageURL */
  const isTransitioning = unstable_useViewTransitionState(url);

  return (
    <li className="card">
      <Link unstable_viewTransition to={url} className="card__link">
        <figure className="card__figure">
          <img
            className="card__image"
            style={{
              /* Apply the viewTransitionName if the card has been clicked on */
              viewTransitionName: isTransitioning ? "item-image" : "",
            }}
            src={`/assets/${category}/${id}-min.jpg`}
            alt=""
          />
         {/* ... */}
        </figure>
        <div className="card__deco" />
      </Link>
    </li>
  );
};

export default Card;

We know which image in the product view is the transition element, so we can apply the viewTransitionName directly to it rather than having to guess:

import {
  Link,
  useLoaderData,
  unstable_useViewTransitionState,
} from "react-router-dom";
/* Other imports */

const Details = () => {
  const data = useLoaderData();
  const { id, category, title, author } = data;
  return (
    <>
      <section className="item">
        {/* ... */}
        <article className="item__layout">
          <div>
              <img
                style={{viewTransitionName: "item-image"}}
                className="item__image"
                src={`/assets/${category}/${id}-min.jpg`}
                alt=""
              />
          </div>
          {/* ... */}
        </article>
      </section>
    </>
  );
};

export default Details;

We’re on a good track but have two issues that we need to tackle before moving on to the final transitions.

One is that the Card component’s image (.card__image) contains some CSS that applies a fixed one-to-one aspect ratio and centering for maintaining consistent dimensions no matter what image file is used. Once the user clicks on the Card — the .card__image in a category view — it becomes an .item__image in the product view and should transition into its original state, devoid of those extra styles.


/* Card component image */
.card__image {
  object-fit: cover;
  object-position: 50% 50%;
  aspect-ratio: 1;
  /* ... */
}

/* Product view image */
.item__image {
 /* No aspect-ratio applied */
 /* ... */
}

Jake has recommended using React’s flushSync function to make this work. The function forces synchronous and immediate DOM updates inside a given callback. It’s meant to be used sparingly, but it’s okay to use it for running the View Transitions API as the target component re-renders.

// Assigns view-transition-name to the image before transition runs
const [isImageTransition, setIsImageTransition] = React.useState(false);

// Applies fixed-positioning and full-width image styles as transition runs
const [isFullImage, setIsFullImage] = React.useState(false);

/* ... */

// State update function, which triggers the DOM update we want to animate
const toggleImageState = () => setIsFullImage((state) => !state);

// Click handler function - toggles both states.
const handleZoom = async () => {
  // Run API only if available.
  if (document.startViewTransition) {
    // Set image as a transition element.
    setIsImageTransition(true);
    const transition = document.startViewTransition(() => {
      // Apply DOM updates and force immediate re-render while.
      // View Transitions API is running.
      flushSync(toggleImageState);
    });
    await transition.finished;
    // Cleanup
    setIsImageTransition(false);
  } else {
    // Fallback 
    toggleImageState();
  }
};

/* ... */

With this in place, all we really have to do now is toggle class names and view transition names depending on the state we defined in the previous code.

import React from "react";
import { flushSync } from "react-dom";

/* Other imports */

const Details = () => {
  /* React state, click handlers, util functions... */

  return (
    <>
      <section className="item">
        {/* ... */}
        <article className="item__layout">
          <div>
            <button onClick={handleZoom} className="item__toggle">
              <img
                style={{
                  viewTransitionName:
                    isTransitioning || isImageTransition ? "item-image" : "",
                }}
                className={
                  isFullImage
                    ? "item__image item__image--active"
                    : "item__image"
                }
                src={`/assets/${category}/${id}-min.jpg`}
                alt=""
              />
            </button>
          </div>
          {/* ... */}
        </article>
      </section>
      <aside
        className={
          isFullImage ? "item__overlay item__overlay--active" : "item__overlay"
        }
      />
    </>
  );
};

We are applying viewTransitionName directly on the image’s style attribute. We could have used boolean variables to toggle a CSS class and set a view-transition-name in CSS instead. The only reason I went with inline styles is to show both approaches in these examples. You can use whichever approach fits your project!
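For reference, a minimal sketch of that CSS-based alternative could look like this (the modifier class name is made up for this example):

.item__image--named {
  view-transition-name: item-image;
}

The component would then toggle the class, e.g. className={isImageTransition ? "item__image item__image--named" : "item__image"}, instead of writing the inline style.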

Let’s round this out by refining styles for the overlay that sits behind the image when it is expanded:

.item__overlay--active {
  z-index: 2;
  display: block;
  background: rgba(0, 0, 0, 0.5);
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;
  height: 100vh;
}

.item__image--active {
  cursor: zoom-out;
  position: absolute;
  z-index: 9;
  top: 50%;
  left: 50%;
  transform: translate3d(-50%, -50%, 0);
  max-width: calc(100vw - 4rem);
  max-height: calc(100vh - 4rem);
}

Demo

The following demonstrates only the code that is directly relevant to the View Transitions API so that it is easier to inspect and use. If you want access to the full code, feel free to get it in this GitHub repo.

Conclusion

We did a lot of work with the View Transitions API in the second half of this brief two-part article series. Together, we implemented full-view transitions in two different contexts, one in a more traditional multi-page application (i.e., website) and another in a single-page application using React.

We started with transitions in an MPA because the process requires fewer dependencies than working with a framework in an SPA. We were able to set the default crossfade transition between two pages — a category page and a product page — and, in the process, we learned how to set view transition names on elements after the transition runs to prevent naming conflicts.

From there, we applied the same concept in a SPA, that is, an application that contains one page but many views. We took a React app for a “Museum of Digital Wonders” and applied transitions between full views, such as navigating between a category view and a product view. We got to see how react-router — and, by extension, react-router-dom — is used to define transitions bound to specific routes. We used it not only to set a crossfade transition between category views and between category and product views but also to set a view transition name on UI elements that also transition in the process.

The View Transitions API is powerful, and I hope you see that after reading this series and following along with the examples we covered together. What used to take a hefty amount of JavaScript is now a somewhat trivial task, and the result is a smoother user experience that irons out the process of moving from one page or view to another.

That said, the View Transitions API’s power and simplicity need the same level of care and consideration for accessibility as any other transition or animation on the web. That includes things like being mindful of user motion preferences and resisting the temptation to put transitions on everything. There’s a fine balance that comes with making accessible interfaces, and motion is certainly included.
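For instance, a common safeguard (a hedged sketch based on the API’s standard pseudo-elements, not code from the demo) is to turn off view transition animations entirely for users who have requested reduced motion:

@media (prefers-reduced-motion) {
  ::view-transition-group(*),
  ::view-transition-old(*),
  ::view-transition-new(*) {
    animation: none !important;
  }
}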


WordPress Playground: From 5-Minute Install To Instant Spin-Up

Many things have changed in WordPress over the years, but installation has largely remained the same: download WordPress, drop it on a server, create a database, sprinkle in some configuration, and presto, we have a WordPress site. This process was once lovingly referred to as the “famous five-minute install,” although that moniker seems to have faded with time, particularly as many hosting providers offer a more streamlined experience.

But what if WordPress didn’t require any setup at all? As in, you tap a link, and WordPress spins up a site for you right there, on demand? That’s probably difficult to imagine, considering WordPress runs on top of PHP, MySQL databases, and Apache. It’s not the most portable system.

That’s the aim of WordPress Playground, which got its first public boost when Matt Mullenweg introduced it during State of Word 2022.

Notice how the URL is a subdomain of a TasteWP-related domain: hangingpurpose.s1-tastewp.com. TasteWP generates an instance on its multi-site network and establishes a URL for it based on a randomized naming system.

There’s a giant countdown timer on the screen that indicates when the site is scheduled to expire. That makes sense, right? Allowing anyone and everyone to create a site on the spot without so much as a login could become taxing on the server, so allowing sites to self-destruct on a schedule likely has as much to do with self-preservation as it does with economics.

Speaking of economics, the countdown timer is immediately followed by a call to action to upgrade, which buys you permanence, extra server space, and customer support.

Without upgrading, though, you are only allowed two free instant sites. But if you create an account and log into TasteWP, then you can create up to six test sites on a free pricing tier.

That’s a look at the “quick” onboarding, but TasteWP does indeed have a more robust way to spin up a WordPress testing site with a set of advanced configurations, including which WordPress version to use with which version of PHP, settings you might normally define in wp-config.php, and options for adding specific themes and plugins.

So, how does that compare to WordPress Playground? Perhaps the greatest difference is that a TasteWP site is connected to the internet. It’s not a WordPress simulation, but an actual instance with a URL you can link up and share with others… as long as the site hasn’t expired. That could very well be enough of a differentiation to warrant more players in this space, even with WordPress Playground hanging around.

I wanted to give you a sense of what’s already offered before actually unboxing WordPress Playground. Now that we know what else is out there, let’s turn our attention back to Playground and explore it.

Starting Up WordPress Playground

One of the first interesting things about WordPress Playground is that it is available in not just one but several places. I wouldn’t liken it completely to a service like TasteWP, where you create an account to create and manage WordPress instances. It’s more like a developer tool, one that you can reach for when testing your work in a WordPress environment.

You can simply hit the playground.wordpress.net URL in your browser to launch a new site on the spot. Or, you can launch an instance from the command line. Perhaps you prefer to use the official Chrome extension instead. Whatever the case, let’s look at those options.

1. Using The WordPress Playground URL

This is the most straightforward way to get a WordPress Playground instance up and running. That’s because all you do is visit the playground.wordpress.net address in the browser, and a WordPress site is created immediately.

This is exactly how the WordPress Playground demo works, prompting you to click a button to open a new WordPress site. In fact, try clicking the following button to create one now.

Create A WordPress Site

If you want to use a specific version of WordPress and PHP in your Playground, all it takes is adding a couple of parameters to the URL. For example, we can instruct Playground to run WordPress 6.2 on PHP 8.2 with the following URL:

https://playground.wordpress.net/?php=8.2&wp=6.2

You can even try out the developmental versions of WordPress using Playground by using the following parameter:

https://playground.wordpress.net/?wp=beta

2. Using The GitHub Repository

True to the WordPress ethos, WordPress Playground is very much an open-source project. The repo is available over at GitHub, and we can pull it into a local environment and use WordPress Playground right from a terminal.

First, let’s clone the repository from the command line:

git clone https://github.com/WordPress/wordpress-playground.git

There is a slightly faster alternative that fetches just the latest revision:

git clone -b trunk --single-branch --depth 1 git@github.com:WordPress/wordpress-playground.git

Now that we have the WordPress Playground package in our local environment, we can formally install it:

cd wordpress-playground
npm install
npm run dev

Once the local server is running, we should get a URL from the terminal that we can use to access the new Playground instance, likely pointed to http://localhost:5400/website-server/.

We are also able to set which versions of WordPress and PHP to use in the virtual environment by adding a couple of instructions to the command. For example, this command triggers a new WordPress 5.9 instance running on PHP 7.4:

wp-now start --wp=5.9 --php=7.4

3. Using wp-now In The Command Line

An even quicker way to get Playground running from the command line is to globally install the wp-now CLI tool:

npm install -g @wp-now/wp-now

This way, we can create a new Playground instance anytime you want with a single command:

wp-now start

Be sure that you’re using Node 18 or higher. Otherwise, you’re likely to bump into some errors. Once the command executes, however, the browser will automatically open a new tab pointing to the new instance. You’re already signed into WordPress and everything!

We can configure the environment just as we could with the npm package:

wp-now start --wp=5.9 --php=7.4

A neat thing about this method is that there are several different “modes” you can run this in, and which one you use depends on the directory you’re in when running the command. For example, if you run the command from a directory that already contains WordPress, then Playground will automatically recognize that and run the directory as a full WordPress installation. Or, it’s possible to execute the command from a directory that contains nothing but an index.php file, and Playground will start the server and run requests through that file.
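For instance, a bare-bones index.php along these lines (a hedged sketch, not taken from the Playground docs) is enough for wp-now to serve something in that mode:

<?php
// index.php: in this mode, wp-now routes every request through this single file
echo 'Hello from WordPress Playground!';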

There are other options, including modes for theme, plugin, wp-content, and wordpress-develop, that are worth checking out in the documentation.

4. Using The Visual Studio Code Extension

WordPress Playground is also available as a Visual Studio Code extension. It provides a nice one-click process to launch a local WordPress site.

Installing the extension adds a WordPress icon to the sidebar menu that, when clicked, opens a panel for launching a new WordPress Playground site.

Open a project folder, click the “Start WordPress Server,” and the Playground extension boots up a new site on the spot. The extension also provides server details, including the local URL, the mode it’s in, and settings to change which versions of WordPress and PHP are in use.

One thing I noticed while poking at the instance is that it automatically installs and activates the SQLite Database Integration plugin. Obviously, that’s a required component for things to work, but I thought it was worth pointing out that the installation does indeed include at least one pre-installed plugin right out of the gate.

5. Using A Chrome Extension To Preview Themes & Plugins

Have you ever found yourself perusing the WordPress Theme Directory and wanting to take a particular theme out for a test drive? There’s already a “Preview” button baked right into the directory to do exactly that.

That’s nice, as it opens up the theme in a frame that looks a lot like the classic WordPress Customizer.

But how cool would it be to really open up the theme and see what it is like to do actual tasks with it in the WordPress admin, such as creating a post, editing a page, or exploring its block patterns?

That is what the “Open in WordPress Playground” extension for Chrome can do. It literally adds a button to “Preview” a theme in a fresh WordPress Playground instance that, when clicked, allows you to interact with the theme in a real WordPress environment.

I tried out the extension, and it worked as described, and not only that, but it works with the WordPress Plugin Directory as well. In other words, it’s now possible to try a new plugin on the spot without having to install, activate, and test it yourself in some sandbox or, worse, your live or staging WordPress environments.

This is a potential game-changer as far as lowering the barrier to entry for using WordPress and for theme and plugin developers offering a convenient way to provide users with a demo experience. I can easily imagine a future where paid commercial plugins adopt a similar user experience to help reduce refunds from customers merely wanting to try a plugin before formally committing to it.

The extension is available free of charge in the Chrome Web Store, but you can check out the source code in its GitHub repository as well. While we’re on it, it’s worth noting that this is a third-party extension rather than an official WordPress or Automattic release.

The Default Playground Site

No matter which Playground method you use, the instances that spin up are nearly identical. For example, all of the methods we covered have the WordPress Twenty Twenty-Three theme installed and activated by default. That makes a lot of sense: a standard WordPress installation does the same.

Similarly, all of the instances we covered make use of the SQLite Database Integration plugin developed by the WordPress Performance Team. This also makes sense: we need the plugin to establish a database. It also sounds like from the plugin description that the intent is to eventually integrate the plugin into WordPress Core, so perhaps we’ll eventually see zero plugins in a default Playground instance at some point.

There are a few differences between instances. They’re not massive, but worth calling out so you know what you are activating or have available when using a particular method to create a WordPress instance. The following table breaks down the current components included in each method at the time of this writing:

Method: WordPress Playground website
  • WordPress version: 6.3.2
  • PHP version: 8.0
  • Themes: Twenty Twenty-Three (active)
  • Plugins: SQLite Database Integration (active)

Method: GitHub repo
  • WordPress version: 6.3.2
  • PHP version: 8.0
  • Themes: Twenty Twenty-Three (active)
  • Plugins: SQLite Database Integration (active)

Method: wp-now package
  • WordPress version: 6.3.2
  • PHP version: 8.0.10-dev
  • Themes: Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One
  • Plugins: Akismet, Hello Dolly, SQLite Database Integration (active)

Method: VS Code extension
  • WordPress version: 6.3.2
  • PHP version: 7.4
  • Themes: Twenty Twenty-Three (active), Twenty Twenty-Two, Twenty Twenty-One
  • Plugins: Akismet, Hello Dolly, SQLite Database Integration (active)

Method: Chrome extension
  • WordPress version: 6.3.2
  • PHP version: 8.0
  • Themes: Twenty Twenty-Three (active)
  • Plugins: SQLite Database Integration (active)

And, of course, any other differences would come from how you configure an instance. For example, if you run the wp-now package on the command line when you’re in a directory with WordPress and several themes and plugins installed, then those themes and plugins will be available to activate and use. Similarly, using the Chrome Extension on any WordPress Theme Directory page or Plugin Directory page will install that particular theme or plugin.

Installing Themes, Plugins, and Block Patterns

In a standard WordPress installation, you might log into the WordPress admin, navigate to Appearance → Themes, and install a new theme straight from the WordPress Theme Directory. That’s because your site has a web connection and is able to pull things in from WordPress.org. Since a WordPress Playground instance from the WordPress Playground website (which is essentially the same as the Chrome extension) is not technically connected to the internet, there is no way to install plugins and themes to it.

If you want the same sort of point-and-click experience in your Playground site that you would get in a standard WordPress installation, then go with the GitHub repo, the wp-now package, or the VS Code extension. Each of these is indeed connected to the internet and is able to install themes and plugins directly from the WordPress admin.

You may notice a note about using the Query API to install a theme or plugin to a WordPress Playground instance that is disconnected from the web:

“Playground does not yet support connecting to the themes directory yet. You can still upload a theme or install it using the Query API (e.g. ?theme=pendant).”

That’s right! We’re still able to load in whatever theme we want by passing the theme’s slug into the Playground URL used to generate the site. For example,

https://playground.wordpress.net/?theme=ollie

The same goes for plugins:

https://playground.wordpress.net/?plugin=jetpack

And if we want to bundle multiple plugins, we can pass in each plugin as a separate parameter, chained with an ampersand (&) in the URL:
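A hedged example, reusing the plugin slugs from earlier and assuming the Query API accepts repeated plugin parameters:

https://playground.wordpress.net/?plugin=gutenberg&plugin=jetpack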

It does not appear that we can do the same thing with themes. If you’re testing several themes in a single instance, then it’s probably best to use the wp-now package or the VS Code extension when pointing at a directory that already includes those themes.

What about block patterns, you ask? We only get two pre-defined patterns in a default WordPress Playground instance created on Playground’s site: Posts and Call to Action.

That’s because block patterns, too, are served to the WordPress admin from an internet connection. We get a much wider selection of options when creating an instance using any of the methods that establish a local host connection.

There appears to be no way, unfortunately, to import patterns with the Query API like we can for themes and plugins. The best way to bring in a new pattern, it seems, is to either bundle them in the theme you are using (or pointing to) or manually navigate to the Block Pattern Directory and use the “Copy” option to paste a pattern into the page or post you are testing in Playground.

Importing & Exporting Playgrounds

The transience of a WordPress Playground instance is its appeal. The site practically evaporates into thin air with the trigger of a page refresh. But what if you actually want to preserve an instance? Perhaps you need to come back to your work later. Or maybe you’re working on a visual tweak and want to demo it for your team. Playground instances can indeed be exported and even imported into other instances.

Open up a new WordPress site over at playground.wordpress.net and locate the Upload and Download icons at the top-right corner of the frame.

No worries, this is not a step-by-step tutorial on how to click buttons. The only thing you really need to know is that these buttons are only available in instances created at the WordPress Playground site or when using the Chrome Extension to preview themes and plugins at WordPress.org.

What’s more interesting is what we get when exporting an instance. We get a ZIP file — wordpress-playground.zip to be exact — as you might expect. Extract that, and what we have is the entire website, including the full WordPress installation. It resembles any other standard WordPress project with a wp-content directory that contains the source files for the installed themes and plugins, as well as media library uploads.

The only difference I could spot between this WordPress Playground package and a standard project is that Playground provides the SQLite database in the export, also conveniently located in the wp-content directory.

This is a complete WordPress project. Now that we have it and have confirmed it has everything we would expect a WordPress site to have, we can use Playground’s importing feature to replicate the exported site in a brand-new WordPress Playground instance. Click the Upload icon in the frame of the new instance, then follow the prompts to upload the ZIP file we downloaded from the original instance.

You can probably guess what comes next. If we can export a complete WordPress site with Playground, we can not only import that site into a new Playground instance but import it to a hosting provider as well.

In other words, it’s possible to use Playground as a testing ground for development and then ship it to a production or staging environment when ready. Similarly, the exported files can be committed to a GitHub repo where your production files are, and that triggers a fresh build in production. However you choose to roll!

Sharing Playgrounds

There are clear benefits to being able to import and export Playground sites. WordPress has never been a more portable system. You know that if you’ve ever migrated WordPress sites and data. But when WordPress is able to move around as freely as it does with Playground, it opens up new possibilities for how we share work.

Sharing With The Query API

We’ve been using the Query API in many examples. It’s extremely convenient in that you append parameters on the WordPress Playground site, hit the URL, and a site spins up with everything specified.

The WordPress Playground site is hosted, so sharing a specific configuration of a Playground site only requires you to share a URL with the site’s configurations appended as parameters. For example, this link shares the Blue Note theme configured with the Gutenberg plugin:
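
Assuming the theme’s slug on WordPress.org is blue-note, the URL would look something like this:

https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg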

We can do a little more than that, like link directly to the post editor:
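
The Query API’s url parameter sets the path the instance opens on, so a sketch of that link might be:

https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg&url=/wp-admin/post-new.php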

Even better, let’s link someone to the theme’s templates in the Site Editor:
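
Something along these lines should do it, pointing the url parameter at the Site Editor:

https://playground.wordpress.net/?theme=blue-note&plugin=gutenberg&url=/wp-admin/site-editor.php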

Again, there are plenty more parameters than what we have explored in this article that are worth checking out in the WordPress Playground documentation.

Sharing With An Embedded iFrame

We already know this is possible because the best example of it is the WordPress Playground developer page. There’s a Playground instance running and embedded directly on the page. Even when you spin up a new Playground instance, you’re effectively running an iframe within an iframe.

Let’s say we want to embed a WordPress site configured with the Pendant theme and the Gutenberg plugin:

<iframe width="800" height="650" src="https://playground.wordpress.net/?plugin=gutenberg&theme=pendant&mode=seamless" allowfullscreen></iframe>

So, really, what we’re doing is using the source URL in a different context. We can share the URL with someone, and they get to access the configured site in a browser. In this case, however, we are dropping the URL into an iframe element in HTML, and the Playground instance renders on the page.

Not to get too meta, but it’s pretty neat that we can log into a WordPress production site, create a new page, and embed a Playground instance on the page with the Custom HTML Block:

What I like about sharing Playground sites this way is that the instance is effectively preserved and always accessible. Sure, the data will not persist on a page refresh, but create the URL once, and you always have a copy of it previewed on another page that you host.

Speaking of which, WordPress Playground can be self-hosted. You have to imagine that the current Playground API hosted at playground.wordpress.net will get overburdened with time, assuming that Playground catches on with the community. If their server is overworked, I expect that the hosted API will either go away (breaking existing instances) or at least be locked for creating new instances.

That’s why self-hosting WordPress Playground might be a good idea in the long run. I can see WordPress developers and agencies reaching for this to provide customers and clients with demo work. There’s so much potential and nuance to self-hosting Playground that it might even be worth its own article.

The documentation provides a list of parameters that can be used in the Playground URL.

Sharing With JSON Blueprints

This “modern” era of WordPress is all about block-based layouts that lean more heavily into JavaScript, where PHP has typically been the top boss. And with this transition, we gained the ability to create entire WordPress themes without ever opening a template file, thanks to the introduction of theme.json.

Playground can also be configured with structured data. In fact, you can see the Playground website’s JSON configurations via this link. It’s pretty incredible that we can both configure a Playground site without writing code and share the file with others to sync environments.

Here is an example pulled directly from the Playground docs:

{
  "$schema": "https://playground.wordpress.net/blueprint-schema.json",
  "landingPage": "/wp-admin/",
  "preferredVersions": {
    "php": "8.0",
    "wp": "latest"
  },
  "steps": [
    {
      "step": "login",
      "username": "admin",
      "password": "password"
    }
  ]
}

We totally can send this file to someone to clone a site we’re working on. Or, we can use the file in a self-hosted context, and others can pull it into their own blueprint.

Interestingly, we can even ditch the blueprint file altogether and write the structured data as URL fragments instead:
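
For instance, the blueprint above could be appended to the Playground URL after a hash, something like this:

https://playground.wordpress.net/#{"landingPage":"/wp-admin/","preferredVersions":{"php":"8.0","wp":"latest"},"steps":[{"step":"login","username":"admin","password":"password"}]}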

That might get untenable really fast, but it is nice that the WordPress Playground team is thinking about all of the possible ways we might want to port WordPress.

Advanced Playground Configurations

Up to now, we’ve looked at a variety of ways to configure WordPress Playground using APIs that are provided by or based on playground.wordpress.net. It’s fast, convenient, and pretty darn flexible for something so new and experimental.

But let’s say you need full control to configure a Playground instance. I mean everything, from which themes and plugins are preinstalled to prepublished pages and posts, defining php.ini memory limits, you name it. The JavaScript API is what you’ll need because it is capable of executing PHP code, making requests, managing files and directories, and configuring parts of WordPress in ways that none of the other approaches offer.

The JavaScript API is integrated into an iframe and uses the @wp-playground/client npm package. The Playground docs provide the following example in their “Quick Start” guide.

<iframe id="wp" style="width: 100%; height: 300px; border: 1px solid #000;"></iframe>

<script type="module">
  // Use unpkg for convenience
  import { startPlaygroundWeb } from 'https://unpkg.com/@wp-playground/client/index.js';

  const client = await startPlaygroundWeb({
    iframe: document.getElementById('wp'),
    remoteUrl: 'https://playground.wordpress.net/remote.html',
  });
  // Let's wait until Playground is fully loaded
  await client.isReady();
</script>

This is an overly simplistic example that demonstrates how the JavaScript API is embedded in a page in an iframe. The Playground docs provide a better example of how PHP is used within JavaScript to do things, like execute a file pointed at a specific path:

php.writeFile(
  "/www/index.php",
  `<?php echo "Hello world!";`
);
const result = await php.run({
  scriptPath: "/www/index.php"
});
// result.text === "Hello world!"

Adam Zieliński and Thomas Nattestad offer a nicely commented example with multiple tasks in the article they published over at web.dev:

import {
  connectPlayground,
  login,
  installPlugin,
} from '@wp-playground/client';

const client = await connectPlayground(
  document.getElementById('wp'), // An iframe
  { loadRemote: 'https://playground.wordpress.net/remote.html' },
);
await client.isReady();

// Login the user as admin and go to the post editor:
await login(client, 'admin', 'password');
await client.goTo('/wp-admin/post-new.php');

// Run arbitrary PHP code:
await client.run({ code: '<?php echo "Hi!"; ?>' });

// Install a plugin (fetchZipFile() is a stand-in for your own helper that returns a zip file):
const plugin = await fetchZipFile();
await installPlugin(client, plugin);

Once again, the scope and breadth of using the JavaScript API for advanced configurations is yet another topic that might warrant its own article.

Wrapping Up

WordPress Playground is an excellent new platform that’s an ideal testing environment for WordPress themes, plugins… or even WordPress itself. Despite the fact that it is still in its early days, Playground is already capable of some pretty incredible stuff that makes WordPress more portable than ever.

We looked at lots of ways that Playground accomplishes this. Just want to check out a new theme? Use the playground.wordpress.net URL configured with parameters supported by the Query API, or grab the Chrome extension. Need to do a quick test of your theme in a different PHP environment? Use the wp-now package to spin up a test site locally. Want to let others demo a plugin you made? Embed Playground in an iframe on your site.

WordPress Playground is an evolving space, so keep your eye on it. You can participate in the discussion and request a feature through a pull request or report an issue that you encounter in your testing. In the meantime, you may want to be aware of what the WordPress Playground team has identified as known limitations of the service:

  • No access to plugins and theme directories in the browser.
    The theme and plugin directories are not accessible because Playground instances are virtual environments that are not connected to the internet.
  • Instances are destroyed on a browser refresh.
    Because WordPress Playground uses a browser-based temporary database, all changes and uploads are lost after a browser refresh. If you want to preserve your changes, though, use the export feature to download a zipped archive of the instance. Meanwhile, this is something the team is working on.
  • iFrame issues with anchor links.
    Clicking a link in a Playground instance that is embedded on a page in an iframe may trigger the main page to refresh, causing the instance to reset.
  • iFrame rendering issues.
    There are reports where setting the iframe’s src attribute to a blobbed URL instead of an HTTP URL breaks links to assets, including CSS and images.

How will you use WordPress Playground? WordPress Playground creator Adam Zieliński recently shipped a service that uses Playground to preview pull requests in GitHub. We all know that WordPress has never put a strong emphasis on developer experience (DX) the same way other technical stacks do, like static site generators and headless configurations. But this is exactly the sort of way that I imagine Playground improving DX to make developing for WordPress easier and, yes, fun.

Gatsby Headaches: Working With Media (Part 1)

Working with media files in Gatsby might not be as straightforward as expected. I remember starting my first Gatsby project. After consulting Gatsby’s documentation, I discovered I needed to use the gatsby-source-filesystem plugin to make queries for local files. Easy enough!

That’s where things started getting complicated. Need to use images? Check the docs and install one — or more! — of the many, many plugins available for handling images. How about working with SVG files? There is another plugin for that. Video files? You get the idea.

It’s all great until any of those plugins or packages become outdated and go unmaintained. That’s where the headaches start.

If you are unfamiliar with Gatsby, it’s a React-based static site generator that uses GraphQL to pull structured data from various sources and uses webpack to bundle a project so it can then be deployed and served as static files. It’s essentially a static site generator with reactivity that can pull data from a vast array of sources.

Like many static site frameworks in the Jamstack, Gatsby has traditionally enjoyed a great reputation as a performant framework, although it has taken a hit in recent years. Based on what I’ve seen, however, it’s not so much that the framework is fast or slow but how the framework is configured to handle many of the sorts of things that impact performance, including media files.

So, let’s solve the headaches you might encounter when working with media files in a Gatsby project. This article is the first of a brief two-part series where we will look specifically at the media you are most likely to use: images, video, and audio. After that, the second part of this series will get into different types of files, including Markdown, PDFs, and even 3D models.

Solving Image Headaches In Gatsby

I think that the process of optimizing images can fall into four different buckets:

  1. Optimize image files.
    Minimizing an image’s file size without losing quality directly leads to shorter fetching times. This can be done manually or during a build process. It’s also possible to use a service, like Cloudinary, to handle the work on demand.
  2. Prioritize images that are part of the First Contentful Paint (FCP).
    FCP is a metric that measures the time from when a page starts loading to when the first piece of content is rendered on screen. The idea is that fetching the assets involved in that initial render earlier results in faster loading, rather than leaving them to wait behind other assets lower in the chain.
  3. Lazy load other images.
    We can prevent the remaining images from render-blocking other assets by using the loading="lazy" attribute on them.
  4. Load the right image file for the right context.
    With responsive images, we can serve one version of an image file at one screen size and serve another image at a different screen size with the srcset and sizes attributes or with the <picture> element.

These are great principles for any website, not only those built with Gatsby. But how we build them into a Gatsby-powered site can be confusing, which is why I’m writing this article and perhaps why you’re reading it.

Lazy Loading Images In Gatsby

We can apply an image to a React component in a Gatsby site like this:

import * as React from "react";

import forest from "./assets/images/forest.jpg";

const ImageHTML = () => {
  return <img src={ forest } alt="Forest trail" />;
};

It’s important to import the image as a JavaScript module. This lets webpack know to bundle the image and generate a path to its location in the public folder.

This works fine, but when are we ever working with only one image? What if we want to make an image gallery that contains 100 images? If we try to load that many <img> tags at once, they will certainly slow things down and could affect the FCP. That’s where the third principle that uses the loading="lazy" attribute can come into play.

import * as React from "react";

import forest from "./assets/images/forest.jpg";

const LazyImageHTML = () => {
  return <img src={ forest } loading="lazy" alt="Forest trail" />;
};

We can do the opposite with loading="eager". It instructs the browser to load the image as soon as possible, regardless of whether it is onscreen or not.

import * as React from "react";

import forest from "./assets/images/forest.jpg";

const EagerImageHTML = () => {
  return <img src={ forest } loading="eager" alt="Forest trail" />;
};

Implementing Responsive Images In Gatsby

This is a basic example of the HTML for responsive images:

<img
  srcset="./assets/images/forest-400.jpg 400w, ./assets/images/forest-800.jpg 800w"
  sizes="(max-width: 500px) 400px, 800px"
  alt="Forest trail"
/>

In Gatsby, we must import the images first and pass them to the srcset attribute as template literals so webpack can bundle them:

import * as React from "react";

import forest800 from "./assets/images/forest-800.jpg";

import forest400 from "./assets/images/forest-400.jpg";

const ResponsiveImageHTML = () => {
  return (
    <img
      srcSet={`
        ${ forest400 } 400w,
        ${ forest800 } 800w
      `}
      sizes="(max-width: 500px) 400px, 800px"
      alt="Forest trail"
    />
  );
};

That should take care of any responsive image headaches in the future.

Loading Background Images In Gatsby

What about pulling in the URL for an image file to use with the CSS background-image property? That looks something like this:

import * as React from "react";

import "./style.css";

const ImageBackground = () => {
  return <div className="banner"></div>;
};

/* style.css */

.banner {
  aspect-ratio: 16/9;
  background-size: cover;
  background-image: url("./assets/images/forest-800.jpg");

  /* etc. */
}

This is straightforward, but there is still room for optimization! For example, we can do the CSS version of responsive images, which loads the version we want at specific breakpoints.

/* style.css */

@media (max-width: 500px) {
  .banner {
    background-image: url("./assets/images/forest-400.jpg");
  }
}

Using The gatsby-source-filesystem Plugin

Before going any further, I think it is worth installing the gatsby-source-filesystem plugin. It’s an essential part of any Gatsby project because it allows us to query data from various directories in the local filesystem, making it simpler to fetch assets, like a folder of optimized images.

npm i gatsby-source-filesystem

We can add it to our gatsby-config.js file and specify the directory from which we will query our media assets:

// gatsby-config.js

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `assets`,
        path: `${ __dirname }/src/assets`,
      },
    },
  ],
};

Remember to restart your development server to see changes from the gatsby-config.js file.

Now that we have gatsby-source-filesystem installed, we can continue solving a few other image-related headaches. For example, the next plugin we look at is capable of simplifying the cures we used for lazy loading and responsive images.

Using The gatsby-plugin-image Plugin

The gatsby-plugin-image plugin (not to be confused with the outdated gatsby-image plugin) uses techniques that automatically handle various aspects of image optimization, such as lazy loading, responsive sizing, and even generating optimized image formats for modern browsers.

Once installed, we can replace standard <img> tags with either the <GatsbyImage> or <StaticImage> components, depending on the use case. These components take advantage of the plugin’s features and use the <picture> HTML element to ensure the most appropriate image is served to each user based on their device and network conditions.

We can start by installing gatsby-plugin-image and the other plugins it depends on:

npm install gatsby-plugin-image gatsby-plugin-sharp gatsby-transformer-sharp

Let’s add them to the gatsby-config.js file:

// gatsby-config.js

module.exports = {
  plugins: [
    // other plugins
    `gatsby-plugin-image`,
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
  ],
};

This provides us with some features we will put to use a bit later.

Using The StaticImage Component

The StaticImage component serves images that don’t require dynamic sourcing or complex transformations. It’s particularly useful for scenarios where you have a fixed image source that doesn’t change based on user interactions or content updates, like logos, icons, or other static images that remain consistent.

The main attributes we will take into consideration are:

  • src: This attribute is required and should be set to the path of the image you want to display.
  • alt: Provides alternative text for the image.
  • placeholder: This attribute can be set to either blurred or dominantColor to define the type of placeholder to display while the image is loading.
  • layout: This defines how the image should be displayed. It can be set to fixed for, as you might imagine, images with a fixed size, fullWidth for images that span the entire container, and constrained for images scaled down to fit their container.
  • loading: This determines when the image should start loading while also supporting the eager and lazy options.

Using StaticImage is similar to using a regular HTML <img> tag. However, StaticImage requires passing the string directly to the src attribute so it can be bundled by webpack.

import * as React from "react";

import { StaticImage } from "gatsby-plugin-image";

const ImageStaticGatsby = () => {
  return (
    <StaticImage
      src="./assets/images/forest.jpg"
      placeholder="blurred"
      layout="constrained"
      alt="Forest trail"
      loading="lazy"
    />
  );
};

The StaticImage component is great, but you have to take its constraints into account:

  • No Dynamically Loading URLs
    One of the most significant limitations is that the StaticImage component doesn’t support dynamically loading images based on URLs fetched from data sources or APIs.
  • Compile-Time Image Handling
    The StaticImage component’s image handling occurs at compile time. This means that the images you specify are processed and optimized when the Gatsby site is built. Consequently, if you have images that need to change frequently based on user interactions or updates, the static nature of this component might not fit your needs.
  • Limited Transformation Options
    Unlike the more versatile GatsbyImage component, the StaticImage component provides fewer transformation options, e.g., there is no way to apply complex transformations like cropping, resizing, or adjusting image quality directly within the component. You may want to consider alternative solutions if you require advanced transformations.

Using The GatsbyImage Component

The GatsbyImage component is a more versatile solution that addresses the limitations of the StaticImage component. It’s particularly useful for scenarios involving dynamic image loading, complex transformations, and advanced customization.

Some ideal use cases where GatsbyImage is particularly useful include:

  • Dynamic Image Loading
    If you need to load images dynamically based on data from APIs, content management systems, or other sources, the GatsbyImage component is the go-to choice. It can fetch images and optimize their loading behavior.
  • Complex transformations
    The GatsbyImage component is well-suited for advanced transformations, using GraphQL queries to apply them.
  • Responsive images
    For responsive design, the GatsbyImage component excels by automatically generating multiple sizes and formats of an image, ensuring that users receive an appropriate image based on their device and network conditions.

Unlike the StaticImage component, which uses a src attribute, GatsbyImage has an image attribute that takes a gatsbyImageData object. gatsbyImageData contains the image information and can be queried from GraphQL using the following query.

query {
  file(name: { eq: "forest" }) {
    childImageSharp {
      gatsbyImageData(width: 800, placeholder: BLURRED, layout: CONSTRAINED)
    }
    name
  }
}

If you’re following along, you can look around your Gatsby data layer at http://localhost:8000/___graphql.

From here, we can use the useStaticQuery hook and the graphql tag to fetch data from the data layer:

import * as React from "react";

import { useStaticQuery, graphql } from "gatsby";

import { GatsbyImage, getImage } from "gatsby-plugin-image";

const ImageGatsby = () => {
  // Query data here:

  const data = useStaticQuery(graphql``);

  return <div></div>;
};

Next, we can write the GraphQL query inside of the graphql tag:

import * as React from "react";

import { useStaticQuery, graphql } from "gatsby";

const ImageGatsby = () => {
  const data = useStaticQuery(graphql`
    query {
      file(name: { eq: "forest" }) {
        childImageSharp {
          gatsbyImageData(width: 800, placeholder: BLURRED, layout: CONSTRAINED)
        }
        name
      }
    }
  `);

  return <div></div>;
};

Next, we import the GatsbyImage component from gatsby-plugin-image and assign the image’s gatsbyImageData property to the image attribute:

import * as React from "react";

import { useStaticQuery, graphql } from "gatsby";

import { GatsbyImage } from "gatsby-plugin-image";

const ImageGatsby = () => {
  const data = useStaticQuery(graphql`
    query {
      file(name: { eq: "forest" }) {
        childImageSharp {
          gatsbyImageData(width: 800, placeholder: BLURRED, layout: CONSTRAINED)
        }
        name
      }
    }
  `);

  return <GatsbyImage image={ data.file.childImageSharp.gatsbyImageData } alt={ data.file.name } />;
};

Now, we can use the getImage helper function to make the code easier to read. When given a File object, the function returns the file.childImageSharp.gatsbyImageData property, which can be passed directly to the GatsbyImage component.

import * as React from "react";

import { useStaticQuery, graphql } from "gatsby";

import { GatsbyImage, getImage } from "gatsby-plugin-image";

const ImageGatsby = () => {
  const data = useStaticQuery(graphql`
    query {
      file(name: { eq: "forest" }) {
        childImageSharp {
          gatsbyImageData(width: 800, placeholder: BLURRED, layout: CONSTRAINED)
        }
        name
      }
    }
  `);

  const image = getImage(data.file);

  return <GatsbyImage image={ image } alt={ data.file.name } />;
};

Using The gatsby-background-image Plugin

Another plugin we could use to take advantage of Gatsby’s image optimization capabilities is the gatsby-background-image plugin. However, I do not recommend using this plugin since it is outdated and prone to compatibility issues. Instead, Gatsby suggests using gatsby-plugin-image when working with Gatsby version 3 and above.

If compatibility isn’t a significant concern for your project, you can refer to the plugin’s documentation for specific instructions and use it in place of the CSS background-image approach I described earlier.

Solving Video And Audio Headaches In Gatsby

Working with videos and audio can be a bit of a mess in Gatsby since it lacks plugins for sourcing and optimizing these types of files. In fact, Gatsby’s documentation doesn’t name or recommend any official plugins we can turn to.

That means we will have to use vanilla methods for videos and audio in Gatsby.

Using The HTML video Element

The HTML video element is capable of serving different versions of the same video using the <source> tag, much like the img element uses the srcset attribute to do the same for responsive images.

That allows us to not only serve a more performant video format but also to provide a fallback video for older browsers that may not support the bleeding edge:

import * as React from "react";

import natureMP4 from "./assets/videos/nature.mp4";

import natureWEBM from "./assets/videos/nature.webm";

const VideoHTML = () => {
  return (
    <video controls>
      <source src={ natureMP4 } type="video/mp4" />

      <source src={ natureWEBM } type="video/webm" />
    </video>
  );
};

We can also apply lazy loading to videos like we do for images. While videos do not support the loading="lazy" attribute, there is a preload attribute that is similar in nature. When set to none, the attribute instructs the browser to load a video and its metadata only when the user interacts with it. In other words, it’s lazy-loaded until the user taps or clicks the video.

We can also set the attribute to metadata if we want the video’s details, such as its duration and file size, fetched right away.

<video controls preload="none">
  <source src={ natureMP4 } type="video/mp4" />

  <source src={ natureWEBM } type="video/webm" />
</video>

Note: I personally do not recommend using the autoplay attribute since it is disruptive and disregards the preload attribute, causing the video to load right away.

And, like images, we can display a placeholder image for the video while it loads by pointing the poster attribute to an image file.

<video controls preload="none" poster={ forest }>
  <source src={ natureMP4 } type="video/mp4" />

  <source src={ natureWEBM } type="video/webm" />
</video>

Using The HTML audio Element

The audio and video elements behave similarly, so adding an audio element in Gatsby looks nearly identical, aside from the element:

import * as React from "react";

import audioSampleMP3 from "./assets/audio/sample.mp3";

import audioSampleWAV from "./assets/audio/sample.wav";

const AudioHTML = () => {
  return (
    <audio controls>
      <source src={ audioSampleMP3 } type="audio/mp3" />

      <source src={ audioSampleWAV } type="audio/wav" />
    </audio>
  );
};

As you might expect, the audio element also supports the preload attribute:

<audio controls preload="none">
  <source src={ audioSampleMP3 } type="audio/mp3" />

  <source src={ audioSampleWAV } type="audio/wav" />
</audio>

This is probably as good as we can do to use video and audio in Gatsby with performance in mind, aside from saving and compressing the files as best we can before serving them.

Solving iFrame Headaches In Gatsby

Speaking of video, what about ones embedded in an <iframe>, like we might do with a video from YouTube, Vimeo, or some other third party? Those can certainly lead to performance headaches, but it’s not as if we have direct control over the video file and where it is served from.

Not all is lost because the HTML iframe element supports lazy loading the same way that images do.

import * as React from "react";

const VideoIframe = () => {
  return (
    <iframe
      src="https://www.youtube.com/embed/jNQXAC9IVRw"
      title="Me at the Zoo"
      allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
      allowFullScreen
      loading="lazy"
    />
  );
};

Embedding a third-party video player via an iframe can be an easier path than using the HTML video element. iframe elements are cross-platform compatible and can reduce hosting demands if you are working with heavy video files on your own server.

That said, an iframe is essentially a sandbox serving a page from an outside source. They’re not weightless, and we have no control over the code they contain. There are also GDPR considerations when it comes to services (such as YouTube) due to cookies, data privacy, and third-party ads.

Solving SVG Headaches In Gatsby

SVGs contribute to improved page performance in several ways. Their vector nature results in a much smaller file size compared to raster images, and they can be scaled up without compromising quality. And SVGs can be compressed with GZIP, further reducing file sizes.

That said, there are several ways that we can use SVG files. Let’s tackle each one in the context of Gatsby.

Using Inline SVG

SVGs are essentially lines of code that describe shapes and paths, making them lightweight and highly customizable. Due to their XML-based structure, SVG images can be directly embedded within the HTML <svg> tag.

import * as React from "react";

const SVGInline = () => {
  return (
    <svg viewBox="0 0 24 24" fill="#000000">
      {/* etc. */}
    </svg>
  );
};

Just remember to change certain SVG attributes, such as xmlns:xlink or xlink:href, to JSX attribute spelling, like xmlnsXlink and xlinkHref, respectively.
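
For instance, a hypothetical inline SVG that references a symbol might look like this in JSX (the icon id is made up purely for illustration):

import * as React from "react";

const SVGWithNamespaces = () => {
  return (
    // xmlns:xlink becomes xmlnsXlink, and xlink:href becomes xlinkHref
    <svg xmlns="http://www.w3.org/2000/svg" xmlnsXlink="http://www.w3.org/1999/xlink" viewBox="0 0 24 24">
      <use xlinkHref="#icon-camera" />
    </svg>
  );
};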

Using SVG In img Elements

An SVG file can be passed into an img element's src attribute like any other image file.

import * as React from "react";

import picture from "./assets/svg/picture.svg";

const SVGinImg = () => {
  return <img src={ picture } alt="Picture" />;
};

Loading SVGs inline or passing them to img elements are the de facto approaches, but there are React and Gatsby plugins capable of simplifying the process, so let’s look at those next.

Inlining SVG With The react-svg Plugin

react-svg provides an efficient way to render SVG images as React components by swapping a ReactSVG component in the DOM with an inline SVG.

Once the plugin is installed, import the ReactSVG component and assign the SVG file to the component’s src attribute:

import * as React from "react";

import { ReactSVG } from "react-svg";

import camera from "./assets/svg/camera.svg";

const SVGReact = () => {
  return <ReactSVG src={ camera } />;
};

Using The gatsby-plugin-react-svg Plugin

The gatsby-plugin-react-svg plugin adds svg-react-loader to your Gatsby project’s webpack configuration. The plugin adds a loader to support using SVG files as React components while bundling them as inline SVG.

Once the plugin is installed, add it to the gatsby-config.js file. From there, add a webpack rule inside the plugin configuration so that only SVG files whose names end with a certain suffix are loaded this way, making it easy to split inline SVGs from other assets:

// gatsby-config.js

module.exports = {
  plugins: [
    {
      resolve: "gatsby-plugin-react-svg",

      options: {
        rule: {
          include: /\.inline\.svg$/,
        },
      },
    },
  ],
};

Now we can import SVG files like any other React component:

import * as React from "react";

import Book from "./assets/svg/book.inline.svg";

const GatsbyPluginReactSVG = () => {
  return <Book />;
};

And just like that, we can use SVGs in our Gatsby pages in several different ways!

Conclusion

Even though I personally love Gatsby, working with media files has given me more than a few headaches.

As a final tip, when needing common features such as images or querying from your local filesystem, go ahead and install the necessary plugins. But when you need a minor feature, try doing it yourself with the methods that are already available to you!

If you have experienced different headaches when working with media in Gatsby or have circumvented them with different approaches than what I’ve covered, please share them! This is a big space, and it’s always helpful to see how others approach similar challenges.

Again, this article is the first of a brief two-part series on curing headaches when working with media files in a Gatsby project. The following article will be about avoiding headaches when working with different media files, including Markdown, PDFs, and 3D models.

What is 414 Request URI Too Long Error and How to Fix It

Have you ever encountered a 414 request URI too long error on your WordPress website?

The error usually appears when there is a communication problem between your web browser and the server. You may see it after clicking on a link or when an action performed by a WordPress plugin sends an overly long request.

In this article, we will show you what the ‘414 Request URI Too Long’ error is and how to fix it.

What is 414 request URI too long error and how to fix it

What is 414 Request URI Too Long Error?

A 414 request URI too long error occurs when a URL or an action you’re requesting is too long for the server to handle.

Do note that there is a difference between URI and URL. A URI or Uniform Resource Identifier can be a resource’s name, location, or both. On the other hand, a URL or Uniform Resource Locator can only be the location of a resource.

The two terms are often used interchangeably because URLs are a subset of URIs. However, the 414 error can be triggered by either, so let’s look at the causes.

What Causes 414 Request URI Too Long Error?

You might see the 414 error when you click on a link and the server is unable to process it because it’s too long.

One situation where a link might be very long is when using UTM (Urchin Tracking Module) parameters. If you’re using UTM codes to track conversions on your WordPress website and there are a lot of parameters in the URL, then it can cause this error.
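
For example, a campaign link loaded with tracking parameters (the values below are purely hypothetical) might look like this:

https://example.com/sale/?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale&utm_term=running-shoes&utm_content=header-banner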

Another situation that can cause a 414 error is a redirect loop. This is when a misconfiguration or a setting in a WordPress plugin causes a lot of redirect requests.

As a result, you get incredibly long URLs and a 414 Request URI Too Long error.

Similarly, some plugins can also generate lengthy URIs as part of their functionality. You’re most likely to encounter this error if you have all-in-one WordPress security plugins installed on your site.

In rare cases, a developer-side issue can also trigger a 414 error, such as when a POST request is converted into a GET request with a query string that is too long. Lastly, cyber attacks on your website server can also result in 414 URI too long issues.

That said, let’s see how you can fix the 414 error on your WordPress website.

Fixing 414 Request URI Too Long Error

A quick way to fix this issue is by increasing the size of the URI your website server can process.

Before we move forward, we recommend creating a WordPress backup. That’s because fixing the 414 error involves editing the website configuration files. In case anything goes wrong, you’ll have a backup copy of your site ready to restore.

For more details, please see our guide on how to backup a WordPress site.

Determine if Your Website is Using Apache or Nginx

First, you’ll need to find out the type of server your WordPress website is using. The two most common web servers are Apache and Nginx.

A simple way to do that is by opening your site in a browser. After that, you can right-click on the homepage and select the ‘Inspect’ option.

Open inspect element

Next, you’ll need to switch to the ‘Network’ tab at the top.

From here, you can select any element under the Name column. After that, you will need to scroll down to the ‘Response Headers’ section and see the ‘Server’ details.

View server type of your site

This will show you whether your site is using Nginx or Apache.

If you’re still unsure which server type your site uses, then you can reach out to your WordPress hosting provider for more details.

Once you’ve determined the server type, let’s look at how to fix the 414 request URI too long error for Apache and Nginx.

Fixing 414 Request URI Too Long Error in Nginx

First, you’ll need an FTP or file transfer protocol client to access website configuration files.

There are many FTP clients you can use. For this tutorial, we will use Filezilla. If you need help setting up FTP and accessing website files, then please see our guide on how to use FTP to upload files to WordPress.

Once you’re logged in, you’ll need to download the ‘nginx.conf’ file. You can access this by following this path: /etc/nginx/nginx.conf

Access Nginx file

After locating the file, go ahead and download it to your computer and then open it in a text editor.

From here, you can search for the large_client_header_buffers 4 8k setting. If it’s not there, then simply add it inside the http block of the file.

The directive takes two values: the number of buffers and their size. Simply increase the size from 8k to 128k. This raises the maximum URI size the server can handle, allowing it to process long URLs.
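
As a rough sketch, assuming the directive sits inside the http block of nginx.conf, the edited setting would look something like this:

http {
    # ...existing settings...

    # Allow larger request URIs: 4 buffers of 128k each
    large_client_header_buffers 4 128k;
}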

Increase URI size in Nginx

Once you’re done, simply save the file and reupload it to your website using the FTP client. You may also need to reload or restart Nginx for the change to take effect.

For more details, please see our guide on how to use FTP to upload files to WordPress.

Fixing 414 Request URI Too Long Error in Apache

If you’re using the Apache server type, then the process is similar to that of Nginx. First, you’ll need an FTP client to access website files.

Once you’re logged in, you’ll need to locate the ‘apache2.conf’ file. Simply head to the following path using the FTP client: /etc/apache2/apache2.conf

Access apache config files

Next, you’ll need to download the file and open it in a text editor.

After that, you can look for the LimitRequestLine setting. If you don’t see one, then simply add it to the end of the file.

If LimitRequestLine is already present and set to a value such as 128000, you can increase it to 256000 or higher to remove the 414 error.
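
A minimal sketch of the directive as it might appear in apache2.conf:

# Maximum allowed size (in bytes) of the HTTP request line
LimitRequestLine 256000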

Increase URI size in apache

Once you’re done, simply upload the file back to the website using the FTP client. As with Nginx, you may need to restart the server for the change to take effect. This should help resolve the 414 error on your WordPress website.

We hope this article helped you learn what the 414 Request URI Too Long error is and how to fix it. You may also want to see our guide on WordPress security and the most common WordPress errors.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

Cloud Broken Link Checker Repairs Broken Links Faster and Supercharges Your SEO

WPMU DEV’s free all-new and improved Broken Link Checker plugin saves you the time and tedious hassle of handling crucial link management tasks across all your WordPress sites.

“I have been looking to find an easier way to check for broken links. Thank you for making this tool so accessible.” Dena, WPMU DEV Member

Broken links are a negative indicator of site health and can have a major impact on your PageRank and your reputation. Staying on top of your site links is an essential and crucial aspect of good WordPress site management.

But… manually checking your content for broken links is time-consuming, tedious, and excruciating, especially if you manage multiple WordPress sites.

“This is potentially going to save a ton of time! Before now I’ve always done a manual check on all sites I create.” TNT Systems, WPMU DEV Member

This article shows you how to use our powerful link checking tool on unlimited WordPress sites – completely re-engineered with a top user-requested feature and a new API that works 20x faster, to deliver better and more accurate results, prevent negative SEO performance issues, and improve user experience.

We also include a comprehensive guide covering all you need to know about why link management is important and how to effectively manage broken links on all of your sites.

We’ll cover the following topics:

Let’s jump right in and take a look at the only tool you’ll ever need to check and repair broken links on unlimited WordPress sites.

WordPress Broken Link Checker (BLC) Plugin

Broken Link Checker by WPMU DEV
Broken Link Checker is now even better and faster at finding broken links on WordPress sites.

WPMU DEV acquired Broken Link Checker from ManageWP many years back and has since implemented many tweaks and fixes to improve its capabilities, growing its popularity to 700K downloads and its user satisfaction to 4/5 stars.

[NB: Special shoutout to Patrick Walker, Team Lead at WP Engine’s Customer Experience Operations Team for his hard work in collaboration with our engineers to get our plugin removed from WP Engine’s and Flywheel’s block list.]

While we plan to continue maintaining and improving the old plugin version for the thousands of users who are still currently using it, starting from versions 2.0 and onward, we’ve also introduced a new cloud-based link checking plugin for WordPress.

Note: We’ll focus the rest of this article on our Cloud Link Checker. For more information on using the old (Local) BLC plugin, visit the plugin download page on WordPress.org.

Why Two Different Link Checking Engines?

The old Broken Link Checker plugin (we now call this version Local Link Checker) is a great tool currently used and loved by thousands of WordPress users to keep their URLs healthy.

If you love it, feel free to keep using it. Keep in mind, however, that it depends on your site’s resources to run scans, which can be affected by your hosting plan’s available resources, and, depending on what plugins are installed on your site, could cause conflicts or WP/PHP errors.

Our latest innovation — a cloud-based plugin — takes things to a whole new level and opens the door to an entirely new scope of possibilities that we couldn’t achieve before by integrating the best of the Local BLC plugin with cloud capabilities directly into The Hub (our all-in-one WordPress platform), all at no additional cost to users.

For example, some of the benefits of the new cloud-based BLC include:

  • Scalability: Cloud Link Checker runs on WPMU DEV’s engines, not the individual site where the plugin is installed, so you can run broken link scans on sites of any size and server type.
  • Blazing Fast Scans: Being cloud-based means any dependencies on the performance of your server are eliminated, giving your scans a massive speed increase.
  • No Risk of Blocklisting: Pinging external sites repeatedly from your website raises suspicious flags with internet service providers and puts your site at risk of being blocklisted. Cloud BLC doesn’t use your site’s IP address, so there’s no risk of your site(s) being blocklisted.
  • Faster Updates and Instant Improvements: No more waiting for plugin version releases or worrying whether your site’s resources can handle the changes. We test, fix, and improve everything on the cloud, and your site benefits instantly as soon as we deploy the changes and improvements to our engine.
  • Eliminate WordPress and PHP Errors: Our cloud-based link checker doesn’t run on your site’s resources, so you won’t experience site resource errors using the plugin.
  • Crawl Everything: Cloud Link Checker follows the same logic as search engine crawlers, so no URLs are missed across standard pages and posts, menus, category pages, and more. Even better, use scheduled reports to discover broken links before the search engines see them.
Cloud Link Checker Splash screen
WPMU DEV’s Cloud-based Link Checker provides better performance and faster speed.

Cloud BLC scans your site from top to bottom, monitoring external and internal links in your posts, pages, comments, blogroll – even custom fields.

It detects links that don’t work, as well as missing images and redirects. It will then notify you via the Broken Link Checker section of the Hub, or you can view a summary of the results in the plugin dashboard of your WordPress site.

“Love this new tool, especially since it runs off-site and doesn’t hog server resources.” Levi, WPMU DEV Member

Set Up

Whether you’re an existing Broken Link Checker plugin user or new to the tool, setup is a breeze.

You can set up the plugin from the WordPress dashboard or from WPMU DEV’s Hub.

Let’s look at both methods.

From the WordPress Dashboard

Link Checker Menu - WordPress Dashboard
Cloud Link Checker activation in the WordPress dashboard.
  1. Install and activate the plugin.
  2. Go to the Link Checker menu and select Cloud.
  3. If you’re logged into WPMU DEV, click ‘Enable Cloud Engine’ (otherwise, the button will say ‘Connect to WPMU DEV’), and you’ll be taken through The Free Hub onboarding process, as well as the broken links checking tool component. This will lead you directly to the BLC service tab in The Free Hub.
  4. From here, run a new scan. You’ll get a notification once the scan completes, so feel free to look around The Free Hub while you wait.
  5. Once you receive notification that the scan is complete, you can view the results in the Broken Link Checker section of The Free Hub.

From The Hub

The Hub - Broken Link Checker Activation screen.
Broken Link Checker activation in The Hub.
  1. After logging into The Hub, you’ll see Broken Link Checker listed as a new service in the menu (top & sidebar).
  2. Activate this to install the plugin on the site.
  3. Run a scan to see your results.

However you choose to install the Cloud Link Checker, the WordPress dashboard will display the Summary Report, while The Hub will have the Full Report – including the list of broken and dead links.

BLC - Scan Results
Scan results in the WordPress dashboard.

Click View Full Report to see the full scan report in The Hub.

BLC scan report
The Hub displays a detailed list of your broken URLs after scanning.

Aside from locating your broken links, missing images, and redirects, the plugin has additional functions that let you schedule scans, send reports by email, search (with built-in filters), and export your lists for download.

Our members spoke…and we listened!

One of the most requested features for this tool was the ability to edit and unlink broken links.

We’re thrilled to announce that in addition to ignoring and reporting links as not broken, you can now also easily edit and unlink broken links from The Hub.

Simply click on the vertical ellipsis (3 dots) to the right of any link listed in the Hub’s scan report and select one of the available options.

Edit and Unlink
Edit, Unlink, Ignore, or Report links as Not Broken.

Select Edit Link to point the link to a new URL, Unlink to remove the link and change it to plain text, or select one of the other options to ignore the link or to report false positives (note: we use Not Broken reports to improve BLC’s engine).

Note: New scans are temporarily disabled while the system is performing editing or unlinking operations. You can run a new scan after these processes have completed.

Also, to keep reports manageable, if the scan detects multiple instances of the same broken link URL, the report only displays the first 10 instances and notifies you how many other instances there are.

Broken Link Checker - Scan Report
Scan reports are kept manageable by displaying only the first 10 URLs for the same broken link.

You can choose to edit or unlink only the first 10 visible links, or perform the operation on all instances of that same link.

Edit Broken Link pop-up screen
Edit (or unlink) only the first 10 links or all links.

Note that the tool does not scan hardcoded links written in PHP files (e.g., template files, shortcodes, etc.).

Run Manual Scans

You can run a manual scan any time, in both the WordPress dashboard and The Hub. Just hit the blue Run Scan button. This can be helpful if you’ve done some cleanup and want to refresh your view of the list.

Schedule Scans & Send Reports by Email

Scheduling scans is done from the Broken Link Checker plugin section in the WordPress dashboard.

BLC - Schedule scan
Schedule new scans for broken link checks in your WordPress dashboard.

At least one recipient must be added when scheduling reports so that the report can be sent via email.

scan configuration dropdown (WP dash)
Click on the cog icon to see the menu options for scanning.
  1. From the Schedule Scan section, click Configure.
    Check that you are on the Schedule Date tab from the top menu.
  2. Choose the frequency: Daily, Weekly, or Monthly.
  3. Select the desired time, day, or date from the dropdown options, then click Save.
Schedule broken link checker report date time (WordPress dashboard).
The plugin provides many options for scheduling scans.

Now we will add recipients (at least one), so the report has a destination to be sent to:

  1. Click on the Add Recipients tab.
  2. You can either Add Users, from the list of those you’ve already added to the site, or Add By Email, for anyone at all. Remember to Save Changes.
Broken Link Checker- Schedule report: Add Recipients (WordPress dashboard)
Adding recipients to get scan reports via email is fast and easy.

You can deactivate the scheduled scan or change the sending schedule, as well as who it goes to, at any time.

To easily locate your URLs, search results can be filtered from within The Hub.

From the summary screen, you can use the dropdowns to filter by Status or Domain.

search tools (hub)
Use the built-in filters to locate items more easily in your Broken URL list.

Export Lists

You can export your broken URL lists anytime in CSV format.

To do this, simply click the Export as CSV button from the summary screen in The Hub.

And … that’s it! You’re now a BLC pro.

BLC scan results showing no broken links.
Keep your site’s links healthy with the best free broken link checker tool for WordPress.

“I love this! Offsetting the resources to the cloud will help so many sites!” PTaubman, WPMU DEV Member

“But I’m happy with Local BLC and I don’t run multiple sites…”

If you want to keep using the older plugin, you don’t have to switch to Cloud Link Checker. Local BLC will keep working just fine and you can easily switch to the cloud version at any time inside your WordPress admin.

Broken Link Checker Menu
You can switch between cloud and local link checker inside the WordPress admin.

Just keep in mind that you can only activate one engine at a time, so if the Cloud engine is running, Local Link Checker will be inactive and vice-versa.

Local Link Checker - inactive
Switch link checker engines inside your WordPress dashboard.

Note: if you run a multisite installation, BLC cloud version will only be available on the main site when network-activated. Due to the complexity of scanning multisite, subsites will continue to use the BLC Local version.

Cloud Link Checker – Perfect For Agencies

Being able to manage all of your sites from one place (The Hub) and send clients white labeled reports makes Cloud Link Checker the perfect solution for agencies, freelancers, and anyone running multiple WordPress sites.

Whitelabel report - Broken links
Clients will love you even more when they see you’ve taken care of their broken links.

You can also use the tool with a customized report as a way to generate new clients for your agency and upsell WordPress maintenance services to existing clients.

Whitelabel Report - Broken Links
Use BLC with whitelabel reports to generate new clients and upsell maintenance services.

Compare our plugin with other broken link checking tools and you will quickly see why WPMU DEV’s cloud-based link checker is a no-brainer.

For example, here’s one of our competitors’ offering:

  • Free version limitations:
    • Only one website allowed.
    • Only 200 links checked per month.
    • Only internal links are checked.
  • Links are checked once every 3 days.
  • Cost: $30/month (credit card required to sign up).

Whereas, with WPMU DEV’s Cloud Link Checker…

  • No limitations:
    • Unlimited number of websites.
    • Unlimited number of links.
    • Internal and External links are checked (Local and Cloud versions).
  • Set your own schedule (Local and Cloud versions).
  • Manually check all your sites anytime.
  • Cost: Free (priority support included for members only).
  • No credit card required to sign up.

To get the full picture of what our broken links checker can do, see the plugin documentation.

Now that we’ve shared the good news with you about a powerful WordPress troubleshooting tool every web developer (and user) should have in their site management toolbox, let’s take a closer look at the harm broken links can cause if left unchecked and why you need a tool like Broken Link Checker.

High-quality, relevant, and authoritative links are crucial to a website’s SEO and reputation. Broken links can have several negative impacts on search engine optimization, including:

1. Crawling and Indexing Issues: Search engine crawlers follow links to discover and index web pages. In fact, Google cites good working links as a best practice. When a crawler encounters a broken link, it cannot access the linked page and may struggle to navigate through your website effectively. This can prevent certain pages from being indexed, making them invisible to search engines and reducing their chances of appearing in search results.

2. Increased Bounce Rates: Bounce rate measures how often visitors leave a site after viewing only the page they landed on. Visitors who stumble upon broken links may abandon a site altogether. When visitors repeatedly leave a web page almost immediately after landing on it, this leads to a high bounce rate, which sends a “low-quality” signal to search engines about the site.

3. Decreased Search Engine Rankings: Search engines aim to deliver the best user experience by providing relevant and high-quality search results. Websites with broken links may be considered less reliable and valuable by search engines, leading to lower rankings in search results. This can result in reduced organic traffic and visibility for your website.

4. Impact on Internal Link Structure: Broken links disrupt the internal link structure of your website. Internal linking helps search engines understand the relationships between different pages and establishes a hierarchy of importance. When broken links exist within this structure, it can confuse search engines and weaken the overall SEO structure of your website.

5. Lost Backlink Opportunities: Backlinks are an important factor in SEO, as they indicate the authority and relevance of your website. If other websites link to broken pages on your site, it can negatively impact your backlink profile. Broken links may deter other webmasters from linking to your site, reducing your chances of acquiring valuable backlinks.

To mitigate the negative impact of broken links on SEO, it is crucial to regularly monitor and fix them. Conducting regular website audits, using tools to identify broken links like BLC, and implementing redirects or updating links can help improve user experience, maintain search engine rankings, and enhance the overall SEO performance of your website.

In addition to impacting your site’s SEO, broken links can also cause serious damage to your business and its reputation. This includes:

1. Poor User Experience: Studies show that 89% of consumers will shop with a competitor after having a poor user experience on a site. Broken links create a negative user experience by leading visitors to dead-end pages or error messages. Users expect links to provide relevant information or resources, and encountering broken links can be frustrating. This can decrease user engagement, increase bounce rates, and ultimately harm your website’s reputation.

2. Negative Impact on Revenue: Broken links can sometimes cause roadblocks in your sales conversion process. Investing money and time into marketing efforts to get potential customers to your site then losing sales because they cannot reach conversion pages means wasted time and lost revenue.

3. Security Vulnerabilities: Broken links can also lead to malicious attacks on your site, phishing scams, and broken link hijacking (see below).

Broken Link Hijacking (BLH) refers to the practice of exploiting expired, unlinked, or inactive external links found within a webpage.

It involves malicious actors taking advantage of resources or third-party services that are no longer available or valid, such as due to expired domains. These attackers can seize control of these links to carry out various harmful activities, including defacement through acquiring expired domains, impersonation, or even cross-site scripting.

Attack Scenario and Security Risks

Let’s imagine a scenario where a business shuts down or forgets to create a social media page but still has the link to that page on its website. In this case, an attacker can simply create an account using the same name and then proceed to post offensive content or launch phishing attacks while pretending to be the business.

Illustrative Scenario

To illustrate this further, let’s consider a website called thewebsite.com that mentions a LinkedIn page URL but hasn’t actually created the page yet. As a result, when users try to visit the LinkedIn page using the URL (e.g., https://www.linkedin.com/company/the-website/), they encounter a “404 page not found” error.

Exploiting this situation, an attacker creates a fake LinkedIn page and customizes the URL to resemble “the-website.” Consequently, when a regular user accesses the company’s LinkedIn page through the URL, they unknowingly get redirected to the attacker’s controlled LinkedIn page.

There are several factors that can lead to broken links. Some of the most common causes include:

1. Typo: Mistakes made when writing the link can result in broken links. This could be a simple error in typing or copy-pasting the URL incorrectly.

2. Deleted Pages: When a page is deleted from a website, any links pointing to that page will become broken. This can happen when content is removed or when a website undergoes restructuring.

3. Renamed Pages: If URLs are changed or pages are renamed without implementing proper redirects, the old links pointing to those pages will no longer work and lead to broken links.

4. Domain Name Change: When a website changes its domain name, any links pointing to the old domain will become broken unless appropriate redirects are in place.

To fix broken links, it is important to follow these best practices:

1. Check Links with a Broken Link Checker (BLC): Use a reliable tool to identify broken links on your website. This will provide you with a list of broken links that need to be addressed.

2. Prioritize High Authority Pages: Start by addressing broken links on pages with high authority or those that receive significant traffic. Fixing these links will have a greater impact on your website’s overall performance.

3. Redirect to Relevant URLs: If a page has been deleted or its URL has changed, set up proper redirects (such as 301 redirects) to automatically send visitors to the relevant new URL. This ensures a seamless user experience and avoids 404 error pages.
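
How you implement a redirect depends on your stack. WordPress sites typically handle this with a redirection plugin or server-level rules, but as a rough, hypothetical illustration, here is what a 301 redirect could look like in a small Node.js/Express app (the routes below are invented for the example):

const express = require("express");
const app = express();

// Permanently redirect a removed page to its replacement so visitors
// and search engine crawlers never land on a 404.
app.get("/old-pricing", (req, res) => {
  res.redirect(301, "/pricing");
});

app.listen(3000);

The 301 status tells search engines the move is permanent, so they can transfer the old URL’s ranking signals to the new one.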

To prevent and resolve 404 pages (page not found errors), consider the following steps:

1. Preserve and Update Content: Instead of deleting pages outright, consider updating or refreshing the content. This helps avoid unnecessary 404 pages caused by removing content that other pages have linked to.

2. Implement 301 or 302 Redirects: If a page’s content still exists but its location or URL structure has changed, use 301 or 302 redirects to redirect visitors to the new page. This ensures they can still access the desired content without encountering a 404 error.

3. Reach Out to Webmasters for Updated Links: If a 404 error occurs due to an external website incorrectly linking to your content, you can try contacting the website’s author or web administrator. Requesting an update to the erroneous link can help resolve the issue, or alternatively, suggest changing the link altogether.

Fixing Broken Links: Manual vs Automated Methods

Fixing broken links has long been considered an essential practice among SEO practitioners. Broken links should be fixed quickly.

Google understands that broken links are a natural occurrence. However, SEOs know that taking the time to correct these issues can significantly improve a site’s performance in search engines.

For these reasons and more, it’s clearly important to keep tabs on all of your site links. A small site with minimal content can easily handle checking for broken links manually. However, the more content your site has, the more difficult it becomes to conduct manual scans of your links.

Fixing broken links manually on a website and using automated methods each have their own benefits:

Benefits of Using Manual Methods to Fix Broken Links

1. Accuracy: When fixing broken links manually, you have full control and can ensure that each link is checked and corrected accurately. This allows for a more precise and tailored approach to resolving broken links.

2. Customization: Manual fixing allows you to review each broken link individually and determine the best course of action. You can update the URL, remove the link, or find alternative resources as needed.

3. Quality control: By manually fixing broken links, you can ensure that the replacement URLs are relevant, trustworthy, and provide value to your users. It allows for a more thorough evaluation of the content being linked to.

4. User experience: Manually fixing broken links allows you to consider the user experience in the process. You can choose appropriate anchor text, update navigation menus, and ensure a seamless browsing experience for your visitors.

5. Content review: While fixing broken links manually, you can review the content surrounding the broken links. This presents an opportunity to update outdated information, improve the overall quality of the content, and enhance the SEO performance of the page.

Benefits of Using Automated Methods to Fix Broken Links

1. Time-saving: Automated tools can scan your website and identify broken links quickly, saving you time and effort compared to manually checking each link individually.

2. Efficiency: With automated methods, you can fix broken links in bulk rather than addressing them one by one. This can be especially useful for large websites with a high volume of broken links.

3. Scalability: Automated tools can handle the detection and fixing of broken links on websites of any size. They can efficiently process a large number of links, ensuring comprehensive coverage.

4. Regular monitoring: Automated methods allow for regular and scheduled scans of your website, ensuring that new broken links are promptly identified and addressed.

5. Consistency: Using automated tools ensures a consistent approach to fixing broken links across your entire website. This helps maintain a unified user experience and prevents oversight of any broken links.

The choice between using manual and automated methods depends on your specific needs, resources, and preferences. The good news is, all of the risks associated with bad links are easily avoided if you make sure they are kept in proper working order.

Even better, using a quality automated dead link checker tool like Broken Link Checker removes the tedious and time-consuming task of manually tracking and managing your broken links.

Take Link Maintenance to the Next Level with WPMU DEV’s BLC

Over 700,000 WordPress users depend on Broken Link Checker to keep their sites free of errors and performance issues caused by outdated and non-working URLs.

Our new cloud-based plugin version offers even more incredible value — enhanced speeds, no PHP/DB errors, ability to schedule scans and send email reports (including white labeled), plus the ease of managing unlimited sites from one central Hub – all while still (and always) remaining free.

Don’t let your site’s SEO and user experience take an unnecessary hit. Especially when a practical solution is directly within your reach.

Connect, scan, schedule, and fix broken links quickly and easily and keep your sites running optimally with the new Cloud-based Link Checker. Get it for free or as a WPMU DEV member.

Note: A WPMU DEV membership includes full access to all Hub features, hosting, Pro plugins, and unmatched 24/7 expert support.

Building And Dockerizing A Node.js App With Stateless Architecture With Help From Kinsta

This article is sponsored by Kinsta.

In this article, we’ll take a swing at creating a stateless Node.js app and dockerizing it, making our development environment clean and efficient. Along the way, we’ll explore the benefits of hosting containers on platforms like Kinsta, which offers a managed hosting environment that supports Docker containers as well as application and database hosting, enabling users to deploy and scale their applications with more flexibility and ease.

Creating A Node.js App
In case you’re newer to code, Node.js is a platform built on Chrome’s JavaScript engine that allows developers to create server-side applications using JavaScript. It is popular for its lightweight nature, efficient performance, and asynchronous capabilities.

Stateless apps do not store any information about the user’s session, providing a clean and efficient way to manage your applications. Let’s explore how to create a Node.js app in this manner.

Step 1: Initialize The Node.js Project

First, create a new directory and navigate to it:

mkdir smashing-app && cd smashing-app

Next, initialize a new Node.js project:

npm init -y

Step 2: Install Express

Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. Install Express with the following command:

npm install express

Step 3: Create Your Stateless App

Create a new file named “app.js” and add the following code:

const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/", (req, res) => {
  res.send("Welcome to our smashing stateless Node.js app!");
});
app.listen(port, () => {
  console.log(`Smashing app listening at http://localhost:${port}`);
});

Let’s explore this a bit. Here’s what each line does:

  • const express = require("express");
    This line imports the Express.js framework into the code, making it available to use.
  • const app = express();
    This line creates an instance of the Express.js framework called app. This app instance is where we define our server routes and configurations.
  • const port = process.env.PORT || 3000;
    This line sets the port number for the server. It looks for a port number set in an environment variable called PORT. If that variable is not set, it defaults to port 3000.
  • app.get("/", (req, res) => {}
    This line defines a route for the server when a GET request is made to the root URL (“/”).
  • res.send("Welcome to our smashing stateless Node.js app!");
    This line sends the string “Welcome to our smashing stateless Node.js app!” as a response to the GET request made to the root URL.
  • app.listen(port, () => {})
    This line starts the server and listens on the port number specified earlier.

Now, run the app with:

node app.js

Your Node.js app is now running at http://localhost:3000.
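
If you want to double-check that the server is answering before moving on, one quick option (assuming Node 18 or later, which ships a global fetch) is a tiny throwaway script like this:

// check.js: a quick sanity check that the app responds on port 3000
fetch("http://localhost:3000")
  .then((res) => res.text())
  .then((body) => console.log("Server replied:", body))
  .catch((err) => console.error("Server not reachable:", err.message));

Run it with node check.js in a second terminal while app.js is still running.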

Stateless Architecture

Stateless architecture means that the server doesn’t store any information about the user’s session, resulting in several benefits (a short illustrative sketch follows the list below):

  • Scalability
    Stateless applications can easily scale horizontally by adding more instances without worrying about session data.
  • Simplicity
    Without session data to manage, the application logic becomes simpler and easier to maintain.
  • Fault tolerance
    Stateless applications can recover quickly from failures because there’s no session state to be lost or recovered.
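
To make the difference concrete, here is a small, purely illustrative contrast: the first route keeps per-user data in the server’s memory (stateful, and it breaks as soon as you run more than one instance), while the second derives everything it needs from the incoming request (stateless). The route names are invented for the example.

const express = require("express");
const app = express();

// Stateful (avoid): this counter lives in one instance's memory, so a
// restart or a second instance behind a load balancer loses or splits it.
const visitsByUser = {};
app.get("/stateful/:user", (req, res) => {
  visitsByUser[req.params.user] = (visitsByUser[req.params.user] || 0) + 1;
  res.send(`Visit number ${visitsByUser[req.params.user]}`);
});

// Stateless (preferred): everything needed to answer arrives with the
// request, so any instance can handle it interchangeably.
app.get("/stateless/greet", (req, res) => {
  res.send(`Hello, ${req.query.name || "stranger"}!`);
});

app.listen(process.env.PORT || 3000);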

Okay, we’ve got our Node.js server running locally, but how can we package it up so that anyone can run it, even people without Node.js installed, and have it run on any platform? That’s where Docker comes in.

Dockerizing The App

Docker is a tool that helps developers build, ship, and run applications in a containerized environment. It simplifies the process of deploying applications across different platforms and environments.

Step 1: Install Docker

First, make sure you have Docker installed on your machine. You can download it here.

Step 2: Create A Dockerfile

Create a new file named Dockerfile in your project directory and add the following code:

FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=3000
CMD [ "node", "app.js" ]

Once again, let’s take a look at what this is doing in a little more detail:

  • FROM node:18-alpine
    This line specifies the base image for this Docker image. In this case, it is the official Node.js Docker image based on the Alpine Linux distribution. It provides Node.js inside the Docker container, which is like a “virtual machine” but lighter and more efficient.
  • WORKDIR /usr/src/app
    This line sets the working directory inside the Docker container to /usr/src/app.
  • COPY package*.json ./
    This line copies package.json (and package-lock.json, if present) into the working directory so the dependencies can be installed.
  • RUN npm install
    This line installs the dependencies specified in the package.json file.
  • COPY . .
    This line copies the rest of the files from the local directory to the working directory in the Docker container.
  • ENV PORT=3000
    Using this directive, we make the app more configurable by using the PORT environment variable. This approach provides flexibility and allows hosting providers like Kinsta to connect the application to their infrastructure seamlessly.
  • CMD [ "node", "app.js" ]
    This line specifies the command to run when the Docker container starts. In this case, it runs the node command with app.js as the argument, which will start the Node.js application.

So, this Dockerfile builds a Docker image that sets up a working directory, installs dependencies, copies all the files into the container, sets a default port of 3000 via the PORT environment variable, and runs the Node.js application with the node command.

Step 3: Build And Run The Docker Container

Let’s now build this and run it locally to make sure everything works fine.

docker build -t smashing-app .

When this succeeds, we will run the container:

docker run -p 3000:3000 smashing-app

Let’s break this down because that -p 3000:3000 thing might look confusing. Here’s what’s happening:

  1. docker run is a command used to run a Docker container.
  2. -p 3000:3000 is an option that maps port 3000 in the Docker container to port 3000 on the host machine. This means that the container’s port 3000 will be accessible from the host machine at port 3000. The first port number is the host machine’s port number (ours), and the second port number is the container’s port number.
  3. We can have port 1234 on our machine mapped to port 3000 on the container, and then localhost:1234 will point to container:3000 and we'll still have access to the app.
  4. smashing-app is the name of the Docker image that the container is based on, the one we just built.

Your Dockerized Node.js app should now be running at http://localhost:3000.

When running the Docker container, we can additionally pass a custom PORT value as an environment variable:

docker run -p 8080:5713 -d -e PORT=5713 smashing-app

This command maps the container's port 5713 to the host's port 8080 and sets the PORT environment variable to 5713 inside the container.

Using the PORT environment variable in the Dockerfile allows for greater flexibility and adaptability when deploying the Node.js app to various hosting providers, including Kinsta.

More Smashing Advantages Of Dockerizing A Node.js App

Dockerizing a Node.js app brings several advantages to developers and the overall application lifecycle. Here are some additional key benefits with code examples:

Simplified Dependency Management

Docker allows you to encapsulate all the dependencies within the container itself, making it easier to manage and share among team members. For example, let's say you have a package.json file with a specific version of a package:

{
  "dependencies": {
    "lodash": "4.17.21"
  }
}

Because the Dockerfile copies package.json into the image and runs npm install, this specific version of lodash is automatically installed and bundled within your container, ensuring consistent behavior across environments.

Easy App Versioning

Docker allows you to tag and version your application images, making it simple to roll back to previous versions or deploy different versions in parallel. For example, if you want to build a new version of your app, you can tag it using the following command:

docker build -t smashing-app:v2 .

You can then run multiple versions of your app simultaneously:

docker run -p 3000:3000 -d smashing-app:v1

docker run -p 3001:3000 -d smashing-app:v2

Environment Variables

Docker makes it easy to manage environment variables, which can be passed to your Node.js app to modify its behavior based on the environment (development, staging, production). For example, in your app.js file:

const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
const env = process.env.NODE_ENV || 'development';
app.get('/', (req, res) => {
  res.send(`Welcome to our smashing stateless Node.js app running in ${env} mode!`);
});
app.listen(port, () => {
  console.log(`Smashing app listening at http://localhost:${port}`);
});

In your Dockerfile, you can set the NODE_ENV variable:

FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_ENV=production
CMD [ "node", "app.js" ]

Or you can pass it when running the container:

docker run -p 3000:3000 -d -e NODE_ENV=production smashing-app

The TL;DR of this is that by Dockerizing Node.js apps, we can eliminate a whole class of “works on my machine” problems while also boosting the reusability, testability, and portability of our applications. 🎉

Hosting Containers With Kinsta

Now that we have our stateless Node.js app Dockerized, you might be wondering where to host it. Kinsta is widely known for its application and database hosting. Let’s explore how we’d do this with Kinsta in a step-by-step manner.

  1. Log in or sign up to your Kinsta account.
  2. From there, you should be in your dashboard.
  3. Using the sidebar, navigate to Applications.
  4. From here, you should be able to Add a Service of type application.
  5. Once you add an application, you’ll be invited to connect your GitHub account to Kinsta so that Kinsta can automatically deploy your application when updates are pushed to it.
  6. You can now choose the repo containing the code you’d like to deploy, along with setting some basic details like the application’s name and environment variables.
  7. Next, we specify the build environment of our application. It is here we specify the location of the Dockerfile in our repo that we just created.
  8. Finally, we allocate compute resources for our container, enter our payment information, and we’re ready to go!

Kinsta will now build and deploy our application, and give us a public, secure link from where it is accessible. Our application is now published to the web!

Conclusion

In this tutorial, we’ve built a Node.js app and Dockerized it, making it easy to deploy across various environments. We’ve also explored the benefits of stateless architecture and touched upon some great choices for hosting containers, like Kinsta.

How To Protect Your App With A Threat Model Based On JSONDiff

Security changes constantly. There’s a never-ending barrage of new threats and things to worry about, and you can’t keep up with it all. It feels like every new feature creates expanding opportunities for hackers and bad guys.

Threat model documents give you a framework to think about the security of your application and make threats manageable. Building a threat model shows you where to look for threats, what to do about them, and how to prevent them in the future. It provides a tool to stay safe so you can focus on delivering a killer application, knowing that your security is taken care of.

This article will show you how to create a threat model document. We’ll review JSONDiff.com and build a threat model for it, and we’ll show how small architectural changes can have a gigantic impact on the security of your application.

Who Do You Trust?

Every time you use a computer, you trust many people. When you make an application, you’re asking other people to trust you, but you’re also asking them to trust everything you depend on.

Your threat model makes it clear who you’re trusting, what you’re trusting them with, and why you should trust them.

What Is A Threat Model?

A threat model is a document where you write down three things:

  1. The architecture of your application,
  2. The potential threats to your application,
  3. The steps you’re taking to mitigate those threats.

It’s really that simple. You don’t need complex tools or a degree in security engineering. All you need is an understanding of your application and a framework for where to look for threats.

This article will show how to build your own threat model using JSONDiff as a sample. You can also take a look at the complete threat model for JSONDiff to see the finished document.

Threat Models Start With Architecture

All threat models start with a deep understanding of your architecture. You need to understand the full stack of your application and everything it depends on. Documenting your architecture is always a good idea; you can start anytime. You’re architecting from the moment you start picking the tools you’ll use.

Here are some basic questions to answer for your architecture:

  • Where does my application run?
  • Where is my application hosted?
  • What dependencies does my application have?
  • Who has access to my application?
  • Where does my application store data?
  • Where does my application send data?
  • How does my application manage users and credentials?

Give a brief overview of your application and then document how it works. Start by drawing a picture of your application. Keep it simple. Show the major pieces that make your application run and how they interact.

Let’s start by looking at the overall architecture of JSONDiff.

JSONDiff is a simple application that runs in a browser. The source code is stored on GitHub.com, and it’s open source. It can run in two modes:

  1. The public version is at JSONDiff.com.
  2. A private version users can run in a Docker container.

We’ll draw the architecture in relation to what runs in the client and what runs on the server. For this drawing, we won’t worry about where the server is running and will just focus on the public version.

Drawing your architecture can be one of the trickiest steps because you’re starting with a blank page and have to choose a representation that makes sense for your application. Sometimes you’ll want to talk about larger pieces; other times, you’ll want to focus on smaller chunks and user actions. Ask yourself what someone would need to know to understand your security, and write that.

JSONDiff is a single-page web application using jQuery. In this case, it makes sense to focus on the pieces that run on the server, the pieces that run in the browser, and how they work.

The first step to any architecture is a brief description of what the application is. You need to set the stage and let anyone reading the architecture know some basic information:

  • What does the application do?
  • Who’s using it?
  • Why are they using it?

JSONDiff is a browser-based application that compares JSON data. It takes two JSON documents, compares them semantically, and shows the differences. JSONDiff is free for anyone and anywhere. It’s used by developers to find differences in their JSON documents that are difficult to find with a standard text-editor diff tool or in GitHub.

The architecture is simple: Nginx hosts the site, and most of the code is in the jdd.js file. But even a diagram this simple brings up many good questions:

  • How does JSONDiff load JSON data?
  • Does it ever send the data it loads anywhere?
  • Does it store the data?
  • Where do the ads come from?

Write down all of the questions your architecture diagram brings up, and answer them in your threat model. Having those questions written down gives you a place to start understanding the threats.

Let’s focus on the first question and show how to dig into it with a security mindset.

There are two ways to load the JSON data you want to compare. You can load it in the browser by copying and pasting it or by choosing a file. That interaction is very well understood, and there isn’t much of a threat there. You can also load the JSON data by specifying a URL, which opens a big can of worms.

Specifying a URL to load the data is a very useful feature. It makes comparing large documents easier, and you can send someone else a URL with the JSON documents already loaded. It also brings up a lot of issues.

The same-origin policy prevents JavaScript applications running in browsers from loading random URLs. There are very good reasons that this policy exists. JSONDiff is subverting the policy, and that should make your security spidey-sense tingle.

JSONDiff uses a proxy to enable this feature. The proxy.php file is a simple proxy that will load JSON data from anywhere.

Loading random data through a server-side proxy sounds like a recipe for a server-side request forgery (SSRF) attack. That’s a risk.

All applications have risks; we manage those risks with mitigations. In this case, the proxy risk has two mitigations:

  1. The proxy can only load data that are already publicly available on the Internet.
  2. The file that’s loaded by the proxy is never executed.

Our threat model will include this risk and show how we mitigated it. In fact, each threat needs to show how much risk there is and what we did to mitigate each risk.

Let’s take a look at where threats appear in your application.

Threats

There are many categories of threats across the development and deployment lifecycle. It’s helpful to split them into categories and document the potential threats to our application as we plan, design, implement, deploy, and test the software or service.

For every threat we identify, we need to describe two pieces:

  • The threat
    What is the specific threat we’re worried about here? How could it be exploited in our application? How serious could that exploit be?
  • Mitigation
    How are we going to mitigate that threat?

Code Threats

Many threats start with the code you write. Here are a few categories of coding issues to think about:

Weak Cryptography

“Does your application use SSL or TLS for secure network connections?”

If you are, make sure that you’re using the latest recommended versions.

“Does your application encrypt data or passwords?”

Make sure you’re using the latest hashing algorithms and not the older ones like MD5 or SHA-1.

“Did you implement your own encryption algorithm?”

Don’t. Just don’t. There’s almost never a good reason to implement your own encryption.

SQL Injection

SQL injection attacks happen when a user enters values in an application that are sent directly to a database without being sanitized (like Bobby Tables). This can inject malicious code that alters the original SQL query to retrieve, change, or delete data inside the SQL database.

Avoid injection attacks by not trusting any inputs coming from users. Your threat model should address any place you’re taking user input and saving it anywhere.
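
As a concrete illustration, if a Node.js app talked to PostgreSQL through the pg library, a parameterized query keeps user input out of the SQL text entirely; the table and column names here are made up for the example:

const { Pool } = require("pg"); // assumes the pg package is installed
const pool = new Pool();        // connection details come from environment variables

// BAD: concatenating input lets an attacker rewrite the query.
// pool.query("SELECT * FROM users WHERE name = '" + userInput + "'");

// GOOD: the value is sent separately from the SQL text, so the database
// treats it as data and never as part of the query.
async function findUser(userInput) {
  const result = await pool.query("SELECT * FROM users WHERE name = $1", [userInput]);
  return result.rows;
}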

JSONDiff never saves any of the JSON data it compares. If we added that feature, we’d be open to many types of injection attacks. It doesn’t matter if we saved the JSON to a SQL database like PostgreSQL, a NoSQL database like MongoDB, or a file system. We’d mitigate that threat by making sure to sanitize our inputs and never trusting data from users.

Cross-Site Scripting (XSS)

Malicious scripts can be injected into web applications, making browsers run those scripts in a trusted context; that allows them to steal user tokens, passwords, cookies, and session data. This injection attack happens when a user saves or references code from somewhere else and gets that code to run in the application security context.

JSONDiff doesn’t let users save anything, but you can build URLs that preload the documents to compare.

This is a clear threat to address in the threat model. If someone referenced malicious code in a URL like this and sent it to someone else, the recipient could run it without realizing the risk. JSONDiff mitigates this threat by using a custom parser for the inputs and making sure that none of them get executed. We can test that with ‘evil’ JSON and JavaScript files.

Consider all of the inputs to your application and how you’re making sure they can’t cause problems.
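
A general-purpose mitigation is to escape anything user-supplied before it is rendered as HTML. Most frameworks and templating engines do this for you, but a hand-rolled version looks roughly like this:

// Replace the characters HTML treats as markup with their entities so that
// user input is displayed as text rather than parsed or executed.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// escapeHtml('<script>alert(1)</script>') returns '&lt;script&gt;alert(1)&lt;/script&gt;'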

Cross-Site Request Forgery (CSRF)

CSRF attacks wait for you to log in and then use your credentials to steal data and make changes. Session-based unique CSRF tokens can be used to prevent such an attack. Examine everywhere your application uses sessions. What are you doing to make sure sessions can’t be shared or stolen?

JSONDiff doesn’t have sessions, so there’s nothing to steal. Adding the ability to manage sessions and login would create a large set of new threats. The threat model would need to address protecting the session token, making sure that sessions can’t be reused, and ensuring that sessions can’t be stolen, among other things.
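
If sessions and state-changing requests were added later, the usual mitigation is a per-session token that every form or request must echo back. A bare-bones sketch, leaving out the session middleware a real app would use, might look like this:

const crypto = require("crypto");

// Issue a random token and remember it on the user's session object;
// embed the returned value in forms as a hidden field.
function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString("hex");
  return session.csrfToken;
}

// On every state-changing request, require the submitted token to match.
function verifyCsrfToken(session, submittedToken) {
  if (!session.csrfToken || !submittedToken) return false;
  const expected = Buffer.from(session.csrfToken);
  const received = Buffer.from(submittedToken);
  return expected.length === received.length &&
    crypto.timingSafeEqual(expected, received); // constant-time comparison
}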

Logging Sensitive Information

Your logs aren’t secure, so don’t put any sensitive information there. Logging passwords or other sensitive customer information is the most common security issue in building an application: developers log some activity or error, and that contains the token or password information or personal information about the user.

What are you doing to make sure that developers don’t log sensitive information? Make sure your code review includes looking at logging output. There are also password scanners you can run over your log files to find likely passwords.
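
One simple safeguard is to redact well-known sensitive fields before anything reaches the logger. The field list below is only an example and should match whatever your application actually handles:

// Fields that should never appear in log output (example list only).
const SENSITIVE_KEYS = ["password", "token", "authorization", "creditCard"];

// Return a shallow copy of the object with sensitive values masked.
function redactForLogging(data) {
  const copy = {};
  for (const [key, value] of Object.entries(data)) {
    copy[key] = SENSITIVE_KEYS.includes(key) ? "[REDACTED]" : value;
  }
  return copy;
}

console.log("login attempt", redactForLogging({ user: "sam", password: "hunter2" }));
// -> login attempt { user: 'sam', password: '[REDACTED]' }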

Code Review And Separation Of Duties

Trust, but verify. Some people on your team may be malicious, and everyone on your team makes mistakes, so trust your team, but verify their work.

The best way to verify this is to separate the roles within your team. Allowing one person to change code, test it, and push it to production without any oversight presents a risk. Separation of duties splits the stages of your pipeline to production into multiple stages. There are four clear stages in every application that you should separate as much as possible:

  1. Writing the code,
  2. Reviewing the changes,
  3. Testing the functionality,
  4. Deploying the application.

For small projects, these roles may overlap or be part of an automated process. Even when the pipeline is fully automatic, you can still separate the functions. For example, making sure that the owner of a given area didn’t write all the tests for that area ensures that someone else is verifying the functionality. In well-run projects, these roles can switch so everyone gets a turn to write code as well as review it or write tests as well as do deployments.

JSONDiff is an open-source application that makes review much easier. For closed-source applications, you can use the Pull Request mechanism in Git to ensure all code is reviewed for the issues mentioned above. Spend time with your team and teach them what they should look for during code review.

Static code analysis tools also help detect security threats and other issues. These tools include linters and code checkers like JSHint, along with more comprehensive security scanners. These tools look at your source code and find problems based on the specific programming language you’re using. OWASP maintains a list of static analysis tools.

Many security scanners use common vulnerabilities and exposures (CVE) databases to know what issues to look for. Integrating these tools into your build process ensures that all your changes will be scanned.

The code for JSONDiff was scanned by JSHint, and all issues were fixed, or so I thought. It turned out that I scanned the JavaScript, but I missed the server side. My co-author Terry ran the SonarQube lint scanner and found an error in the PHP proxy.

This small fix is a great example of how a second pair of eyes can help you find problems.

Third-Party Threats

Your application has dependencies and probably a lot of them. They may be from other groups or open-source projects. The list of all these dependencies and the versions they use makes up a Software Bill of Materials (SBOM).

When the teams who maintain the projects you depend on find security issues, they report them in a CVE database. Other security professionals report CVEs as well. Third-party scanners look at those databases and make sure you aren’t using dependencies with known security issues.

Static application security testing (SAST) tools like Snyk can also scan third-party threats and report vulnerabilities in the libraries you’re using. Those vulnerabilities are then scored by severity, so you know how seriously to take each threat.

Tools like npm have built-in vulnerability checking for dependencies (for example, the npm audit command). Integrating vulnerability checks into your build process mitigates that threat.

Data Security Threats

Protecting your application means protecting the application data. Always make sure your data is transmitted and stored with confidentiality, integrity, and availability.

Here are some of the risks to data security:

  • Accidental data loss or destruction,
  • Malicious access to confidential data like financial data,
  • Unauthorized access from various partners or employees,
  • Natural disasters or uncontrollable hazards like earthquakes, floods, fire, or war.

To mitigate those risks, we can implement these actions:

  • Protect the data with strong passwords, and define the policy for password expiration.
  • Categorize the data with different classes and usage, and define the different roles that can access different levels of data.
  • Always do an authorization check to make sure only a permitted user with the corresponding role can access that level of data.
  • Deploy various security tools like firewalls and antivirus software.
  • Encrypt your data at rest (when it’s stored somewhere); a minimal sketch follows this list.
  • Encrypt your data in transit (when it’s moving between two points).
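
As a rough sketch of the encryption-at-rest point above, Node’s built-in crypto module can encrypt data before it is written anywhere. This example uses AES-256-GCM; the 32-byte key would come from a secrets manager or environment variable, never from source code:

const crypto = require("crypto");

// Encrypt a string before storing it. Returns everything needed to decrypt later.
function encryptAtRest(plaintext, key) {
  const iv = crypto.randomBytes(12); // unique initialization vector per encryption
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

// Decrypt and verify integrity; throws if the data was tampered with.
function decryptAtRest({ iv, ciphertext, authTag }, key) {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(authTag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}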

JSONDiff doesn’t store any data. Let’s think about the in-transit threat:

  • The threat
    JSONDiff loads data from any URL to compare. How are we protecting that data?
  • Mitigation
    JSONDiff uses SSL encryption when loading data if it’s available and always uses SSL to encrypt the data sent to the browser.

Runtime Threats

After the application is deployed and running, we need to consider the runtime threats.

The best way to find runtime threats is a penetration test from a team of experts. Pen-test teams pretend they’re hackers and attack your application. They attack your external interfaces and look for SQL injection, cross-site scripting, denial-of-service (DoS) attacks, privilege escalation attacks, and many more problems.

If you can afford an external pen-test team, then use one, but you can also do this yourself. Pen-test tools like the OWASP ZAP proxy perform dynamic scanning on your application endpoints and report common threats.

Threats To Stability

Availability attacks try to disrupt your application instead of hacking it. High availability and redundant designs mitigate the threat of these attacks.

There are several things we can consider to build up plans for those threats:

  • High-availability infrastructure, including the network and server. If we deploy the application via the cloud, we can consider using multiple regions or zones and set up a load balancer.
  • Redundancy for the system and data. This will improve stability and availability, but the cost will be high. You can balance stability and cost: Only make your most critical components redundant.
  • Monitoring of the system, with alerts set up for components running at or near capacity. Malicious activity could take down your infrastructure, so monitoring the health and availability of your system is critical.
  • Backup and restore plans. If security threats take the system down, how can we quickly bring it back up? We need to build a plan for backing up and restoring.
  • Handling outages of dependent services. We need to build up some fallback plans, design and implement circuit breakers, and keep dependent services from breaking the entire application.

Building A Data Recovery Plan

What can disrupt your application or system? Think about human error, hardware failure, data center power outages, natural disasters, and cybersecurity attacks.

Business continuity and disaster recovery (BCDR) design will be critical to ensure that your organization, users or customers, and employees can do business with minimal disruption.

For an organization like a company, you’ll need to create a business continuity plan. That means first assessing your people, IT infrastructure, and application. Identify people’s roles and responsibilities for your business continuity plan and recovery solutions.

If you’re deploying your application in a cloud-based environment, you need to deploy it across multiple regions or multiple cloud providers. The critical part is the data storage for the system and application: all data should have point-in-time replication, allowing your application or service to be restored quickly from a secondary data center, even one in a different country or continent.

Your BCDR solution should be regularly tested every year (or even more often), and your plan should be frequently reviewed and improved by the people in your organization.

The Worst-Case Scenario

Threat models provide a framework to imagine the worst-case scenario, which helps you think outside the box and come up with novel threats.

So what’s the worst-case scenario for JSONDiff? It probably involves the proxy.php script. We already know to focus on the proxy, and there have been some severe PHP exploits in the past. The proxy.php file is also the only part that runs on the server side. That makes it the weakest link.

If I was able to hack the proxy, I could change the way it works. Maybe I could fool it into returning different content. I can’t run malicious code with that content, but I could trick someone into thinking two JSON documents were the same when they weren’t; I might be able to do something malicious with that.

I could go even further and think about what would happen if someone hacked into the server and changed the contents of the code, but now I’m just back to credential management, which is already covered in the threat model.

This reminds us to keep up to date with PHP versions, so we get the latest security fixes.

Thinking of the worst-case scenario sends you in different directions and improves your threat model.

We’re Just Scratching The Surface

We’re just scratching the surface of all the threats to think about when building a threat model. Mitre has an excellent matrix of threats to think about when building your own threat model. OWASP also maintains a Top 10 list of security risks and a Threat Modeling Cheat Sheet that everyone should be familiar with.

The most important takeaway is that you should think about all the ways people interact with your application and all the ways your application interacts with other systems. Any time you can simplify those interactions, you’re reducing your vulnerability to threats.

For more complex threat models, making a threat diagram is also useful. Tools like draw.io have specific shapes for threat modeling diagrams.

What If I Can’t Mitigate A Threat?

You can’t mitigate every threat. For JSONDiff, a threat I have no control over is Google AdSense, which adds dynamic content to JSONDiff.com. I don’t get to check that content first. I can’t verify every ad that Google might show. I also can’t force Google to go through a security review process for my site. In the end, I just have to trust Google.

In the rare cases when you have a threat you can’t mitigate or minimize, the best you can do is settle for transparency. Be as open and honest about that threat as possible. Document it. Let your users or customers know, so they can make their own choices about whether the risk is worth it.

Build Your Threat Model Early

Threat models help the most when begun early in the process. Start putting your threat model together as soon as you pick technologies. Decisions about how you’ll manage users, where you’ll store data, and where your application runs all have a major impact on the threat model of your application.

Working on the threat model early, when it’s easier to make architectural changes, makes it easier to fend off threats.

Communicating Your Threat Model

The previous section showed you how to start creating your threat model. What should you do with it once you’re done?

There are a few potential audiences for your threat model:

  • Security reviewers
    If you create an application for any security-conscious company, it will want to do a security review. Your threat model will be a requirement for that process. Having a threat model ahead of time will give you a giant head start.
  • Auditors
    Security auditors will always look for a threat model. They want to make sure you’ve thought through the threats to your application.
  • Yourself
    Use your own threat model to manage your threats. Have the team keep it up to date while you’re adding new features. Making sure that team members update the threat model will force them to think of any potential threats they’re adding when they make changes.
  • Everyone
    If your project allows it, then share your threat model with everyone. Show the people who trust your application the potential threats and how you’re handling them. Openness reassures them and helps them appreciate all the work you’ve done to make your application secure.

Keep Improving Your Threat Model

We talked about the most important steps in constructing a threat model, but threats are a constantly moving target. We need to build up a management plan for security incidents, defining how to respond to any threats we learn about from internal or external sources.

Every incident you find should end up in your threat model. Document how you found it, how you fixed it, and what you did to make sure it never happens again. Every application has security issues; what matters is how well you handle them. This is a continuous process of improvement:

  1. Build the architecture to understand what the application is for.
  2. Identify the application threats.
  3. Think about how to mitigate the identified vulnerabilities.
  4. Validate the threat model with other experts in your area.
  5. Review the threat model, and make updates every time you find a new threat.

Threat Models Let Me Sleep At Night

I make threat models for myself. I want to sleep at night instead of staring at the ceiling and wondering what security holes I’ve missed. I want to focus on delighting my users without constantly worrying about security. Threat models give me a framework to do that.

I make threat models for my customers. I want them to know that I take their security seriously, and I’m thinking about keeping them secure. I want to show them what I’m doing and help them understand so they can judge for themselves.

Threat models are flexible and grow or shrink as much as you need. They provide a tool for you to reassure your users about security and allow you to sleep at night. Now you know why you need one for your application, too.

A Pragmatist’s Guide To Lean User Research

We don’t live in an ideal world. Most of us have too much work, too little time, and too small a budget. When it comes to digital projects, it seems like our clients or bosses always prioritize speed over quality.

To make matters worse, we read countless articles telling us how we should do things. These articles emphasize research and testing but do nothing more than leave us disillusioned and add to our imposter syndrome.

In this article, I want to try a different approach. Instead of telling you what the best practice is, I’ll explore some practical approaches to user research that we might be able to fit into our existing projects.

I know what you’re thinking:

“I won’t be allowed to do research. I’ll be told there’s no time.”

So let’s start there.

Lean User Research Saves Time Rather Than Costs It

The notion that all user research must take away from the available time for a project is flawed. Lean user research has the potential to save you time, especially on projects with multiple stakeholders.

Consider how much time is wasted on calls debating the best approach or in Figma endlessly revising the design because the client can’t make up their mind. Then there is the time of the other stakeholders, all of whom have to attend those meetings and provide feedback.

A small amount of user research can solve much of that. It can replace endless opinions, discussions, and revisions with data.

We don’t need to ask for extra time for research. Instead, we can replace some of those meetings with a quick survey or test and cut through all the discussion.

But what about the discovery you are supposed to do upfront? What about the research into your audience before you begin? Isn’t that best practice, and shouldn’t you be doing that?

Well, yes and no.

What About Upfront Research?

Yes, a discovery phase is best practice. It is our chance to challenge our assumptions about the users and their needs. However, we don’t always get to do what we should, and not every discovery phase needs to take a lot of work.

If you’re not careful, discovery phases can be a little wasteful. General research into your audience and needs may not always provide applicable insights. That’s because it’s only once we start work that we learn what questions to ask upfront. Of course, by that point, you have already used time on the discovery phase, and stakeholders may be reluctant to do any more research.

Simply carrying out exercises like customer journey mapping because you’ve read that you should do it upfront is not a good enough reason when time and money are tight.

So, if time is tight, don’t feel like you have to do a full-blown discovery phase just because articles like this tell you to. Instead, start by collating what the organization already knows about the user and their needs. Most organizations know more than you think about their audience. Whether it’s personas produced by marketing, surveys run in the past, or analytics data, it can often just be a matter of gathering together what already exists.

Once you have done that, you will have a clearer picture of what is missing. If there are some significant and obvious gaps in your knowledge, then some upfront research is worthwhile. However, it might be that you have enough to start, leaving more time for user research as issues arise.

Either way,

Your focus should be on answering specific questions, not improving your general understanding of the user.

Focus On Answering Specific Questions

User research can quickly become a time sink if not managed carefully. Adding more and more questions to surveys because “it would be interesting to know” will slow down the surveying process. Equally, you can waste hours simply watching user sessions back. While this context is helpful, it is better to conduct user research only when there is a specific question that needs answering.

For example, if you want to know why people aren’t buying on your website, run a one-question survey that asks why when people go to leave the site. Or, if stakeholders are concerned that users will miss a critical call to action, do a quick 5-second test to reassure them.

Focusing user research on answering these kinds of questions not only ensures a better result but also ensures that user research saves time. Without user research, discussions and debates around these topics can drag out and slow momentum. Additionally, by focusing user research on addressing a single question, it keeps it small and easy to incorporate into an existing project.

Many little bits of user research are easier to insert than a single significant discovery phase.

Of course, this is only true if the types of user research you do are lightweight.

Keep Your User Research Lightweight

When trying to keep our user research lean, tough decisions must be made. One of these is to move away from facilitated research, such as user interviews or usability testing, as they are too time-consuming.

Instead, we should focus on research that can be set up in minutes, provides results quickly, and can be understood at a glance. This leaves us primarily with surveys and unfacilitated testing.

Run Quick And Dirty Surveys

Personally, I love quick surveys to resolve areas of disagreement or uncertainty. If in doubt, I argue, it’s best to ask the user. Just a few examples of surveys I have run recently include:

  • Comparing two labels for a section of a website.
  • Identifying tasks users wanted to complete on a website.
  • Discovering why people weren’t signing up for a free trial.
  • Assessing whether people understood an infographic.

I could go on, but you get the idea. Short, focused surveys can help answer questions quickly.

Surveys are easy to create and depending on how you approach them, you can get results quickly. If time is more of a barrier than money, you can use an app like Pollfish to recruit the exact demographic of people you need for a few dollars per submission. You can usually get results in less than a day with only a few minutes of work to set up the survey.

If money is an obstacle, consider sharing your survey on social media, a mailing list, or your website. You could even share it with random people who aren’t involved in the project if you’re desperate. At least you’d get an outside perspective.

When your questions are about a design approach you’ve produced, you can turn to unfacilitated testing.

Try Some Unfacilitated Tests

Stakeholders often spend days debating and revising design concepts when quick tests could provide the answers they need. Generally, these design discussions revolve around four questions:

  • Did users see it?
  • Did users understand it?
  • Can people use it?
  • Will they like it?

Fortunately, there are quick tests that can help answer each of these questions.

Did Users See It?

If stakeholders are concerned that someone might miss a call to action or critical messaging, you can run a 5-Second Test. This test presents users with a digital product, such as a website or app, for five seconds before asking what they saw. Tools like Usability Hub and Maze provide a URL for the test that you can share with participants, similar to how you would distribute a survey. If users recall seeing the element in question, you know everything is good.

Did Users Understand It?

A slight variation of the test can also be used to answer the second question: did users understand it? Show the user your design for 5 seconds, then ask them to describe what they saw in their own words. If they accurately describe the concept, you can be sure of your approach.

Can People Use It?

When it comes to the “can people use it?” question, you have two options.

If you have a prototype, you can run unfacilitated usability testing with a tool like Maze:

  1. Define the task you need to see people complete;
  2. Provide Maze with the most direct route to complete the task;
  3. Give participants the URL Maze provides.

Maze will give you aggregated data on how long it took people to complete the task and the number of mistakes they made.

If you don’t have a prototype, the alternative is to do a first-click test:

  1. Show users a mockup of your website or app;
  2. Ask where they would click to complete a specific task.

According to a usability study by Bob Bailey and Cari Wolfson, if the first click is correct, users have an 87% chance of completing the action correctly, compared to just 46% if the first click is wrong. So, if people get their first-click correct, you can be reasonably confident they can successfully complete the task.

Usability Hub can help you run your first-click test. They will provide a heat map showing the aggregated results of where everyone clicked, so you don’t need to analyze the results manually. This allows you to get answers almost immediately.

Will People Like It?

The final question is, “Will people like it?” This is not easy to answer, as different stakeholders may have different opinions about what works.

To resolve this, I usually conduct a preference test or, ideally, a semantic differential survey.

First, I agree with stakeholders on the associations we want users to have with the design. These may include words like professional, friendly, inspiring, or serious.

In a semantic differential survey, users can then rate the design against those words. If the design scores well, we can be confident it will generate the desired response.

A Pragmatic Approach

I know this post will make user researchers uncomfortable, and I can fully understand why. The results you get back will be far from perfect and could possibly lead to false conclusions. However, it is better than the alternative. Resolving design decisions through internal discussion is always going to be inferior to getting user feedback.

This kind of lean user research can also be a great starting point. If you can add even some user research to the process, stakeholders will start to see its benefits, and it can lead to bigger things.

Some may choose to pick holes in your approach, suggesting that you aren’t testing with the right people or with a big enough audience. They are, of course, correct. However, this provides you with an opportunity to point out you would happily do more research if only the time and budget were made available!

Further Reading On SmashingMag

Compare The Best Landing Page Creation Tools

So much goes into an effective landing page. It takes practice, testing, analytics, design skills, keyword research, and so much more. 

Fortunately, there are plenty of landing page creation tools that take the guesswork out of building and optimizing your landing pages. This guide covers the best ones.

Landing Page Builders

These are typically websites or web-based services that let you build a landing page by using an HTML editor or drag-and-drop functionality. Some will give you a basic editor with different landing page templates to choose from.

Unbounce

Unbounce landing page builder splash page

Unbounce is one of the most well-known landing page builders simply because it was one of the first web-based services that allowed people to build and test landing pages without relying on the IT department.

Here’s the pricing breakdown:

  • Launch—$74/month billed annually or $99 billed monthly for sites getting up to 20,000 unique monthly visitors
  • Optimize—$109/month billed annually or $145 billed monthly for sites getting up to 30,000 unique monthly visitors
  • Accelerate—$180/month billed annually or $240 billed monthly for sites getting up to 50,000 unique monthly visitors
  • Concierge—$469/month billed annually or $625 billed monthly for sites getting more than 100,000 monthly visitors

Additionally, you can test as many landing pages as you want, and Unbounce offers a variety of templates for web-based, email, and social media landing pages.

Instapage

Instapage landing page creation tool  homepage

Instapage is a bit different from your typical landing page builder. It comes with a variety of templates for different uses (lead generation, click-through, and “coming soon” pages), but what sets it apart is that it learns from the visitors who come to your landing pages.

You can view real-time analytical data and easily determine the winners of your split tests while tracking a variety of conversion types, from button and link clicks to thank-you pages and shopping cart checkouts.

Instapage also integrates with a variety of marketing tools and platforms, including:

  • Google Analytics
  • Mouseflow
  • CrazyEgg
  • Mailchimp
  • Aweber
  • Constant Contact
  • Facebook
  • Google+
  • Twitter
  • Zoho
  • And more

A free option is available if you’d like to try it out, and a Starter package makes landing page creation and testing a bit easier on the wallet of startups and new entrepreneurs.

Real features like the aforementioned integrations start kicking in with the Professional package at $79/month, but if you’d like to get landing pages up and running quickly, it’s hard to beat the stylish templates that Instapage provides.

Launchrock

Launchrock landing page creation tool homepage

Launchrock is not so much a landing page builder as it is a social and list-building placeholder. Combining “coming soon” pages with list building capabilities, Launchrock also includes some interesting social features that encourage users to share the page with others.

For example, if you get X people to sign up, you’ll get Y. It also includes basic analytics and the ability to use your own domain name or a Launchrock-branded subdomain (yoursite.launchrock.com). You can customize the page via the built-in HTML/CSS editor if you know how to code.

Launchrock is free and requires only an email address to get started.

Landing Page Testers/Trackers

While many landing page builders also include testing and tracking, they usually do one or the other well, but not both.

Of course, when you’re just starting out, it’s a good idea to take advantage of free trials and see which service works best for your needs.

Here are a few of the most popular ones available for testing and tracking your campaigns:

Optimizely

Optimizely landing page creation tool homepage

Optimizely is often touted as a good entry-level product for when you’re just starting out and working toward upgrading to something bigger and better as your business grows.

But with prices starting at $17/month and a free 30-day trial period, it’s a powerful product in its own right.

There are some limitations with the lower level packages. For example, multivariate testing is not available at the Bronze or Silver levels. It only becomes a feature at the Gold level, which will set you back $359/month.

On the upside, Optimizely lets you conduct an unlimited number of tests and also allows for mobile testing and personalization.

You get an unlimited number of experiments and can edit them on the fly, but doing so can make it easy to lose track of which version of which page you were working on.

Its integration with Google Analytics can also leave something to be desired; for example, it’s not able to segment custom data (like PPC traffic) or use advanced analytics segments.

You can also tell Optimizely what you consider “goal” points on your website, ranging from email subscriptions to purchases and checkout, and it will track those items independently.

Overall, it does a great job with a simple and intuitive user interface and is ideal for those just starting to optimize their landing pages.

CrazyEgg

CrazyEgg landing page creation tool homepage.

CrazyEgg is the definitive heat map and visualization service to help you better understand how your website visitors are interacting with your landing pages.

Reports are available in “confetti” style, mouse click/movement tracking, and scrolling heat maps.

Together, these give you an all-in-one picture of where your visitors are engaging with your pages (and where you could improve that engagement).

CrazyEgg landing page creation tool confetti style report example.

An example of a CrazyEgg click heatmap. Warmer colors indicate more activity

Although CrazyEgg doesn’t consider itself a landing page testing and tracking solution, it does take you beyond the core information that Google Analytics gives you to show you actual user behavior on your landing pages.

Pricing starts at $9/month for up to 10,000 visitors with 10 active pages and daily reports. A 30-day free trial is also available.

Hubspot

Hubspot landing page creation tool example

More than a tracking/testing service, Hubspot’s landing pages offer extremely customizable elements that let you tailor each page to precisely match your customers’ needs.

This lets you devise alternative segments for each “persona” you’ve created — driving engagement and conversion rates even higher.

The packages are pricey ($200/month starting out) for first-time landing page optimizers, but larger companies and organizations will see the value built into the platform.

Beyond its smart segmenting, Hubspot also offers a drag and drop landing page builder and form builder. This is all in addition to its existing analytics, email marketing, SEO and other platforms.

Visual Website Optimizer

Visual Website Optimizer landing page creation tool example

If you’d like a more creative, hands-on approach to your landing pages, along with fill-in-the-blanks simplicity, Visual Website Optimizer is as good as it gets.

Where this package really shines, however, is through its multivariate testing. It also offers behavioral targeting and usability testing along with heat maps, so you can see precisely how your visitors are interacting with your landing pages, and make changes accordingly.

You can also use the built-in WYSIWYG (what you see is what you get) editor to make changes to your landing pages without any prior knowledge of HTML, CSS or other types of coding.

Results are reported in real-time and as with Hubspot, you can create landing pages for specific segments of customers.

Pricing for all of these features sits in the middle of the pack, with the lowest available package starting at $50/month. Still, it’s a good investment for an “all in one” service if you don’t need the advanced features or tracking that other products provide.

Ion Interactive

Ion Interactive landing page creation tool example.

Ion Interactive’s landing page testing solution could set you back several thousand dollars per month, but it’s one of the most feature-packed options available, letting you create multi-page microsites, multiple touchpoints of engagement, and completely scalable campaigns with a variety of dynamic, customizable options.

If you’d like to take the service for a test drive, you can have it “score” your page based on an in-house 13-point checklist. A free trial is also available, as is the opportunity to schedule a demo.

Of course, once you’ve decided on the best building, testing and tracking solution, there’s still work to be done.

Before you formally launch your new landing pages, it’s a good idea to get feedback and first impressions — not just from your marketing or design team, but from real, actual people who will be using your site for the first time.

Here are a few tools that can help you do just that.

Optimal Workshop


Optimal Workshop actually consists of three different tools. OptimalSort lets you see how users would sort your navigation and content, while Treejack lets you find areas that could lead to page abandonment when visitors can’t find what they’re looking for.

Chalkmark lets you get first impressions from users when uploading wireframes, screenshots or other “under construction” images.

Through these services, you can assign tasks to users to determine where they would go in order to complete them. You can also get basic heat maps to see how many users followed a certain route to complete the task.

You can buy any of the three services individually, or purchase the whole suite for $1,990/year. A free plan with limited functionality and number of participants is also available if you’d like to try before you buy.

Usabilla

Usabilla landing page tool homepage

Usabilla allows you to immediately capture user feedback on any device, including smartphones and tablets – a feature that sets it apart from most testing services.

Feedback is collected via a simple, fully customizable feedback button that encourages customers to help you improve your site by reporting bugs, asking about features, or just letting you know about the great shopping experience they had.

Usabilla also lets you conduct targeted surveys and exit surveys to determine why a customer may be leaving a page.

They also offer a service called Usabilla Survey, which is similar to other “first impression” design testing services and lets visitors give you feedback on everything from company names to wireframes and screenshots.

Pricing starts at $49/month and a free trial is available.

5 Second Test


Imagine you want visitors to determine the point of a certain page. What if they could only look at it for five seconds and then give you their opinion? Five Second Test makes this possible, and it’s incredibly quick and easy to set up.

Case in point — you can try a sample test to see what a typical user would see. In my case, I was asked my first impressions of an app named “WedSpot” and what I’d expect to find by using such an app.

It’s simple questions like these that can give you some invaluable insights – and all for just five seconds of your users’ time.

It’s free to conduct and participate in user tests through Five Second Test.

Other Helpful Tools

Beyond usability testing and user experience videos, there are a few other tools that your landing pages can benefit from:

Site Readability Test


Juicy Studio has released a readability test that uses three of the most common reading level algorithms to determine how easy or difficult it is to read the content on your site.

You’ll need to match the reading level to your intended audience, but these tests will give you some insight into simplifying your language and making your pages more readable for everyone.

You simply type in your URL and get your results in seconds. You can also compare your results to other typical readings including Mark Twain, TV Guide, the Bible and more.

Pingdom Website Speed Test


Page loading time is a huge factor in your website’s bounce rate and lack of conversions. Simply put, if your page loads too slowly, visitors won’t wait around for it to finish.

They’ll simply leave and potentially go to your competition. Using Pingdom’s website speed test, you can see how fast (or slow) your website is loading.

Beyond the speed of your website itself, the service will also identify the heaviest scripts, CSS, images, and other files that could be slowing down your pages.

You should note that testing is conducted from Amsterdam, the Netherlands, so how close or far your server is from there will also factor into the results.

It’s free to test your site on Pingdom.

Browser Shots


Although this is the last entry in our series of helpful tools, it is by no means any less important. Testing your landing pages in a multitude of browsers on a variety of operating systems is crucial to your pages’ overall success.

Fortunately, BrowserShots.org makes this process incredibly easy. You can test your pages on all current versions of the web’s most popular browsers, as well as older versions of those browsers.

It does take time for browser screenshots to be taken and uploaded for you to see the results. You can sign up for a paid account and see them faster, but for a free tool, it’s no problem to wait a little while and see just how accessible your page is to visitors on a variety of operating systems, browsers, and browser versions.

The Top Landing Page Creation Tools in Summary

The best landing page creation tools help you with keyword research, split testing, content creation, and everything else you need to drive conversions.

Remember, landing page creation is not a one-and-done process. So make sure you assess tools that will help you optimize your landing pages after you’ve created them.

Keys To An Accessibility Mindset

How many times have you heard this when asking about web accessibility? “It’s something we’d like to do more of, but we don’t have the time or know-how.”

From a broad perspective, web accessibility and its importance are understood. Most people will say it’s important to create a product that can be used by a wide array of people with an even wider range of needs and capabilities. However, that is most likely where the conversation ends. Building an accessible product requires commitment from every role at every step of the process. Time, priorities, and education for all involved so often get in the way.

Performing an accessibility audit can cost a lot of time and money, and acting on the results across design, development, and QA (Quality Assurance) can cost even more. Then there is the other heavy investment: education. For every role, the learning curve for accessibility can be steep.

There’s so much nuance and technical depth when learning about web accessibility. It’s easy to feel lost in the trees. Instead, this article will take a look at the forest as a whole and demonstrate three keys for approaching accessibility naturally.

The POUR Principles of Web Accessibility

It may sound too simple, but we can break web accessibility down into four core principles: Perceivable, Operable, Understandable, and Robust. These principles, known as POUR, are the perfect starting point for learning how to approach accessibility.

Perceivable

What does it mean for content to be perceivable?

Let’s say you’re experiencing this article by reading it. That would mean the content is perceivable to people who are sighted. Perhaps you’re listening to it instead. That would mean the content is perceivable to people who engage with content audibly.

The more perceivable your content is, the more ways people can engage with it.

Common examples of perceivable content would be:

  • Images with alternative descriptive text (see the markup sketch after this list),
  • Videos with captions and/or subtitles,
  • Indicating a state with more than just color.
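
In markup, the first two examples might look something like the following sketch (the file names and caption track are only placeholders):

<!-- An image described with alternative text -->
<img src="crosswalk-signal.jpg" alt="A crosswalk signal showing a green walking figure">

<!-- A video offering captions alongside the audio -->
<video controls>
  <source src="crosswalk-guide.mp4" type="video/mp4">
  <track kind="captions" src="crosswalk-guide.en.vtt" srclang="en" label="English">
</video>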

A terrific real-world example of perceivable content is a crosswalk. When it is not safe to cross the street, there is a red icon of a standing figure and a slow, repeating beep. Then, once the streetlights change and people can cross safely, the icon changes to a green figure walking, and the beeping speeds up. The crosswalk communicates with understandable icons, colors, and sound to create a comprehensive and safe experience.

Operable

Operable content determines whether a person can use a product or navigate a website.

It is common for the person developing a product to create one that works for themselves. If that person uses a mouse and clicks around the website, that’s often the first, and sometimes the only, experience they develop. However, the ways for operating a website extend far beyond a traditional mouse and keyboard.

Some important requirements for operable content are the following:

  • All functionality available by mouse must be available by the keyboard.
  • Visible and consistent keyboard focus for all interactive elements (see the sketch after this list).
  • Pages have clear titles and descriptive, sequential headings.
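
For the focus requirement, one possible approach is a single, clearly visible focus style shared by every interactive element (the colors here are just placeholders):

<style>
  /* One consistent, clearly visible focus indicator for every interactive element */
  a:focus-visible,
  button:focus-visible,
  input:focus-visible,
  select:focus-visible,
  textarea:focus-visible {
    outline: 3px solid #0b5394;
    outline-offset: 2px;
  }
</style>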

Understandable

What good is creating content if the people consuming it cannot understand it?

Understandable content is more than defining acronyms and terms. A product must be consistent and empathetic in both its design and content.

Ways to create an understandable experience would include:

  • Defining content language(s) so assistive technologies can interpret the content correctly (see the markup sketch after this list).
  • Navigations that are repeated across pages are in the same location.
  • Error messages are descriptive and, when possible, actionable.
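
For the first point, a minimal sketch of declaring the page language and an in-content language change (the French phrase is only an example):

<!-- The document language lets assistive technologies choose the right pronunciation rules -->
<html lang="en">
  <body>
    <!-- A language change within the content is marked up, too -->
    <p>As the saying goes, <span lang="fr">c’est la vie</span>.</p>
  </body>
</html>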

In Jenni Nadler’s article, “When Life Gives You Lemons, Write Better Error Messages”, she describes her team’s approach to error messaging at Wix. With clear language and an empathetic tone, they’ve created a standard in understandable messaging.

Robust

In a way, many of us are already familiar with creating robust content.

If you’ve ever had to use a compiler like Babel to transpile JavaScript for greater support, you’ve created more robust content. Now, JavaScript is just one piece of the front end, and that same broad, reliable approach should be applied to writing semantic HTML.

Ways to create robust markup include:

  • Validating the rendered HTML to ensure devices can reliably interpret it.
  • Using markup to assign names and roles to non-native elements (see the sketch after this list).
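
For the second point, a hedged sketch of giving a non-native element an accessible name and role; a native button remains the better choice wherever possible:

<!-- A custom element given a role, a name, and keyboard focus;
     scripting for Enter and Space key presses is still required -->
<div role="button" tabindex="0" aria-label="Close dialog">✕</div>

<!-- The native element gets the same semantics and behavior for free -->
<button type="button" aria-label="Close dialog">✕</button>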

The POUR principles of web accessibility lay a broad (if a bit abstract) foundation. Yet, it can still feel like a lot to consider when facing roadmaps with other priorities. This depth of information and considerations can be enough to turn some people away.

Web accessibility is not all or nothing.

Even small improvements can have a big impact on the accessibility of a product. In the same way software development has moved away from the waterfall approach, we can look at web accessibility with the same incremental mindset.

Even so, sometimes it’s easier to learn more about something you already know than to learn about something anew. At least, that’s what this entire article relies upon.

With slight adjustments to how we approach the design and development of a product, we can create one that more closely aligns with the POUR principles of accessibility but in a way that feels natural and intuitive to what we already know.

Keys To An Accessibility Mindset

There’s a lot to learn about web accessibility. While the POUR principles make the process more approachable, it can still feel like a lot. Instead, by applying these keys to our approach, we can dramatically improve the accessibility of a product and reduce the risk of exhaustive refactors in the future.

Markup Must Communicate As Clearly As The Design

When working from a design, it’s common to build what we see. However, visual design is only one part of creating perceivable content.

Let’s consider the navigation of a website. When a person is on a specific page, we highlight the corresponding link in the navigation with a different background color. Visually, this makes the link stand out. But what about other methods of perception?

The content becomes more perceivable when its markup communicates as clearly as its design.

When dealing with the navigation, what exactly are we communicating with the contrasting styles? We’re trying to say, “this is the page you’re on right now.” While this works visually, let’s look at how our markup can communicate just as clearly.

<a aria-current="page" href="/products">Products</a>

By setting aria-current="page" on the anchor of the current page, we communicate with markup the same information as the design. This makes the content perceivable to assistive technologies, such as screen readers.

In this demo, we’ll hear the difference perceivable markup can make.

Even though navigation items often look like buttons, we understand that they function as links or anchors instead. This is the perfect example of marking up an element based on its function and not its appearance.

When using an anchor tag, we receive several expected functional benefits by default. The anchor will support keyboard focus. Hovering or focusing on an anchor will reveal the URL to preview. Lastly, whether with a keyboard shortcut or through the context (right-click) menu, a link can be opened in a new window or tab.

If we marked up a navigation item like it appeared, as a button, we would lose the last two expected behaviors of anchor tags. When we break the expectations of an element, accessibility will suffer the most.

The following demo highlights the functional differences when using the a, button, and div elements as a link. By navigating the demo with our keyboard, we can see the differences between each variation.
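
In case the embedded demo isn’t available here, a minimal sketch of the three variations might look like the following (the /products URL and the inline scripting are placeholders for illustration):

<nav>
  <!-- Native anchor: focusable, previews its URL on hover/focus, and supports opening in a new tab -->
  <a href="/products">Products</a>

  <!-- Button: focusable and keyboard-operable, but loses the URL preview and new-tab behavior -->
  <button type="button" onclick="location.href='/products'">Products</button>

  <!-- Div: not focusable or announced as interactive without extra ARIA attributes and scripting -->
  <div onclick="location.href='/products'">Products</div>
</nav>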

Consider a mockup of a flight display where the choice between imperial and metric units is indicated only by color. Without first looking at the altitude and ground speed values, I couldn’t tell which system was active. Maybe the imperial option was active, since it was the same color as the data. But maybe the metric option was active, because it was a different color.

While it may take us a moment to figure out which option is active, it’s an unnecessary one caused by indicating a state with only color.

In the following mockup, we underline the active option and increase its font weight. With these details, it’s now easier to understand the active state of the screen.
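
Since the mockup itself isn’t included here, a rough sketch of that layered treatment in markup and styles might look like this (the class names and the aria-pressed touch are illustrative assumptions):

<style>
  .unit-toggle button {
    color: #555555;
  }
  /* The active unit is communicated with color, an underline, and a heavier font weight */
  .unit-toggle button[aria-pressed="true"] {
    color: #0b5394;
    text-decoration: underline;
    font-weight: 700;
  }
</style>

<div class="unit-toggle">
  <button type="button" aria-pressed="false">Imperial</button>
  <button type="button" aria-pressed="true">Metric</button>
</div>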

So much of creating perceivable content comes down to communicating in layers. When we write perceivable markup, we’re creating an extra layer of information. Designing is no different. If we indicate a state with only color, that’s one layer. When we add an underline and font weight, we add additional layers of communication.

People learn and experience in different ways. Consider a book that has an audio version and a movie adaptation. Some people will read the book. Others will listen to it. Others still will watch the movie. When we communicate in layers, more people benefit.

Review

Most people will agree that web accessibility is important. But they will also agree that it can be difficult. With so many combinations of hardware and software and so many nuances with each, accessibility can feel overwhelming.

It’s easy to become lost in the weeds of code samples and articles trying to help. One article may suggest an approach, while a second article suggests another. If we’re not able to test each scenario ourselves, it can often feel like guessing. Guessing can be disheartening, even discouraging. It can turn people away from accessibility.

Instead, we can have a dramatic impact on the accessibility of our work by not focusing on specific details but by adjusting how we approach a design from the start. One of the most challenging areas of accessibility is knowing when and where it’s needed. With the keys to an accessibility mindset, we can identify those areas and understand what they need. We may not know how to provide a perceivable or operable experience, but it’s easier to find the answer when you understand the question.

I should note, though, that applying these keys will not ensure your work is accessible. Will it make a positive impact? Yes. But accessibility extends far beyond design and development. For as long as a product is changing, a commitment to accessibility must remain at every step and in every role, from leadership on down.

Ensuring markup communicates as clearly as its design will help provide perceivable content. Writing markup based on an element’s function rather than its appearance will help make that content operable. If the functional markup cannot be styled, then return to the first key, and make it perceivable.

Remember, creating an accessible experience for some doesn’t take away from others.

If we think back to the crosswalk example, who are some people who benefit from their design? Of course, those who are blind, even partially, can benefit. But what about a person looking down at their phone? The audible cue can grab their attention to let them know when it’s safe to cross. I’ve benefited from crosswalks in this way. How about a parent using the lights to teach their child how to cross? Everybody can benefit from the accessible design of a crosswalk. Of course, if a person wants to cross when they feel comfortable, regardless of the state of the crosswalk, they can. The accessible design does not prevent that experience. It enables that experience for others.

Accessible design is good design, and it all starts with our mindset.

The Ultimate WordPress Local Development Cheatsheet

Want to set up a local WordPress development environment without thumbing through pages and pages of documentation? Our WordPress local development cheatsheet will help you get up and running quick smart!

In this ‘no-fluff’ practical guide, we’ll cover briefly what WordPress local development is and some of the key benefits of using it, and we’ll then get straight into how to set up a local environment, install WordPress on your computer, and test your website before going live.

This guide covers the following:

What Is WordPress Local Development?

WordPress local development allows you to create a development environment for building, working, and testing WordPress sites on your computer without affecting your live site.

The local development environment replicates the production server, making it possible to test different scenarios and resolve issues before pushing changes to the live site.

Benefits of Local Development

Some of the key benefits of WordPress local development include:

  • Safe Testing Environment: The local development environment provides a safe space to test new features, plugins, and themes without affecting your live site.
  • Speed, Performance, and Efficiency: A local development environment is typically faster and more responsive than a remote server. Because it runs on your computer, data is read and processed locally, and there is no network latency between your machine and the server.
  • Cost-Effective: Setting up a local development environment eliminates the need for expensive hosting services and reduces the costs associated with deploying changes to a live site. You only need a computer and a text editor to get started.
  • Improved Collaboration: Multiple developers can work on a single project simultaneously without interfering with each other’s work.
  • Offline Development: With a local development environment, you can develop your site even when you’re offline.
  • Improved Security: Got a “top secret” project you want to work on? Since a local development environment runs on your machine, it is easier to keep away from prying eyes than a remote server, with far less risk of unauthorized access or hacking.

If you’re just getting started as a WordPress developer, see our introduction to WordPress local development article. If you’re already a little more experienced, check out our article on ways to improve your WordPress development workflow in a local environment.

Setting Up Your Local Development Environment

Before you can set up a local WordPress development environment, there are some things you’ll need.

What You’ll Need

In addition to a computer with enough storage space and processing power to support your development work, here’s all you need to set up a local development environment:

Local Server Software

You will need to install a local server software to run your local development environment.

XAMPP, MAMP, and WAMP are three popular options. Each of these local server packages provides a complete development environment for web developers, with all the necessary components in a single package (such as the Apache web server, a MySQL database, and the PHP scripting language), a control panel to manage those components, and a tool to manage the database.

Each software package, however, also has its own unique features with key differences, so it’s important to choose one that meets your specific needs.

Let’s take a brief look at each:

XAMPP

XAMPP

XAMPP is a free, open-source, and easy-to-install web server package that provides a local development environment for web developers. The name stands for Cross-Platform (X), Apache, MariaDB, PHP, and Perl, its main components.

Some key features (and pros) of XAMPP:

  • Includes Apache web server, MariaDB database, and PHP and Perl scripting languages.
  • Supports multiple operating systems, including Windows, Mac, and Linux.
  • Easy-to-use control panel for managing web server and database components.
  • Option to install additional components such as phpMyAdmin for database management.

Cons:

  • Not as popular as MAMP or WAMP, so the community support may not be as strong.
  • More complex set-up compared to MAMP or WAMP, requiring more technical knowledge to install and configure components.

XAMPP is best for web developers who require a complete development environment with multiple components and are familiar with configuring and managing these components. It is also best for developers who work on multiple operating systems and need a cross-platform solution.

MAMP

MAMP

MAMP is a local server software that provides a development environment for web developers. It stands for Macintosh, Apache, MySQL, and PHP, the four main components of MAMP.

Some key features (and pros) of MAMP:

  • Includes Apache web server, MySQL database, and PHP scripting language.
  • Runs on macOS, with a Windows version also available.
  • Easy-to-use control panel for managing web server and database components.
  • Option to install additional components such as phpMyAdmin for database management.

Cons:

  • Can only use PHP scripting language.
  • Fewer components compared to XAMPP, which may limit some developers’ needs.

MAMP is best for web developers who work on the macOS operating system.

For more information on using this option, check out our tutorial on how to develop WordPress locally using MAMP.

WampServer

WampServer

WAMP is a local server software that provides a development environment for web developers. It stands for Windows, Apache, MySQL, and PHP, the four main components of WAMP.

Some key features (and pros) of WAMP:

  • Includes Apache web server, MySQL database, and PHP scripting language.
  • Supports Windows operating system.
  • Easy-to-use control panel for managing web server and database components.
  • Option to install additional components such as phpMyAdmin for database management.

Cons:

  • Only supports Windows, so developers using macOS or Linux may need to look elsewhere.
  • Fewer components compared to XAMPP, which may limit some developers’ needs.

WAMP is best for web developers who work on the Windows operating system and who require a complete development environment with basic components.

For more information about this option, check out our tutorial on how to develop WordPress locally using WAMP.

While XAMPP, MAMP, and WAMP are all excellent choices for web developers looking for a local development environment, there are other options available, including Local by Flywheel, DesktopServer, and (if you need to work on WordPress locally on more than one machine) even installing and running WordPress from a USB.

Text Editor

The other component you’ll need is a text editor for WordPress development specifically designed for working with programming languages such as PHP. A text editor is essential for editing code and making changes to your website.

Let’s look at a couple of popular options for text editors:

Sublime Text

Sublime Text

Sublime Text is a popular text editor that is widely used by developers for coding and scripting purposes. It offers a clean, fast and intuitive interface, making it easy to work with large codebases.

Some key features of Sublime Text:

  • Syntax highlighting and code completion for over 80 programming languages
  • Customizable color schemes, key bindings, and macros
  • Advanced searching and editing tools such as multiple selections, split editing, and column editing
  • Instantly switch between projects with a project-specific settings system

Sublime Text is a great tool for developers who work on projects that require writing code in HTML, CSS, and JavaScript. It offers easy-to-use syntax highlighting, code completion, and editing tools that make the coding process fast and efficient.

Visual Studio Code

Visual Studio Code

Visual Studio Code is a free, open-source code editor developed by Microsoft. It offers a range of features and tools to help developers create and manage large-scale projects.

Some key features of Visual Studio Code:

  • IntelliSense, a smart and advanced code completion and debugging tool
  • Built-in Git support and debugging
  • Supports multiple programming languages and has a large library of extensions
  • Customizable interface and workspace

For additional text editors, see our list of the best text editors for WordPress development.

Have you ticked all of the above requirements?

Computer meets required specs
Selected local server software
Selected text editor

Great! Then let’s move on to the next step…

Installing Local Server Software

For this example, we’ll install XAMPP on a Windows operating system. Use the same process described below to install your chosen local server software on your computer and follow the software package’s specific instructions:

  1. Download XAMPP: Go to the XAMPP official website and download the latest version of XAMPP for Windows.
  2. Install XAMPP: Run the downloaded installer file and follow the on-screen instructions to install XAMPP. By default, XAMPP will be installed in the C:\xampp directory.
  3. Start XAMPP: After installation, open the XAMPP Control Panel from the Start menu or desktop shortcut. Start the Apache and MySQL modules by clicking on the “Start” buttons next to each module.
  4. Verify installation: To verify that XAMPP is working correctly, open a web browser and navigate to http://localhost. This should display the XAMPP welcome page.
  5. Create a virtual host: To create a virtual host, follow the steps outlined below.

XAMPP should now be installed and configured on your machine. You’re ready to start developing and testing your websites locally.

Note: The process of installing XAMPP or other local server software, such as MAMP or WAMP, may vary slightly depending on the operating system being used. For Mac and Linux operating systems, you can follow the installation instructions provided on the XAMPP website.

See our other XAMPP-related tutorials for additional information on setting up XAMPP, upgrading XAMPP, troubleshooting XAMPP, and migrating WordPress from a XAMPP localhost to the web.

Setting Up a Virtual Host

Setting up a virtual host in a local development environment allows developers to run multiple websites on their local machine, each with its own unique URL. This provides a more realistic testing environment and makes it easier to switch between different projects.

Follow the step-by-step guide below to set up a virtual host in your local development environment and start testing your websites:

1. Open the Apache configuration file: Open the configuration file for your local server software. For this example, we’re using XAMPP, so open the Apache configuration file, typically located at /etc/httpd/conf/httpd.conf or C:\xampp\apache\conf\httpd.conf.

2. Enable virtual hosting: Locate the section labeled “# Virtual Hosts” and uncomment the following line by removing the hash symbol (#) at the beginning of the line: #Include conf/extra/httpd-vhosts.conf.

3. Configure the virtual host: Open the virtual host configuration file, typically located at /etc/httpd/conf/extra/httpd-vhosts.conf or C:\xampp\apache\conf\extra\httpd-vhosts.conf.

4. Add a new virtual host: Add a new virtual host by creating a new block of code with the following format:

<VirtualHost *:80>
    ServerName example.local
    DocumentRoot "/path/to/document/root"
    <Directory "/path/to/document/root">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

Do this:

  • Replace “example.local” with the desired URL for the virtual host.
  • Replace “/path/to/document/root” with the full path to the document root directory for the virtual host.

5. Update the hosts file: The hosts file maps domain names to IP addresses. To make the virtual host accessible via the URL you specified, you’ll need to add an entry to the hosts file. The hosts file is typically located at /etc/hosts or C:\Windows\System32\drivers\etc\hosts. Add a new line with the following format: 127.0.0.1 example.local. Replace “example.local” with the URL specified in the virtual host configuration. Save the changes to the configuration file.

6. Restart Apache: Restart the Apache local web server to apply the changes.

7. Test the virtual host: Test your virtual host by visiting the URL in a web browser. The browser should display the content of the document root directory for the virtual host.

Creating a Database for Your Local WordPress Installation

The next step before setting up a WordPress project locally is to create a database for your local development environment.

Follow these step-by-step instructions to create a database in XAMPP:

1. Open the XAMPP Control Panel: Open the XAMPP Control Panel from the Start menu or desktop shortcut. Make sure the Apache and MySQL modules are running.

2. Access phpMyAdmin: To access phpMyAdmin, open a web browser and navigate to http://localhost/phpmyadmin. This will open the phpMyAdmin interface in your browser.

3. Create a new database: In the phpMyAdmin interface, click on the “Databases” tab. In the “Create database” section, enter a name for your new database and select the “utf8mb4_general_ci” collation. Then, click on the “Create” button.

4. Create a new user: To create a new user for the database, click on the “Users” tab and then the “Add user” button. In the “Add user” form, enter a username and password for the new user, and select “Local” as the host. Make sure to grant all privileges to the user by checking the “Grant all privileges on database” checkbox. Finally, click on the “Go” button.

5. Save your details: Write down or save your database name, username and password. You will need these to connect the database to WordPress later.

After completing the above steps, you will have successfully created a database for your local WordPress installation and local development environment.

You can now use this database to store and manage your data as you develop and test your WordPress site locally.

Have you completed all of the above steps?

Installed local server software
Set up virtual host
Created database

Great! Then let’s move on to the next step…

Installing WordPress Locally

Now that we have prepared our local environment, the next step is to download, install, and configure WordPress.

Downloading and Installing WordPress on Local Server

Follow the steps below to complete this process:

  1. Visit the WordPress website: Go to the official WordPress.org website and click on the “Download WordPress” button to download the latest version of WordPress.
  2. Extract the archive: The WordPress download will be a compressed ZIP file. Extract the contents of the archive to a directory on your computer.
  3. Move the extracted files to your local server: Move the contents of the extracted directory to the root directory of your local server. If you’re using XAMPP, for example, this is typically C:\xampp\htdocs on Windows or /Applications/XAMPP/htdocs on macOS.
  4. Create a database: (Note: if you have been following along, this step should already be done.) Before installing WordPress, you’ll need to create a database. You can do this using a tool like phpMyAdmin, which is included with most local server software like XAMPP and MAMP.
  5. Start the installation: Open your web browser and navigate to http://localhost/wordpress (or the equivalent URL for your local server). This will start the WordPress installation process.
  6. Choose the language: On the first screen, select your preferred language and click the “Continue” button.
  7. Fill in the database information: On the next screen, fill in the database information that you created in step 4. This includes the database name, database username, and database password.
  8. Fill in the site information: On the next screen, fill in the information for your local WordPress site. This includes the site title, username, password, and email address.
  9. Run the installation: Once you’ve filled in all the information, click the “Install WordPress” button to run the installation.
  10. Log in to your site: After the installation is complete, log in to your local WordPress site using the username and password you created in step 8 to start customizing and developing your local site.

You have now successfully downloaded and installed WordPress.

You can now start customizing and developing your site locally, with all the benefits of a local development environment, before deploying your site to a live server.

Configuring wp-config.php File

The wp-config.php file is a crucial component in the setup of a local WordPress installation and local development environment. This file contains configuration settings that control how WordPress interacts with your database and other important settings.

If you have followed the installation instructions above, your database credentials will be automatically added to the wp-config.php file.

If, for any reason, you need to manually configure the wp-config.php file, follow the instructions below:

1. Create a wp-config.php file: If your local WordPress installation doesn’t already have a wp-config.php file, you can create one by copying the wp-config-sample.php file and renaming it to wp-config.php.

2. Update database credentials: Open the wp-config.php file and update the following lines with the appropriate information:

define( 'DB_NAME', 'database_name' );
define( 'DB_USER', 'database_user' );
define( 'DB_PASSWORD', 'database_password' );
define( 'DB_HOST', 'localhost' );

Replace database_name, database_user, and database_password with the values you used when creating the database and user in a previous step.

3. Set the WordPress security keys: WordPress security keys add an extra layer of security to your site by encrypting information stored in cookies. You can generate a set of security keys at the official WordPress site. Copy the generated keys and paste them into your wp-config.php file, replacing the placeholder keys that are already there.
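
For reference, the keys in wp-config.php take the following form; the values shown are the sample file’s placeholders and should be replaced with the unique strings from the generator:

// Replace each placeholder with the long, random values from the WordPress secret-key generator.
define( 'AUTH_KEY',         'put your unique phrase here' );
define( 'SECURE_AUTH_KEY',  'put your unique phrase here' );
define( 'LOGGED_IN_KEY',    'put your unique phrase here' );
define( 'NONCE_KEY',        'put your unique phrase here' );
define( 'AUTH_SALT',        'put your unique phrase here' );
define( 'SECURE_AUTH_SALT', 'put your unique phrase here' );
define( 'LOGGED_IN_SALT',   'put your unique phrase here' );
define( 'NONCE_SALT',       'put your unique phrase here' );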

4. Enable debugging: For local development, it’s useful to enable debugging in WordPress. This will provide more detailed error messages and warnings that can help you troubleshoot issues with your site. To enable debugging, add the following line to your wp-config.php file:

define( 'WP_DEBUG', true );

5. Save the changes: Once you have made the changes to the wp-config.php file, save the file and close it.

Successfully configuring the wp-config.php file will ensure that your locally installed WordPress site is able to connect to the database, is secure, and provides helpful debugging information as you develop and test your site locally.

Importing a Live WordPress Site to Local Environment

Follow the steps below if you need to import a live WordPress site into your local environment:

Exporting the Live Site’s Database

To export the live site’s database, you’ll need to have access to the live site’s server.

Here are the steps to export the live site’s database (note: different server environments will perform this differently, but most should follow a similar process):

  1. Log into your live server’s control panel.
  2. Access the database: The first step is to access the database of the live site. You can do this using a tool like phpMyAdmin, which is often provided by your web hosting provider. Look for a section called “Databases” and click on “phpMyAdmin.”
  3. Select the database: Once you’ve logged into phpMyAdmin, select the database for your live site from the left-side panel.
  4. Export the database: Click on the “Export” button to start the export process.
  5. Choose the export format: On the export screen, choose the “Quick” export method and the “SQL” format. (If you want to confirm that both the “Structure” and “Data” options are included, switch to the “Custom” export method.)
  6. Download the export file: Click the “Go” button to download the export file to your computer.

Importing the Database to the Local Server

To import the live site’s database to your local server, make sure your chosen local server software is already installed on your computer.

Here are the steps to import the live site’s database to your local server:

  1. Open phpMyAdmin in your local server software: Log into phpMyAdmin for your local server and select the database you created for your local WordPress installation.
  2. Import the database: Click on the “Import” button to import the data from the export file you just downloaded.
  3. Select the import file: On the import screen, click on the “Choose File” button, select the export file you just downloaded, and click the “Go” button to start the import process.

Replacing URLs in the Database

After importing the live site’s database, you will need to replace the URLs in the database to match your local development environment.

Here are the steps to replace URLs in the database:

1. Open phpMyAdmin in your local server software.
2. Select the imported database from the left-side panel.
3. Click on the “SQL” tab.
4. Enter the following query in the text area:

UPDATE wp_options SET option_value = replace(option_value, 'http://www.livesite.com', 'http://local.livesite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
UPDATE wp_posts SET guid = replace(guid, 'http://www.livesite.com','http://local.livesite.com');
UPDATE wp_posts SET post_content = replace(post_content, 'http://www.livesite.com', 'http://local.livesite.com');

5. Replace “http://www.livesite.com” with the URL of your live site, and replace “http://local.livesite.com” with the URL of your local development environment.

6. Click on the “Go” button to execute the query.

Uploading the Live Site’s Files to the Local Environment

To upload the live site’s files to the local environment, you will need to have FTP access to your live site’s server.

Follow the steps below to upload the live site’s files to your local environment:

  1. Connect to your live site’s server using an FTP client such as FileZilla.
  2. Navigate to the root directory of your live site on the server.
  3. Download all the files to your local computer.
  4. Place the downloaded files in the root directory of your local development environment, which is usually located in the “htdocs” or “www” folder in XAMPP or other local server software.

Notes:

  1. If you already have a WordPress installation, the above folder won’t be empty, and you will be prompted to replace existing files and directories. Replace all files except the wp-config.php file to keep your existing configuration, including the connection to the database you already populated with the live site’s data.
  2. Before uploading the live site’s files to the local environment, you may need to change the file permissions to make the files writable by your local server software.
  3. Also, make sure you have a tested backup of your local WordPress site before making any changes.

That’s it! You have now successfully imported your live site into your local WordPress installation and local development environment.

Developing and Testing on Local WordPress Site

You’re finally ready to develop and test your site locally using the same data as your live site, giving you a true-to-life environment for testing and development.

Let’s go through the process:

Making Changes and Testing

  1. Log into the local WordPress site: Open your local WordPress site in your web browser and log in to the WordPress dashboard using your administrator credentials.
  2. Make changes to the site: You can make changes to your local WordPress site by editing themes, plugins, or custom code. Simply access these elements from the WordPress dashboard.
  3. Test changes: After making changes to your local WordPress site, it’s important to test the changes to make sure they work as expected. You can test changes by visiting the front-end of your site and checking that the changes have taken effect.

Debugging

  1. Use the Debugging mode: WordPress has a built-in debugging mode that makes it easier to identify and resolve issues on your site. To enable the debugging mode, you need to add the following code to your wp-config.php file: define( 'WP_DEBUG', true );.
  2. Check the error logs: If you’re having issues with your local WordPress site, check the error logs for error or warning messages that can help you identify the issue. With logging enabled, WordPress writes these messages to a debug log file in the wp-content directory (see the sketch after this list).
  3. Use debugging tools: There are a number of debugging tools and plugins available for WordPress that can help you identify and resolve issues on your site. For example, the Query Monitor plugin provides detailed information about database queries, plugin usage, and more. See this tutorial for help with debugging WordPress: Debugging WordPress: How To Use WP_Debug
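
As a rough example, a common wp-config.php debugging setup writes errors to a log file instead of displaying them on screen (WP_DEBUG_LOG and WP_DEBUG_DISPLAY are standard WordPress constants):

// Enable debugging, write errors to wp-content/debug.log, and keep them off the screen.
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
define( 'WP_DEBUG_DISPLAY', false );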

Testing Different Plugins and Themes

Installing, activating, and testing plugins and themes on a local WordPress site works in exactly the same way as it does on any other regular WordPress site. So, make sure to do the following while in testing mode:

  1. Install plugins: Install plugins on your local WordPress site to add new features or functionality to your site. To install a plugin, log in to the WordPress dashboard, go to the Plugins section, and click on the Add New button.
  2. Activate plugins: Activate the plugin you’re testing after installing it to use it on your site. To activate a plugin, go to the Plugins section of the WordPress dashboard and click on the Activate button next to the plugin you want to use.
  3. Test plugins: After activating a plugin, it’s important to test the plugin to make sure it’s working as expected. Test plugins by visiting the front-end of your site and checking that the plugin has taken effect.
  4. Install themes: Install themes on your local WordPress site to change the appearance of your site. To install a theme, log in to the WordPress dashboard, go to the Appearance section, and click on the Themes button.
  5. Activate themes: Activate the theme after installing it to change your site’s appearance. To activate a theme, go to the Appearance section of the WordPress dashboard and click on the Activate button next to the theme you want to use.
  6. Test themes: After activating a theme, it’s important to test the theme to make sure it’s working as expected. Test themes by visiting the front-end of your site and checking that the theme has taken effect.

Have you made all the changes you need, debugged issues, and tested different plugins and themes on your local site?

Great! Now you’re ready to make your local WordPress site live.

Deploying Local WordPress Site to Live Server

The final step in this process is to export all of your local WordPress files and database to your live hosting environment and make sure that all of your site’s changes, configurations, and URLs are working on your live site.

Exporting the Local Site’s Database

Follow the steps below to export your local WordPress site to your live server:

  1. Log in to the local site’s database using phpMyAdmin.
  2. Select the database you want to export.
  3. Go to the “Export” tab.
  4. Choose the “Quick” export method.
  5. Select the “SQL” format.
  6. Click “Go” to download the SQL file to your computer.

Importing the Database to the Live Server

Follow the steps below to import your local WordPress database’s export file into your live site:

  1. Log in to the live server’s database using phpMyAdmin.
  2. Create a new database for the live site.
  3. Go to the new database and select the “Import” tab.
  4. Choose the exported SQL file from your local site.
  5. Click “Go” to import the database.

Now that you have migrated the database over from your local site to your live site, let’s do the same for your site’s files.

Uploading the Local Site’s Files to the Live Server

Follow the steps below to upload your local WordPress site’s files into your live site:

  1. Prepare the files: Before uploading the local site’s files to the server, it’s a good idea to review and clean up the files. This may include removing any unnecessary files, such as backups or test files, to minimize the amount of data being uploaded.
  2. Connect to the server: You can connect to the server using a variety of methods, such as FTP or SFTP. You will need to use a client software, such as FileZilla, to connect to the server. You will need to provide your server host, username, and password to connect.
  3. Upload the files: Once you are connected to the server, you can upload the local site’s files to the server. You can upload the files in a number of ways, including uploading individual files or uploading the entire local site folder. Navigate to the root directory of the live site on the server. Upload all the local site’s files to the live site’s directory on the server, and replace the existing files if prompted.
  4. Update the database information: After uploading the files to the server, you will need to update the database information in the wp-config.php file to reflect the live site’s database information. Open the wp-config.php file in a text editor and update the database name, username, and password to match the live database.
  5. Update URLs in the database: See the section below.
  6. Test the site: After uploading the local site’s files to the server, it’s a good idea to test the site to make sure everything is working correctly. This may involve testing the site’s functionality, links, and images to make sure they are working as expected.

Updating URLs in the Database

You can update the URLs in your database using a text editor or by working directly in your database (make sure your database is fully backed up before making changes).

Updating URLs Using a Text Editor

Follow the steps below to update the URLs in your database using a text editor.

  1. Export the database: Before updating the URLs in the database, you will need to export the database. Use your database management tool (e.g. phpMyAdmin).
  2. Find and Replace the URLs: Once you have exported the database, find and replace the URLs in the exported file using a text editor such as Sublime Text or Visual Studio Code. Replace the URLs carefully and thoroughly, including URLs in serialized data; note that if the old and new URLs differ in length, a plain find-and-replace can corrupt serialized values, so a serialization-aware search-and-replace tool is safer.
  3. Import the database: After updating the URLs, import the database back into the live site’s database using a database management tool, such as phpMyAdmin.
  4. Test the site: After importing the updated database, it’s a good idea to test the site to make sure everything is working correctly. This may involve testing the site’s functionality, links, and images to make sure they are working as expected.

Updating URLs Directly in the Database

Follow the steps below to update the URLs directly in your database:

1. Log in to the live site’s database using phpMyAdmin.
2. Select the live site’s database.
3. Go to the “SQL” tab.
4. Run the following SQL query to update the URLs:

UPDATE wp_options SET option_value = replace(option_value, 'http://old-url', 'http://new-url') WHERE option_name = 'home' OR option_name = 'siteurl';
UPDATE wp_posts SET guid = replace(guid, 'http://old-url','http://new-url');
UPDATE wp_posts SET post_content = replace(post_content, 'http://old-url', 'http://new-url');

Replace “old-url” with the URL of the local site and “new-url” with the URL of the live site.

5. Click “Go” to run the query.
6. This will update all references to the local site’s URL with the live site’s URL in the database, ensuring that all links and images on the live site work correctly.

If you have followed the above steps correctly, the URLs in your database should have successfully updated. After these steps, your local WordPress site should now be fully functional on the live server. Make sure to thoroughly test the live site to ensure that all features are working correctly, and make any necessary adjustments to ensure a seamless transition from the local development environment to the live server.

Local Development vs Webhost Staging Environment

While WordPress local development provides a safe and efficient environment to build, edit, and test WordPress websites, you may decide to work in a webhost staging environment instead (here are some good reasons why you may not want to develop WordPress locally).

Both local development environments and webhost staging environments, however, have their pros and cons.

Here is a brief overview of the pros and cons of using a WordPress local development versus a webhost staging environment:

Pros of Local Development Environment

  • Easy to Use: Local development environments are easy to use, even for beginner developers.
  • Flexibility: You have complete control over your local development environment, so you can configure it however you like.
  • Test Any Changes: With a local development environment, you can test any changes you make to your site without affecting the live version.

Cons of Local Development Environment

  • Not a Live Environment: A local development environment is not a live environment, so you cannot test your site with live data.
  • Limited Resources: Your local machine may have limited resources, such as memory and processing power, which can affect your site’s performance.
  • Not a True Representation: A local development environment may not accurately represent a live server environment, so testing may not be 100% accurate.

Pros of Webhost Staging Environment

  • Live Environment: A webhost staging environment is a live environment, so you can test your site with live data.
  • More Accurate Testing: A webhost staging environment is a more accurate representation of a live server environment, so testing is more reliable.
  • More Resources: A webhost staging environment typically has more resources available than a local development environment, so your site’s performance will be better.

Cons of Webhost Staging Environment

  • Cost: Setting up a webhost staging environment can be expensive, as you have to pay for hosting and a domain name.
  • Not as Fast: A webhost staging environment is not as fast as a local development environment because it runs on a remote server.

For smaller projects, a local development environment is a great option because it is free and easy to use. For larger projects, however, a webhost staging environment may be a better option because it is a live environment and provides more accurate testing.

Ultimately, the choice between these two methods will depend on your individual needs, preferences, and hosting options.

Note: We recommend avoiding shared hosting and hosting basic WordPress sites on our Quantum plan instead, but if you have reasons for choosing shared hosting, then check out our article on how to run WordPress local development on shared hosting.

All WPMU DEV hosting plans (except for Quantum) include a staging environment. Refer to our staging documentation for more details on the benefits of using a staging environment to develop and test WordPress sites.

How To Scan a URL for Malicious Content and Threats in Java

At this point, we’ve all heard the horror stories about clicking on malicious links, and if we’re unlucky enough, perhaps we’ve been the subject of one of those stories.  

Here’s one we’ll probably all recognize: an unsuspecting employee receives an email from a seemingly trustworthy source, and this email claims there’s been an attempt to breach one of their most important online accounts. The employee, feeling an immediate sense of dread, clicks on this link instinctively, hoping to salvage the situation before management becomes aware. When they follow this link, they’re confronted with a login interface they’re accustomed to seeing – or so they believe. Entering their email and password is second nature: they input this information rapidly and click “enter” without much thought.

How to Remove the Powered by WordPress Footer Links

Do you want to remove the ‘powered by WordPress’ footer links on your site?

By default, most WordPress themes have a disclaimer in the footer, but this can make your site look unprofessional. It also leaves less space for your own links, copyright notice, and other content.

In this article, we will show you how to remove the powered by WordPress footer links.

How to remove the powered by WordPress footer links

Why Remove the WordPress Footer Credits?

The default WordPress themes use the footer area to show a ‘Proudly powered by WordPress’ disclaimer, which links to the official WordPress.org website.

The Powered by WordPress disclaimer

Many theme developers take this further and add their own credits to the footer.

In the following image, you can see the disclaimer added by the Astra WordPress Theme.

The Astra footer disclaimer

While great for the software developers, this ‘Powered by….’ footer can make your site seem less professional, especially if you’re running a business website.

It also lets hackers know that you’re using WordPress, which could help them break into your site.

For example, if you’re not using a custom login URL, then hackers can simply add /wp-admin to your site’s address and get to your login page.

This disclaimer also links to an external site, so it encourages people to leave your website. This can have a negative impact on your pageviews and bounce rate.

Is it legal to remove WordPress footer credit links?

It is perfectly legal to remove the footer credits link on your site because WordPress is free, and it is released under the GPL license.

Basically, this license gives you the freedom to use, modify, and even distribute WordPress to other people.

Any WordPress plugin or theme that you download from the official WordPress directory is released under the same GPL license. In fact, even most commercial plugins and themes are released under GPL.

This means you’re free to customize WordPress in any way you want, including removing the footer credits from your business website, online store, or blog.

With that in mind, let’s see how you can remove the powered by WordPress footer links.


Method 1. Removing the ‘Powered by’ Link Using the Theme Settings

Most good theme authors know that users want to be able to edit the footer and remove the credit links, so many include it in their theme settings.

To see whether your theme has this option, go to Appearance » Customize in your WordPress admin dashboard.

Launching the WordPress Customizer

You can now look for any settings that let you customize your site’s footer, and then click on that option.

For example, the Astra theme has a section called ‘Footer Builder.’

Customizing the Astra theme disclaimer

If you’re using this theme, then simply click on the ‘Footer’ section and select ‘Copyright.’

Doing so will open a small editor where you can change the footer text, or even delete it completely.

How to remove the 'powered by WordPress' disclaimer

No matter how you remove the footer disclaimer, don’t forget to click on ‘Publish’ to make the change live on your site.

Method 2. Removing the ‘Powered by’ Disclaimer Using the Full Site Editor (FSE)

If you’re using a block theme, then you can remove the footer disclaimer using Full Site Editing (FSE) and the block editor.

This is a quick and easy way to remove the ‘Powered by’ credit across your entire site, although it won’t work with all themes.

To launch the editor, go to Appearance » Editor.

How to launch the FSE

Then, scroll to your website’s footer and click to select the ‘Powered by’ disclaimer.

You can now replace it with your own content, or you can even delete the disclaimer completely.

Editing the 'Proudly powered by WordPress' credit using the full site editor

When you’re happy with how the footer looks, simply click on ‘Save.’ Now if you visit your site, you’ll see the change live.

Method 3. How To Remove the ‘Powered by’ Disclaimer Using a Page Builder

Many WordPress websites use the footer to communicate important information, such as their email address or phone number. In fact, visitors might scroll to the bottom of your site looking specifically for this content.

With that in mind, you may want to go one step further and replace the ‘Powered by’ text with a custom footer. This footer could contain links to your social media profiles, links to your affiliate partners, a list of your products, or other important information and links.

You can see the WPBeginner footer in the following image:

An example of a WordPress footer

The best way to create a custom footer is by using SeedProd. It is the best page builder plugin and comes with over 180 professionally-designed templates, sections, and blocks that can help you customize every part of your WordPress blog or website.

It also has settings that allow you to create a global footer, sidebar, header, and more.

First, you need to install and activate SeedProd. For more details, see our step-by-step guide on how to install a WordPress plugin.

Note: There’s also a free version of SeedProd that allows you to create all kinds of pages using the drag-and-drop editor. However, we’ll be using the premium version of SeedProd since it comes with the advanced Theme Builder.

After activating the plugin, SeedProd will ask for your license key.

SeedProd license key

You can find this information under your account on the SeedProd website. After entering the key, click on the ‘Verify Key’ button.

Once you’ve done that, go to SeedProd » Theme Builder. Here, click on the ‘Add New Theme Template’ button.

The SeedProd theme builder

In the popup, type in a name for the new theme template.

Once you’ve done that, open the ‘Type’ dropdown and choose ‘Footer.’

Creating a custom footer with SeedProd

SeedProd will show the new footer template across your entire site by default. However, you can limit it to specific pages or posts using the ‘Conditions’ settings.

For example, you may want to exclude the new footer from your landing pages, so it doesn’t distract from your main call to action.

When you’re happy with the information you’ve entered, click on ‘Save.’

This will load the SeedProd page builder interface.

At first, your template will show a blank screen on the right and your settings on the left. To start, click on the ‘Add Columns’ icon.

The SeedProd theme builder editor

You can now choose the layout that you want to use for your footer. This allows you to organize your content into different columns.

You can use any layout you want, but for this guide, we’re using a three-column layout.

Choosing a layout for the WordPress footer

Next, you can edit the footer’s background so that it matches your WordPress theme, company branding, or logo.

To change the background color, simply click on the section next to ‘Background Color’ and then use the controls to choose a new color.

Changing the background color of a WordPress footer

Another option is to upload a background image.

To do this, either click on ‘Use Your Own Image’ and then choose an image from the WordPress media library, or click on ‘Use a stock image.’

Adding an image to a custom WordPress footer

When you’re happy with the background, it’s time to add some content to the footer.

Simply drag any block from the left-hand menu and drop it onto your footer.

Adding blocks to the WordPress footer

After adding a block, click to select that block in the main editor.

The left-hand menu will now show all of the settings for customizing the block.

The SeedProd advanced theme builder

Simply keep repeating these steps to add more blocks to your footer.

You can also change where each block appears by dragging them around your layout.

A custom footer, created using the SeedProd theme builder

When you’re happy with your design, click on the ‘Save’ button.

Then, you can select ‘Publish’ to complete your design.

Publishing the SeedProd template part

For your new footer to show up on your website, you’ll need to finish building your WordPress theme with SeedProd.

After building your theme, go to SeedProd » Theme Builder. Then, click on the ‘Enable SeedProd Theme’ switch.

Now, if you visit your website you’ll see the new footer live.

How to enable a custom WordPress theme

For a step-by-step guide, please see our guide on how to create a custom WordPress theme.

Method 4. Removing the WordPress Disclaimer Using Code

If you can’t see any way to remove or modify the footer credits in the WordPress customizer, then another option is to edit the footer.php code.

This isn’t the most beginner-friendly method, but it will let you remove the credit from any WordPress theme.

Before making changes to your website’s code, we recommend creating a backup so you can restore your site in case anything goes wrong.

Keep in mind that if you edit your WordPress theme files directly, then those changes will disappear when you update the theme. With that being said, we recommend creating a child theme as this allows you to update your WordPress theme without losing customization.

First, you need to connect to your WordPress site using an FTP client such as FileZilla, or you can use a file manager provided by your WordPress hosting company. 

If this is your first time using FTP, then you can see our complete guide on how to connect to your site using FTP

Once you’ve connected to your site, go to /wp-content/themes/ and then open the folder for your current theme or child theme.

The FileZilla FTP client

Inside this folder, find the footer.php file and open it in a text editor such as Notepad.

In the text editor, look for a section of code that includes the ‘powered by’ text. For example, in the Twenty Twenty-One theme for WordPress, the code looks like this:

<div class="powered-by">
				<?php
				printf(
					/* translators: %s: WordPress. */
					esc_html__( 'Proudly powered by %s.', 'twentytwentyone' ),
					'<a href="' . esc_attr__( 'https://wordpress.org/', 'twentytwentyone' ) . '">WordPress</a>'
				);
				?>
			</div><!-- .powered-by -->

You can either delete this code entirely or customize it to suit your needs. For example, you may want to replace the ‘Proudly powered…’ disclaimer with your own copyright notice.

A custom disclaimer, created using FSE
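
For instance, here is a minimal sketch of a replacement notice, assuming you simply want a copyright line with the current year; the site name is a placeholder.

<div class="powered-by">
	<?php
	/* A custom copyright notice in place of the 'Proudly powered by' credit. */
	printf(
		'&copy; %s My Site Name. All rights reserved.',
		esc_html( date( 'Y' ) )
	);
	?>
</div><!-- .powered-by -->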

After making your changes, save the file and upload it to your server. If you check your site, then the footer credit will have disappeared.

Warning! Avoid the CSS Method at All Costs!

Some WordPress tutorial sites may show you a CSS method that uses display: none to hide the footer credit links.

While it looks simple, it’s very bad for your WordPress SEO.

Many spammers use this exact technique to hide links from visitors while still showing them to Google, in the hopes of getting higher rankings.

If you do hide the footer credit with CSS, then Google may flag you as a spammer and your site will lose search engine rankings. In the worst-case scenario, Google may even delete you from their index so you never appear in search results.

Instead, we strongly recommend using one of the four methods we showed above. If you can’t use any of these methods, then another option is hiring a WordPress developer to remove the footer credit for you, or you might change your WordPress theme.

We hope this article helped you remove the powered by WordPress footer links. You may also want to check out our expert pick of the best contact form plugins and proven ways to make money online blogging with WordPress.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


Virtual Meetings Best Practices Starter Guide: Learn the Basics

For better or worse, it’s safe to say that virtual meetings have a special place in all of our hearts.

Think about it: Where would we all end up if we never had to ask, “can you hear me okay?” again?

Virtual meetings increase employees’ productivity, boost efficiency, and bring new meaning to collaboration. And virtual call software tools like Zoom and Microsoft Teams have empowered companies to do business on a global scale.

But virtual meetings can quickly get messy and disorganized without the proper guidelines, processes, and etiquette. That’s why it’s essential to know the best practices when having or attending virtual meetings.

How Virtual Meetings Work

Virtual meetings allow people worldwide to connect and collaborate, even when they’re miles apart.

A virtual meeting uses video conferencing technology, which connects two or more participants through a secure connection over the internet. This allows everyone in the meeting to talk to each other via audio and see each other via video.

In a virtual meeting, participants use their own computers or phones to join the call. The video conference software will require you to download an application, connect your device’s microphone and camera, and enter a specific room or URL address.

A few other features of video conferencing software include:

  • Screen Sharing: Allows everyone in the meeting to see what’s on one person’s screen. It is best for presentations, slide decks, tutorials, and employee onboarding.
  • Whiteboard: Enables users to sketch, draw, and collaborate on an online whiteboard. This feature is great for brainstorming ideas, problem-solving, and other creative tasks.
  • Chatroom: This lets you chat with one or more people in a virtual meeting through an internal messaging system. This is handy if someone needs to ask a question or discuss something privately.
  • Breakout Rooms: For private collaboration, this feature allows you to divide participants into separate rooms for private conversations.
  • Recording Capabilities: Many virtual meeting software applications let you save and share recordings of the entire meeting or specific sections. This can help you review important discussions and decisions.

Since so many different functionalities are involved, it’s essential to be aware of the best practices when having or attending virtual meetings.

1. Establish the purpose and agenda prior to the meeting

As with any meeting, virtual meetings should have a clear purpose and agenda before they start. Knowing the expected goals and content of the meeting can help participants stay focused, on-task, and organized during their call.

Before each meeting, make sure to develop a timeline that outlines the points you plan to cover. You could even send a message to the attendees beforehand in order to ensure everyone is aware of the meeting’s objective and any tasks they need to complete prior to the call.

2. Test your technology before the meeting

Regardless of what you use for video conferencing and collaboration, it’s always a good idea to give your technology a test run an hour or two before the meeting to make sure that everything is working properly. Even if you spend most of your day in and out of meetings, a few minutes of downtime before the call can ensure a smooth and successful meeting.

A few common problems that can interrupt or compromise your meeting include:

  • Failed (or slowed) internet connection
  • Audio feedback loops
  • Incorrect settings on your microphone or camera
  • Lack of video quality
  • Unexpected software updates

Checking all the necessary components before your meeting can help you avoid any technical issues.

3. Avoid scheduling issues

Almost 47% of global workers are freelance, and many domestic companies leverage global contractors as a way of saving money. In many cases, web design, SEO, content marketing, advertising, and other areas of expertise can be outsourced.

But for American businesses, the majority of countries are located on the other side of the world. And if a company is based on the west coast (e.g., Los Angeles), international meetings can start in the evening or even in the middle of the night.

To avoid scheduling issues, it’s important to be mindful of different time zones and work with your team to set up a meeting at a mutually beneficial time for everyone.

Fortunately, multiple software solutions make this process simpler. For instance, if your team uses Google Workspace, the Calendar app allows you to quickly set up an international meeting and automatically notify everyone of any changes.

Doodle is another popular scheduling platform that can help you find a time and date for everyone involved.

4. Prioritize the privacy of your attendees

Although “Zoombombing” isn’t as big of a deal as it used to be, it’s still important to maintain the privacy of your attendees. It’s best to password-protect each meeting and disable downloads for any shared files to protect everyone’s data and prevent potential intrusions.

And if you’re hosting a virtual event, make sure all participants have agreed to be recorded or livestreamed because not everyone may be comfortable with it.

Don’t share your meeting link publicly. And if you’re using a platform like Zoom, enable the Waiting Room feature to ensure that only invited attendees can join the call.

Screenshot of a Zoom call customization screen with a red arrow pointing to the personal meeting room feature.
Customizing a personal meeting room for an upcoming Zoom conference call.

To enable these features using Zoom, simply navigate to the Personal Meeting Room option in the toolbar and click on the features you want to customize.

Screenshot of Zoom's video meeting security feature options.
Enabling the “Waiting Room” feature from inside a Zoom meeting.

You can also do this from within your meeting under the Security tab.

5. Turn off your microphone (and others’) accordingly

I think I speak for everyone when I say that background noise and distractions can be a major nuisance during virtual meetings. While this is not typically an issue for one-on-one calls, it’s important to remember that anything you say and do can be heard by everyone.

Even if you don’t think there is any noise in your room, office, or home, there may be some feedback that your computer’s microphone picks up. So make sure you are muted when you’re not actively speaking. 

If you are the meeting host, you should also proactively mute others who aren’t adding to the conversation or who may be creating unwanted background noise.

6. Look to your attendees when deciding whether to show your face or not

Some people will tell you that keeping your video camera off during calls is unprofessional, rude, or lazy. And they have a point—if you got on a call with us at Quick Sprout and only saw a square that reads “QS” in the corner, you might not think we knew too much about digital marketing.

But if you have ever sat on a video chat as the only one with your face showing, you know just how weird it can be.

There are a lot of valid reasons to keep your camera off during meetings. Not everyone has a great internet connection, for example. And some people may be too nervous to turn their cameras on.

In these cases, it’s best to mimic the behavior of your attendees and keep your camera off if they do the same. If they prefer not to show their faces, follow suit. Respect is key, and setting a precedent of respect helps to foster a more productive environment.

On larger calls, there will probably be at least one person whose camera is switched off. And if you’re running a sales presentation, you should consider leaving it on.

A good rule of thumb is to always be prepared to turn your camera on, but don’t be afraid to turn it off if the situation calls for it. You can also ask about this at the beginning of the meeting if you’re unsure what your meeting attendees would feel more comfortable with.

7. Consider your background while on camera

It is often best practice to blur or remove your background, as it can be distracting and sometimes inappropriate. Most video call platforms have a built-in feature that allows you to choose from a variety of backgrounds or upload your own background images.

You don’t have to do either of these things. But if you choose not to, you’ll want to make sure that your background is free of anything that could be distracting, unprofessional, or embarrassing.

Here are a few best practices to keep in mind when it comes to your video call background:

  • Consider the lighting in your home office or workspace. If too much light is entering the room, try closing some curtains or turning off lights for an optimal video experience.  If there is not enough light, turn on some additional lights or add a desk lamp for meetings.
  • Clean up the area behind you so that nothing is out of place. If things are lying everywhere, you will appear messy and unorganized.
  • Consider adding a virtual background to your video call if you want something more exciting than a blank wall. Zoom has plenty of fun backgrounds to choose from, and you can also upload your own images. However, don’t be deliberately distracting with your background—keep it light and professional.
  • Avoid having anything moving in the background (e.g., pets, people moving back and forth across the screen). You’ll want to keep your attendees focused on you. Plus, it comes off as unprofessional.

8. Respect others’ time

83% of people spend up to one-third of their workweek in meetings—a sizable chunk of time that could be spent on more productive tasks.

Whether you are the one hosting the meeting or attending it, respect others’ time and get right to the point. Don’t waste everyone’s time by taking too long to finish your thought or going off-topic.

Our earlier point about setting an agenda before the call is important here. Share that agenda with your attendees ahead of time, so they know what to expect and can be prepared for the meeting.

As a manager or meeting host, it’s also a good idea to consider whether your virtual meeting needs to be a meeting.

Could it be an email? What about a quick call? Asking yourself this before scheduling a meeting helps to ensure that everyone’s time is used efficiently.

9. Encourage two-way communication

As a virtual meeting host, it’s easy to think that you need to do all the talking. But keeping your meeting attendees engaged is the best way to ensure their retention of the material.

Encourage questions, comments, and suggestions throughout the meeting. Ask your attendees to share their thoughts on a specific topic or ask them for feedback on a project you’re working on.

This helps create an open dialogue and encourages everyone to participate actively in the conversation. Plus, it allows everyone to feel heard and respected—which is important in any type of meeting.

10. Follow up with everyone after the call

After your virtual meeting is over, it’s important to follow up with everyone who attended. Send out notes and reminders of what was discussed as well as any tasks that need to be completed.

This helps ensure that everyone is on the same page and that all tasks and objectives are completed in a timely manner.

It also gives you an opportunity to check in with your attendees and make sure that everyone retained the information presented.

When following up, there are a few things you should always include:

  • Meeting recap
  • Action items
  • Timeframe for completion
  • A “thank you” for attending

Depending on the context of the meeting, you may also include sales updates, progress reports, or any other relevant information.

Final Thoughts About Virtual Meeting Best Practices

Virtual meetings aren’t just a COVID-19 trend that will go away. They are here to stay, and etiquette is essential when attending or hosting one.

Depending on your company, its culture, and how its members like to do business, incorporating these tips will look different in practice. The best way to ensure your employees hit the ground running is with an employee onboarding program that covers virtual meeting etiquette. Our employee onboarding checklist can help you get started.

With the right preparation and etiquette, you can host or attend a productive virtual meeting that engages everyone involved—no matter where they are located.

Understanding App Directory Architecture In Next.js

Since the Next.js 13 release, there’s been some debate about how stable the shiny new features packed into the announcement are. In “What’s New in Next.js 13?” we covered the release announcement and established that, though it carries some interesting experiments, Next.js 13 is definitely stable. Since then, most of us have seen a very clear landscape when it comes to the new <Link> and <Image> components, and even the (still beta) @next/font; these are all good to go, instant profit. Turbopack, as clearly stated in the announcement, is still alpha: aimed strictly at development builds and still heavily under development. Whether you can or can’t use it in your daily routine depends on your stack, as there are integrations and optimizations still on the way. This article’s scope is strictly about the main character of the announcement: the new App Directory architecture (AppDir, for short).

The App directory is the part that keeps raising questions because it is partnered with an important evolution in the React ecosystem — React Server Components — and with edge runtimes. It is clearly the shape of the future of our Next.js apps. It is experimental, though, and its roadmap is not something we can expect to be completed in the next few weeks. So, should you use it in production now? What advantages can you get out of it, and what pitfalls might you find yourself climbing out of? As always, the answer in software development is the same: it depends.

What Is The App Directory Anyway?

It is the new strategy for handling routes and rendering views in Next.js. It is made possible by a couple of different features tied together, and it is built to make the most out of React concurrent features (yes, we are talking about React Suspense). It brings, though, a big paradigm shift in how you think about components and pages in a Next.js app. This new way of building your app has a lot of very welcomed improvements to your architecture. Here’s a short, non-exhaustive list:

  • Partial Routing.
    • Route Groups.
    • Parallel Routes.
    • Intercepting Routes.
  • Server Components vs. Client Components.
  • Suspense Boundaries.
  • And much more, check the features overview in the new documentation.

A Quick Comparison

When it comes to the current routing and rendering architecture (in the Pages directory), developers were required to think of data fetching per route.

  • getServerSideProps: Server-Side Rendered;
  • getStaticProps: Server-Side Pre-Rendered and/or Incremental Static Regeneration;
  • getStaticPaths + getStaticProps: Server-Side Pre-Rendered or Static Site Generated.
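
For reference, here is a minimal sketch of that per-route model in the Pages directory, using the same placeholder API as the examples later in this article:

// pages/todos/[userId].js: per-route data fetching in the Pages directory.
export async function getServerSideProps({ params }) {
  const res = await fetch(`https://<some-api>/todos/${params.userId}/list`);
  const todos = await res.json();

  // Everything returned under `props` is passed to the page component below.
  return { props: { todos } };
}

export default function Todos({ todos }) {
  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}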

Historically, it had not been possible to choose the rendering strategy on a per-page basis; most apps went either full Server-Side Rendering or full Static Site Generation. Next.js created enough abstractions to make thinking of routes individually the standard within its architecture.

Once the app reaches the browser, hydration kicks in, and it’s possible to have routes collectively sharing data by wrapping our _app component in a React Context Provider. This gave us tools to hoist data to the top of our rendering tree and cascade it down toward the leaves of our app.

import { type AppProps } from 'next/app';

export default function MyApp({ Component, pageProps }: AppProps) {
  return (
        <SomeProvider>
            <Component {...pageProps} />
        </SomeProvider>
  );
}

The ability to render and organize required data per route made this approach almost good enough for when data absolutely needed to be available globally in the app. And while this strategy does allow data to spread throughout the app, wrapping everything in a Context Provider ties hydration to the root of your app. It is no longer possible to render any branch of that tree (any route within that Provider context) on the server.

Here, enters the Layout Pattern. By creating wrappers around pages, we could opt in or out of rendering strategies per route again instead of doing it once with an app-wide decision. Read more on how to manage states in the Pages Directory on the article “State Management in Next.js” and on the Next.js documentation.

The Layout Pattern proved to be a great solution. Being able to define rendering strategies granularly is a very welcome feature. So the App directory comes in to put the Layout Pattern front and center. As a first-class citizen of the Next.js architecture, it enables enormous improvements in terms of performance, security, and data handling.

With React concurrent features, it’s now possible to stream components to the browser and let each one handle its own data. So rendering strategy is even more granular now — instead of page-wide, it’s component-based. Layouts are nested by default, which makes it more clear to the developer what impacts each page based on the file-system architecture. And on top of all that, it is mandatory to explicitly turn a component client-side (via the “use client” directive) in order to use a Context.
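
As a quick illustration, here is a minimal sketch of a component opting into the client side via that directive; the component itself is purely hypothetical.

// app/components/like-button.jsx: a hypothetical client component.
'use client';

import { useState } from 'react';

export default function LikeButton() {
  // Hooks such as useState are only allowed once "use client" is declared.
  const [likes, setLikes] = useState(0);

  return <button onClick={() => setLikes(likes + 1)}>{likes} likes</button>;
}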

Building Blocks Of The App Directory

This architecture is built around the Layout Per Page Architecture. Now, there is no _app, nor is there a _document component; they have both been replaced by the root layout.jsx component. As you would expect, that’s a special layout that wraps up your entire application.

export default function RootLayout({ children }: { children: React.ReactNode }) {
    return (
        <html lang="en">
            <body>
                {children}
            </body>
        </html>
    );
}

The root layout is our way to manipulate the HTML returned by the server to the entire app at once. It is a server component, and it does not render again upon navigation. This means any data or state in a layout will persist throughout the lifecycle of the app.

While the root layout is a special component for our entire app, we can also have root components for other building blocks:

  • loading.jsx: to define the Suspense Boundary of an entire route;
  • error.jsx: to define the Error Boundary of our entire route;
  • template.jsx: similar to the layout, but re-renders on every navigation. Especially useful to handle state between routes, such as in or out transitions.

All of those components and conventions are nested by default. This means that /about will be nested within the wrappers of / automatically.

Finally, we are also required to have a page.jsx for every route, as it defines the main component to render for that URL segment (also known as the place where you put your components!). These are obviously not nested by default and will only show in our DOM when there’s an exact match to the URL segment they correspond to.
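
To make these conventions more concrete, here is a rough sketch of how they map to the file system; the /about route is just an example.

app/
├── layout.jsx      root layout, wraps the entire app
├── loading.jsx     Suspense Boundary fallback for the root route
├── error.jsx       Error Boundary for the root route
├── page.jsx        renders at "/"
└── about/
    ├── layout.jsx  nested inside the root layout
    └── page.jsx    renders at "/about"

And a loading.jsx can be as small as:

// app/loading.jsx: shown while the route's Server Components stream in.
export default function Loading() {
  return <p>Loading…</p>;
}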

There is much more to the architecture (and even more coming!), but this should be enough to get your mental model right before considering migrating from the Pages directory to the App directory in production. Make sure to check on the official upgrade guide as well.

Server Components In A Nutshell

React Server Components allow the app to leverage infrastructure towards better performance and overall user experience. For example, the immediate improvement is on bundle size since RSC won’t carry over their dependencies to the final bundle. Because they’re rendered in the server, any kind of parsing, formatting, or component library will remain on the server code. Secondly, thanks to their asynchronous nature, Server Components are streamed to the client. This allows the rendered HTML to be progressively enhanced on the browser.

So, Server Components lead to a more predictable, cacheable, and constant size of your final bundle, breaking the linear correlation between app size and bundle size. This immediately puts RSC forward as a best practice versus traditional React components (which are now referred to as client components to ease disambiguation).

On Server Components, fetching data is also quite flexible and, in my opinion, feels closer to vanilla JavaScript — which always smooths the learning curve. For example, understanding the JavaScript runtime makes it possible to define data-fetching as either parallel or sequential and thus have more fine-grained control on the resource loading waterfall.

  • Parallel Data Fetching, waiting for all:
import TodoList from './todo-list'

async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId)
  const todosResponse = getTodos(userId)

  // Wait for the promises to resolve.
  const [user, todos] = await Promise.all([userResponse, todosResponse])

  return (
    <>
      <h1>{user.name}</h1>
      <TodoList list={todos}></TodoList>
    </>
  )
}
  • Parallel, waiting for one request, streaming the other:
async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId)
  const todosResponse = getTodos(userId)

  // Wait only for the user.
  const user = await userResponse

  return (
    <>
      <h1>{user.name}</h1>
            <Suspense fallback={<div>Fetching todos...</div>}>
          <TodoList listPromise={todosResponse}></TodoList>
            </Suspense>
    </>
  )
}

async function TodoList ({ listPromise }) {
  // Wait for the album's promise to resolve.
  const todos = await listPromise;

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}

In this case, <TodoList> receives an in-flight Promise and needs to await it before rendering. The app will render the suspense fallback component until it’s all done.

  • Sequential Data Fetching fires one request at a time and awaits for each:
async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json()
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json()
}

export default async function Page({ params: { userId } }) {
  const user = await getUser(userId)


  return (
    <>
      <h1>{user.name}</h1>
            <Suspense fallback={<div>Fetching todos...</div>}>
            <TodoList userId={userId} />
            </Suspense>
    </>
  )
}

async function TodoList ({ userId }) {
  const todos = await getTodos(userId);

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}

Now, Page will fetch and wait on getUser, and then it will start rendering. Once it reaches <TodoList>, it will fetch and wait on getTodos. This is still more granular than what we are used to with the Pages directory.

Important things to note:

  • Requests fired within the same component scope will be fired in parallel (more about this at Extended Fetch API below).
  • Same requests fired within the same server runtime will be deduplicated (only one is actually happening, the one with the shortest cache expiration).
  • For requests that won’t use fetch (such as third-party libraries like SDKs, ORMs, or database clients), route caching will not be affected unless manually configured via segment cache configuration.
export const revalidate = 600; // revalidate every 10 minutes

export default async function Contributors({
  params
}: {
  params: { projectId: string };
}) {
  const { projectId } = params;
  const { contributors } = await myORM.db.workspace.project({ id: projectId });

  return <ul>{/* ... */}</ul>;
}

To point out how much more control this gives developers: within the Pages directory, rendering would be blocked until all data was available. When using getServerSideProps, the user would still see the loading spinner until data for the entire route was available. To mimic this behavior in the App directory, the fetch requests would need to happen in the layout.tsx for that route, so always avoid doing that. An “all or nothing” approach is rarely what you need, and it leads to worse perceived performance compared with this granular strategy.

Extended Fetch API

The syntax remains the same: fetch(route, options). According to the Web Fetch Spec, the options.cache value determines how this API interacts with the browser cache. In Next.js, however, it interacts with the framework’s server-side HTTP cache instead.

When it comes to the extended Fetch API for Next.js and its cache policy, a few values are important to understand:

  • force-cache: the default, looks for a fresh match and returns it.
  • no-store or no-cache: fetches from the remote server on every request.
  • next.revalidate: the same syntax as ISR, sets a hard threshold to consider the resource fresh.
fetch(`https://route`, { cache: 'force-cache', next: { revalidate: 60 } })

The caching strategy allows us to categorize our requests:

  • Static Data: persist longer. E.g., blog post.
  • Dynamic Data: changes often and/or is a result of user interaction. E.g., comments section, shopping cart.

By default, all data is considered static. This is because force-cache is the default caching strategy. To opt out of it for fully dynamic data, it’s possible to define no-store or no-cache.

If a dynamic function is used (e.g., setting cookies or headers), the default will switch from force-cache to no-store!

Finally, to implement something more similar to Incremental Static Regeneration, you’ll need to use next.revalidate. The benefit is that instead of being defined for the entire route, revalidation only applies to the component it is a part of.
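
Putting those categories together, here is a minimal sketch of mixing both strategies inside one Server Component; the routes and fields are placeholders.

export default async function Dashboard({ params: { userId } }) {
  // Static-ish data: cached and revalidated at most every 10 minutes.
  const posts = await fetch(`https://<some-api>/posts`, {
    next: { revalidate: 600 },
  }).then((res) => res.json());

  // Fully dynamic data: fetched from the remote server on every request.
  const cart = await fetch(`https://<some-api>/cart/${userId}`, {
    cache: 'no-store',
  }).then((res) => res.json());

  return (
    <>
      <p>{posts.length} posts</p>
      <p>{cart.items.length} items in your cart</p>
    </>
  );
}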

Migrating From Pages To App

Porting logic from the Pages directory to the App directory may look like a lot of work, but Next.js is prepared to let both architectures coexist, so the migration can be done incrementally. Additionally, there is a very good migration guide in the documentation; I recommend you read it fully before jumping into a refactor.

Guiding you through the migration path is beyond the scope of this article and would make it redundant to the docs. Alternatively, in order to add value on top of what the official documentation offers, I will try to provide insight into the friction points my experience suggests you will find.

The Case Of React Context

In order to provide all the benefits mentioned above, RSC can’t be interactive, which means they don’t have hooks. Because of that, it is best to push client-side logic toward the leaves of the rendering tree as much as possible; once you add interactivity, every child of that component becomes client-side as well.

In a few cases, pushing some components toward the leaves will not be possible, especially if key functionality depends on React Context. Because most libraries are prepared to defend their users against prop drilling, many provide context providers that skip passing props from the root to distant descendants. So ditching React Context entirely may cause some external libraries not to work well.

As a temporary solution, there is an escape hatch: a client-side wrapper for your providers, like the one below:

// /providers.jsx
'use client';

import { type ReactNode, createContext } from 'react';

const SomeContext = createContext();

export default function ThemeProvider({ children }: { children: ReactNode }) {
  return (
    <SomeContext.Provider value="data">
      {children}
    </SomeContext.Provider>
  );
}

And so the layout component will not complain about skipping a client component from rendering.

// app/.../layout.jsx
import { type ReactNode } from 'react';
import Providers from './providers';

export default function Layout({ children }: { children: ReactNode }) {
    return (
    <Providers>{children}</Providers>
  );
}

It is important to realize that once you do this, the entire branch becomes client-side rendered. This approach means everything within the <Providers> component will not be rendered on the server, so use it only as a last resort.

TypeScript And Async React Elements

When using async/await outside of Layouts and Pages, TypeScript will yield an error based on the response type it expects to match its JSX definitions. It is supported and will still work in runtime, but according to Next.js documentation, this needs to be fixed upstream in TypeScript.

For now, the solution is to add a comment on the line above the component: {/* @ts-expect-error Server Component */}.
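
For example, revisiting the sequential-fetching sketch from earlier, the comment sits directly above the async component:

<Suspense fallback={<div>Fetching todos...</div>}>
  {/* @ts-expect-error Server Component */}
  <TodoList userId={userId} />
</Suspense>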

Client-side Fetch On The Works

Historically, Next.js has not had a built-in data mutation story; requests fired from the client side were left to the developer’s own discretion. With React Server Components, this is bound to change: the React team is working on a use hook that accepts a Promise, handles it, and returns the resolved value directly.

In the future, this will supplant most bad cases of useEffect in the wild (more on that in the excellent talk “Goodbye UseEffect”) and possibly be the standard for handling asynchronicity (fetching included) in client-side React.

For the time being, it is still recommended to rely on libraries like React-Query and SWR for your client-side fetching needs. Be especially aware of the fetch behavior, though!

So, Is It Ready?

Experimenting is the essence of moving forward, and we can’t make a nice omelet without breaking eggs. I hope this article has helped you answer this question for your own specific use case.

If I were on a greenfield project, I’d possibly take the App directory for a spin and keep the Pages directory as a fallback or for functionality that is critical to the business. If refactoring, it would depend on how much client-side fetching I have. Few: do it; many: probably wait for the full story.

Let me know your thoughts on Twitter or in the comments below.


Zoom Alternatives and Competitors

Our favorite Zoom alternative is GoTo Meeting because it is secure, simple, and affordable. Contact an expert to get a free demo.

In most cases, Zoom is the go-to brand for video conferencing these days. But even though Zoom made it onto our list of the top video conferencing software, it is far from the only option; there are many other solutions available in the market today.

The Quick Sprout research team spent hundreds of hours analyzing video conferencing software and researching several popular solutions available. After analyzing the data against a consistent set of criteria, the team narrowed down the list to the top seven video conferencing companies.

The 8 Best Video Conferencing Services

The best Zoom alternative is GoTo Meeting, which integrates seamlessly with existing business tools and is ideal for small businesses. Contact an expert to get a free demo

  • GoTo Meeting — Best video conferencing service for small businesses
  • RingCentral — Best video conferencing service with VoIP business phone plans
  • ClickMeeting — Best video conferencing software for webinars
  • Zoho Meeting — Affordable video conferencing service with basic features
  • Microsoft Teams — Best video conferencing software for internal communication
  • Zoom — Best video conferencing service for scalability
  • Join.me — Annual contract video conferencing plans for small meetings 
  • Webex — Best video conferencing software for cloud collaboration
Brand logos for the eight best video conferencing services - quicksprout.com's review.

You can go through the detailed comparison of different video conferencing software on our full top list.

GoTo Meeting – Best Video Conferencing Service for Small Businesses

GoTo Meeting brand logo.

GoTo Meeting is not simply a video conferencing tool. It provides various other advanced tools and integrations that turn it into a highly collaborative workspace for a business. This elaborate workspace is convenient for remote teams, business managers, and even owners working with a small team. Its impressive features include single-click start, multi-channel support, cloud collaboration, bandwidth adjustments, integrated scheduling, and more.

One considerable benefit of GoTo Meeting is that it easily integrates operational processes without requiring additional adjustments. GoTo Meeting provides a kit that includes an installation guide and pre-configured software for all its users. The kit can help transform a physical conference into a digital one and has advanced hardware and software options. 

How GoTo Meeting Compares to Zoom

Screenshot of Go To Meeting's meeting web page.
GoTo Meeting is an ideal video conferencing platform for small businesses.

GoTo Meeting is ideal for small and medium-sized businesses as it only allows a limited number of participants. It offers excellent collaborative tools for remote teams and unlimited cloud recording storage.

On the other hand, Zoom has a higher participant limit, allowing up to 1,000 participants in a single meeting, albeit with no cloud-based storage of the recordings. When it comes to security, GoTo Meeting is far more secure than Zoom.

In terms of pricing, GoTo Meeting and Zoom offer similar pricing plans. Even though Zoom offers a Basic Free Plan for meetings of up to 100 people, its paid plans range from $14 to $19 per user per month. GoTo Meeting also charges between $14 and $19 monthly for its various subscription plans, but it has no free plan.

Go through our detailed review of GoTo Meeting to decide if it’s the right option for your business.

RingCentral — Best Video Conferencing Service With VoIP Business Phone Plans

RingCentral brand logo.

RingCentral allows businesses to get rid of traditional phone plans through its unique VoIP business phone services. Its video conferencing service is only one aspect of the extensive communication services offered, which include messaging, screen sharing, and more. However, if you only want the video conferencing service, you can buy the Meetings app as a standalone product.

The RingCentral video conferencing service offers different subscription plans based on the region of the user and the total number of users. Its Free Plan lets you host meetings with up to 100 participants, store recordings in the cloud, join meetings in the browser, and more. However, if you want to add more participants or store recordings for longer, consider the RingCentral Video Pro+ Plan.

How RingCentral Compares to Zoom

Screenshot of RingCentral's webpage for video meetings
RingCentral offers various pricing plans for businesses according to their needs.

RingCentral and Zoom are both easy to use and ideal for businesses of all types and sizes. They both offer various integration options and are easily affordable. The audio and video quality is also excellent.

RingCentral allows users to delete any messages sent by mistake, making it a better option for some businesses. It also offers other unique features to keep users engaged. An additional benefit of RingCentral is that it provides live training, video support, and in-person help to users when required.

When it comes to pricing, RingCentral is somewhat more expensive than Zoom, but it also offers more features. Zoom’s free and basic plans only include video conferencing, while RingCentral’s include some business phone services. 

Read the in-depth analysis of RingCentral to make an informed decision.

ClickMeeting — Best Video Conferencing Software For Webinars

ClickMeeting brand logo.

ClickMeeting is a webinar software platform, making it slightly different from other video conferencing software. It is excellent for hosting virtual events, online training, and marketing products with video demonstrations.

Besides webinar services, it offers traditional video conferencing options to enhance business collaboration and facilitate team meetings. Prominent features of ClickMeeting include real-time meeting translation and screen sharing between multiple people.

How ClickMeeting Compares to Zoom

Screenshot of ClickMeeting's live webinar web page showing a live webinar in action.
The special webinar feature of ClickMeeting helps it stand out.

ClickMeeting boasts a fast user interface and is easily navigable for beginners. It also offers interactive features like question-and-answer sessions, activity tracking, analytical reports, customizable forms, and more. 

Zoom is also user-friendly and can easily be set up. However, it lacks several features offered by ClickMeeting. Zoom has no event tracking or management features and lacks moderation and monitoring tools.

In terms of pricing, ClickMeeting offers a free trial and various pricing plans costing $30 to $45 per month, whereas Zoom has an elaborate Free Plan with additional paid plans ranging from $14 to $19 a month. ClickMeeting also offers a custom plan with a custom quote based on the functionalities you want.

Read a detailed analysis of ClickMeeting on our website.

Zoho Meeting — Affordable Video Conferencing Service With Basic Features

Zoho Meeting brand logo.

Zoho Meeting is a simple video conferencing software that doesn’t offer any advanced and complicated features. Its main features are screen sharing, moderator controls, lock meetings, in-session chat, RSVP scheduling, and embedded meeting links. Additionally, users can give over control, remove other users, and switch a presenter while hosting a video conference. 

Zoho Meeting easily integrates with Zoho CRM, which is an ideal option for anyone using a Zoho product. Even though it offers no fancy features, it is a quality solution for businesses already using Zoho products or only requiring basic video conferencing features.

It has a few different pricing plans and charges a minimal fee. It also offers a Free Forever Plan with 100 meeting participants or webinar attendees and limited features.

How Zoho Meeting Compares to Zoom

Screenshot of Zoho Meeting's meeting web page describing video meeting and share screen features.
Zoho Meeting is a basic video conferencing software with additional functionalities like screen sharing.

Zoho Meeting and Zoom both boast an easy-to-use interface and many strong functionalities. However, Zoom provides much better customer service than Zoho Meeting, and it also offers detailed in-person and online training, which Zoho Meeting does not.

Zoho Meeting has a whole ecosystem of tools and software you can access and use easily. Zoom doesn’t have its own ecosystem, but it works well in all browsers. Additionally, adding or inviting more people to a meeting is extremely easy in Zoho Meeting. In Zoom, attending a meeting doesn’t require downloading the application, and anyone can join a meeting from any device using a shareable link. 

In terms of pricing, Zoho Meeting offers a Free Forever Plan for $0 a month. Its paid packages are highly affordable, ranging from $2 to $19 a month. Zoom also has a free plan, and the functionalities offered by the free Zoho Meeting plan are on par with it. Therefore, choose the option most suitable for the needs of your business.

Go through a thorough review of Zoho Meeting here.

Microsoft Teams — Best Video Conferencing Software For Internal Communication

Microsoft Teams brand logo.

Microsoft Teams is an excellent software for internal business communications. It has a video, audio, and chat feature that can be used for instant communication. It also supports meetings with up to 10,000 participants and offers mobile and desktop versions.  

Microsoft Teams is slightly more complex to use than other similar software available in the market. It has a complex onboarding and setup process, which makes it less beginner-friendly. Since Microsoft Teams is a Microsoft product, companies already using the Microsoft ecosystem may find it easier to incorporate Teams into their systems.

In terms of pricing, Microsoft Teams offers different annual pricing options for different regions of the world. All the paid plans have 1TB of cloud storage, meeting recordings, app integrations, and more.

How Microsoft Teams Compares to Zoom

Screenshot of Microsoft Teams web page.
Microsoft Teams is excellent for internal communications in a business.

Microsoft Teams and Zoom are pretty similar to each other. They both provide excellent video conferencing services. Microsoft Teams offers additional collaborative tools along with video conferencing tools. Zoom also provides additional workspace features like a digital whiteboard, team messaging app, and more.

Overall, Microsoft Teams is better suited to internal business communication. Zoom can also be used for business, but it is considered less secure in comparison. Users generally need to download the Microsoft Teams app to use it, whereas Zoom meetings can be joined directly through a browser without installing anything.

Microsoft Teams offers a free plan and separate paid plans for home and business use. The maximum number of interactive meeting participants is 1,000, irrespective of your subscription plan.

Read a thorough review of Microsoft Teams here.

Join.me — Annual Contract Video Conferencing Plans For Small Meetings

Join.me brand logo.

Join.me is an excellent video conferencing solution for teams, individuals, and businesses. It has one of the fastest signup processes and is endorsed by startups and big enterprises alike. You can launch a meeting on its website without consulting a sales representative.

Join.me started as an independent platform but has since become part of the GoTo Meeting family. It still retains its free version and offers its video conferencing service separately. Compared to GoTo Meeting, it is far simpler and offers a smaller feature set.

If you are starting a new meeting, you can invite people to join it through a link or an email. Join.me lets you change your conference background, customize your meeting URL, and share your screen with one click. People who don’t use Join.me can still accept your meeting links just as easily.

Join.me is a host-centric service: you must sign up and install the application before you can start a meeting, and its customer support can assist you with the initial setup. Attendees, however, can simply join a meeting by entering its nine-digit Join.me ID.

How Join.me Compares to Zoom

Screenshot of Join.me's join a meeting page with enter 9-digit join.me ID field and join meeting button.
Join.me is a simple video conferencing software for individuals and small businesses.

Join.me is just as easy to use as Zoom, with straightforward screen-sharing options and excellent value for money. However, Zoom takes the lead in terms of customer service and overall functionality.

An advantage Join.me has over Zoom is its top-notch video and audio quality. Its many integration options also make it an attractive choice, particularly for individual users.

One drawback of Join.me is its lack of pricing transparency. The service only provides custom quotes based on each user’s needs, so it is difficult to estimate what an individual or business will actually pay.

Give Join.me’s review a read and decide if it suits your needs.

Webex — Best Video Conferencing Software For Cloud Collaboration

Webex brand logo.

Webex is a highly regarded video conferencing service. It lets users host huge virtual events with as many as 100,000 attendees and run interactive webinars for up to 3,000 people.

The platform is ideal for people who run on-demand training lessons and businesses wanting to onboard employees in multiple locations. Its mobile app makes it easier to host and join meetings from anywhere at any time, and cloud collaboration features make it excellent for teams.

Screenshot of a Webex cloud call meeting in place from their cloud calling web page.
Webex is ideal for meetings and events in the cloud for all teams.

How Webex Compares to Zoom

Webex supports the highest number of participants of any video conferencing service on this list, allowing as many as 100,000 attendees, far more than Zoom’s 1,000-participant meeting limit. It also offers video recording and advanced screen-sharing options.

Overall, Webex offers just as many sophisticated tools and features as Zoom, yet even with its wide range of unique features, its subscription plans remain inexpensive.

Its four pricing plans, ranging from $0 to $32 a month, make it ideal for small teams that want business-level features and tools. Overall, Webex suits businesses whose teams need to collaborate across different locations.

Read our Webex review before choosing it for your needs.