How To Quickly Set Up, Use & Resell Webmail: A Guide For Agencies And Resellers

Webmail is a robust IMAP-based email service and the latest exciting addition to WPMU DEV’s all-in-one WordPress management platform product suite.

In this comprehensive guide, we show you how to get started with Webmail, how to use its features, and how to resell professional business email to clients. We also provide information on the benefits of offering IMAP-based email services for WPMU DEV platform users and resellers.

Read the full article to learn all about Webmail or click on one of the links below to jump to any section:

Overview of Webmail

In addition to our current email hosting offerings, Webmail is a standalone service for Agency plan members that allows for greater flexibility in email account creation.

WPMU DEV’s Webmail:

  • Is affordably priced.
  • Offers a superior email service with high standards of quality and reliability.
  • Does not require a third-party app to work.
  • Lets you set up email accounts on any domain you own or manage, whether it’s a root domain like mydomain.com or a subdomain such as store.mydomain.com.
  • Lets you provide clients with professional business email no matter where their domain is hosted (or whether the domain is associated with a site in your Hub or not).
  • Can be accessed from any device, even directly from your web browser.
  • Can be white labeled and resold under your own brand with Reseller.

Read more about the benefits of using Webmail.

Now let’s show you how to set up your clients with email accounts and a fully functional mailbox in just a few clicks, using any domain, no matter where that domain is hosted.

Getting Started With Webmail

Webmail is very quick and easy to set up.

If you’re an Agency member, just head on over to The Hub.

Now, all you need to do is get acquainted with the latest powerful tool in your complete WordPress site management toolbox…

Webmail Manager

The Hub lets you create, manage, and access IMAP email accounts for any domain you own from one central location, even domains that are not directly associated with a site in your Hub.

Click on Webmail on the main menu at the top of the screen…

The Hub - Webmail
Click Webmail to set up and manage your emails.

This will bring you to the Webmail Overview screen.

If you haven’t set up an email account yet, you’ll see the screen below. Click on the “Create New Email” button to get started.

Webmail screen with no email accounts set up yet!
Click the button to create a new email account in Webmail.

As mentioned earlier, Webmail gives you the choice of creating an email account from a domain you manage in The Hub, or a domain managed elsewhere.

For this tutorial, we’ll select a domain being managed in The Hub.

Select the domain you want to associate your email account with from the dropdown menu and click the arrow to continue.

Create New Email screen - Step 1 of 2
Select a domain managed in The Hub or elsewhere.

Next, create your email address, choose a strong password, and click on the blue arrow button to continue.

Create New Email screen - Step 2 of 2
Add your username and password to create your email address.

You will see a payment screen displaying the cost of your new email address and billing start date. Click the button to make the payment and create your new email account.

Email account payment screen.
Make the payment to complete setting up your email account.

Your new email account will be automatically created after payment has been successfully processed.

New user email has been created successfully.
Our new email has been created successfully…we’re in business!

The last step is to add the DNS records that allow your email to work correctly.

Fortunately, if your site or domain is hosted with WPMU DEV, Webmail Manager can do this for you quickly and automatically!

Note: If your domain is managed elsewhere, you will need to copy and manually add the DNS records at your registrar or DNS manager (e.g. Cloudflare).

Click on the View DNS Records button to continue.

This will bring up the DNS Records screen.

As our example site is hosted with WPMU DEV, all you need to do is click the Add DNS Records button, and the correct records will be automatically created and added for your email account.

DNS Records screen - Add DNS Records button selected.
If your domain is hosted with WPMU DEV, click the button to automatically add the correct DNS records and make your email work.

After completing this step, wait for the DNS records to propagate successfully before verifying the DNS.

You can use an online tool like https://dnschecker.org to check the DNS propagation status.

Note: DNS changes can take 24-48 hours to propagate across the internet, so allow some time for DNS propagation to occur, especially if the domain is hosted elsewhere.
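If you’d rather spot-check propagation from your own machine, here is a minimal sketch using Python and the third-party dnspython package (pip install dnspython). The domain is a placeholder, and MX and TXT are simply the record types most commonly involved in email delivery:

import dns.resolver  # third-party package: pip install dnspython

domain = "mydomain.com"  # placeholder; use your own domain

for record_type in ("MX", "TXT"):
  try:
    answers = dns.resolver.resolve(domain, record_type)
    for answer in answers:
      print(record_type, answer.to_text())
  except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print(f"No {record_type} records visible yet; still propagating?")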

Click the Verify DNS button to check if the DNS records have propagated.

DNS Records screen with Verify DNS button selected.
Click the Verify DNS button to check if your DNS records have propagated.

If your DNS records have propagated successfully, you will see green ticks for all records under the DNS Status column.

DNS Records screen showing green ticks in DNS Status for all records.
Your emails won’t be delivered until all of those ticks are green.

Your email account is now fully set up and ready to use.

Repeat the above process to create and add more emails.

Webmail overview screen showing an active domain.
Click on the + Create New Email button to add more emails.

Now that you know how to create a new email account, let’s look at how to manage your emails effectively.

Managing Your Emails

If you have set up one or more email accounts, navigate to the Webmail Manager screen any time to view a list of all connected domains, their status, number of email accounts associated with each domain, and additional options.

Webmail screen with added domain email accounts.
Manage all of your email accounts in the Webmail overview screen.

To manage your email accounts, click on a domain name or select Manage Domain Email from the Options dropdown menu (the vertical ellipsis icon).

Webmail screen - Manage Domain Email option selected.
Click on the vertical ellipsis and select Manage Domain Email to manage your email accounts.

This opens up the email management section for the selected domain.

The Email Accounts tab lists all the existing email accounts for that domain, status and creation date information, plus additional email management options that we’ll explore in a moment.

Webmail - Email Accounts tab
Email Accounts lists all the email accounts you have created for your domain.

Email accounts can have the following statuses: active, suspended, or disabled.

Active accounts can send and receive emails, provided DNS records have been set up and propagated correctly.

An account is suspended if its email activity violates our webmail provider’s email sending policy.

Disabling an account (see further below) only stops that account from sending and receiving emails and removes its webmail access. It does not affect billing.

Note: Unless you delete the account, you will still be charged for a disabled email account.

Email accounts tab listing email accounts with different statuses.
Email accounts can display an active, suspended, or disabled status.

Before we discuss managing individual email accounts, let’s look at other main features of Webmail Manager.

Email Forwarding

Email forwarding automatically redirects emails sent to one email address to another designated email address. It allows users to receive emails sent to a specific address without having to check multiple accounts. For example, emails sent to info@yourcompany.tld can be automatically forwarded to john@yourcompany.tld.

Every email account includes 10 email forwarders. This allows you to automatically forward emails to multiple addresses simultaneously (e.g. john@yourcompany.tld, accounts@yourcompany.tld, etc.).

To activate email forwarding, hover over the arrow icon, switch its status to On, and then click Manage Email Forwarding to set up your forwarders.

Webmail - Email Accounts - Email Forwarding with status turned on and Manage Email Forwarding selected.
Turn Email Forwarding on and click on Manage Email Forwarding to set up forwarders for an email account.

This will bring up the Email Forwarding tab. Here, you can easily add, delete, and edit email forwarders.

If no email forwarders exist for your email account, click the Create Email Forwarder button to create the first one.

Email Forwarding screen with no forwarders set up yet.
Let’s create an email forwarder for this email account.

In the Add Email Forwarder screen, enter the forwarding email address you would like incoming messages redirected to, then click Save.

Webmail - Add Email Forwarder
You can create up to 10 email forwarders per email account.

As stated, you can add multiple forwarding email addresses to each email account (up to 10).

Webmail email forwarders.
Webmail’s Email Forwarding lets you easily add, delete, and edit email forwarders.

Webmail Login

With Webmail, all emails are stored on our servers. In addition to being accessible from any device, every webmail account includes a mailbox that can be opened directly in your web browser via Webmail’s online interface.

There are several ways to log in and view emails.

Access Webmail From The Hub

To log into webmail directly via The Hub, you can go to the Email Account Management > Email Accounts screen of your domain, click the envelope icon next to the email account, and click on the Webmail Login link…

Webmail - Email Accounts - Webmail Login
Click on the envelope icon in Email Accounts to access Webmail login.

Or, if you are working inside an individual email account, just click on the Webmail Login link displayed in all of the account’s management screens…

Webmail - Email Accounts - Email Information - Webmail Login
Click on the Webmail Login link of any email account management screen to access emails for that account.

This will log you directly into the webmail interface for that email account.

Webmail interface
Webmail’s intuitive and easy-to-use interface.

The Webmail interface should look familiar and feel intuitive to most users. If you need help with any of Webmail’s features, click the Help icon on the menu sidebar to access detailed help documentation.

Let’s look at other ways to access Webmail.

Access Webmail From The Hub Client

If you have set up your own branded client portal using The Hub Client plugin, your team members and clients can access and manage emails via Webmail, provided their team user roles grant the required access permissions and SSO (Single Sign-On) options are enabled.

This allows users to seamlessly log into an email account from your client portal without having to enter login credentials.

Webmail menu link on a branded client portal.
Team members and clients can access Webmail directly from your own branded client portal.

Direct Access URL

Another way to log into Webmail is via Direct Access URL.

To access webmail directly from your web browser for any email account, enter the following URL into your browser exactly as shown here: https://webmail.yourwpsite.email/, then enter the email address and password, and click “Login.”

Webmail direct login
Log into webmail directly from your web browser.

Note: The above example uses our white labeled URL address webmail.yourwpsite.email to log into Webmail via a web browser. However, you can also brand your webmail accounts with your own domain so users can access their email from a URL like webmail.your-own-domain.tld.

For more details on how to set up your own branded domain URL, see our Webmail documentation.

Email Aliases

An email alias is a virtual email address that redirects emails to a primary email account. It serves as an alternative name for a single mailbox, enabling users to create multiple email addresses that all direct messages to the same inbox.

For instance, the following could all be aliases for the primary email address john@mysite.tld:

  • sales@mysite.tld
  • support@mysite.tld
  • info@mysite.tld

Webmail lets you create up to 10 email aliases per email account.

To create an alias for an email account, click on the vertical ellipsis icon and select Add Alias.

Webmail - Add Alias
Let’s add an alias to our email account.

Enter the alias username(s) you would like to create in the Add Alias modal and click Save.

Webmail - Add Alias screen with three aliases set up.
You can create up to 10 aliases for each email account.

Emails sent to any of these aliases will be delivered to your current email account.

Additional Email Management Features

In addition to the features and options found in the Email Accounts tab that we have just discussed, Webmail lets you manage various options and settings for each individual email account.

Let’s take a brief look at some of these options and settings.

Email Information

To manage an individual email account:

  1. Click on The Hub > Webmail to access the Email Accounts tab
  2. Click on the domain you have set up to use Webmail
  3. Click on the specific email account (i.e. the email address) you wish to manage.

Click through the Webmail management screens to access and manage individual email accounts.

The Email Information tab lets you edit your current email account and password and displays important information, such as status, creation date (this is the date your billing starts for this email account), storage used, and current email send limit.

Webmail - Email Accounts - Email Information tab.
Edit and view information about an individual email account in the Email Information tab.

In addition to the Email Information tab, you can click on the Email Forwarding tab to manage your email forwarders and the Email Aliases tab to manage your email aliases for your email account.

Note: Newly created accounts have send limits in place to prevent potential spamming and account suspension. These limits gradually increase over a two-week period, after which email accounts can send up to 500 emails every 24 hours.

Email Information - Email limit increase.
Each email account’s send limits increase over two weeks and can send up to 500 emails per 24 hours.

Coming soon, you will also be able to add more storage to your email accounts if additional space is required.

Upgrade Storage modal
Upgrade your email account storage space (coming soon!)

Now that we have drilled down and looked at all the management tabs for an individual email account, let’s explore some additional features of the Webmail Manager.

Go back to The Hub > Webmail and click on one of the domains you have set up.

DNS Records

Click on the DNS Records tab to view the DNS Records of your email domain.

DNS Records Tab
Set up and verify your email DNS records in the DNS Records tab.

Note: The DNS Records tab is available to both team member and client custom roles, so team members and clients can access it if you give them permission.

Configurations

Click on the Configurations tab to view and download configuration settings that allow you to set up email accounts in applications other than Webmail.

Webmail - Domain Email - Configurations
Download and use the configurations shown in this section to set up email accounts in other applications.

The Configurations tab is also available for both team member and client custom roles.

Client Association

If you want to allow clients to manage their own email accounts, you will need to set up your client account first, assign permissions to allow the client to view Webmail, then link the client account with the email domain in the Client Association tab.

After setting up your client in The Hub, navigate to the Client Association tab (The Hub > Webmail > Email Domain) and click on Add Client.

Webmail - Domain Email - Client Association
You can let clients manage their own email accounts by linking the email domain with their client account.

Select the client from the dropdown menu and click Add.

Webmail - Associate email with a client modal.
Linking the email domain with a client allows them to manage their email accounts.

Notes:

  • When you associate a client with an email domain, SSO for the email domain is disabled in The Hub. However, your client will be able to access Webmail login via The Hub Client plugin.
  • The Client Association tab is only available to team member custom roles.

Reseller Integration

We’re currently working on bringing full auto-provisioning of emails to our Reseller platform. Until this feature is released, you can manually resell emails to clients and bill them using the Clients & Billing tool.

Once Webmail has been fully integrated with our Reseller platform, you will be able to rebrand Webmail as your own and resell everything under one roof: hosting, domains, templates, plugins, expert support…and now business emails!

Reseller price table example.
Resell professional business emails under your own brand!

If you need help with Reseller, check out our Reseller documentation.

Congratulations! Now you know how to set up, manage, and resell Webmail in your business as part of your digital services.

Email Protocols – Quick Primer

WPMU DEV offers the convenience of using both IMAP and POP3 email.

Not sure what IMAP is, how it works, or how IMAP differs from POP3? Then read below for a quick primer on these email protocols.

What is IMAP?

IMAP (Internet Message Access Protocol) is a standard protocol used to retrieve emails from a mail server. It allows users to access their emails from multiple devices like a phone, laptop, or tablet, because it stores emails on the server, rather than downloading them to a single device.

Since emails are managed and stored on the server, this reduces the need for extensive local storage and allows for easy backup and recovery.

Additional points about IMAP:

  • Users can organize emails into folders, flag them for priority, and save drafts on the server.
  • It supports multiple email clients syncing with the server, ensuring consistent message status across devices.
  • IMAP operates as an intermediary between the email server and client, enabling remote access from any device.
  • When users read emails via IMAP, they’re viewing them directly from the server without downloading them locally.
  • IMAP downloads messages only upon user request, enhancing efficiency compared to other protocols like POP3.
  • Messages persist on the server unless deleted by the user.
  • IMAP uses port 143, while IMAP over SSL/TLS uses port 993 for secure communication.
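To make those ports concrete, here is a minimal sketch using Python’s built-in imaplib module. The host, address, and password are placeholders you would swap for your own account’s settings (for example, the values shown in Webmail’s Configurations tab):

import imaplib

# Placeholder values for illustration only.
IMAP_HOST = "imap.example.com"
EMAIL_ADDRESS = "john@mydomain.tld"
PASSWORD = "your-password"

# Port 993 is IMAP over SSL/TLS; plain (unencrypted) IMAP would use port 143.
with imaplib.IMAP4_SSL(IMAP_HOST, 993) as client:
  client.login(EMAIL_ADDRESS, PASSWORD)

  # Open the inbox read-only so nothing gets marked as seen.
  status, counts = client.select("INBOX", readonly=True)
  print("Messages in INBOX:", counts[0].decode())

  # List the folders stored on the server for this account.
  status, folders = client.list()
  for folder in folders:
    print(folder.decode())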

The advantages of using IMAP include the following:

  • Multi-Device Access: IMAP supports multiple logins, allowing users to connect to the email server from various devices simultaneously.
  • Flexibility: Unlike POP3, IMAP enables users to access their emails from different devices, making it ideal for users who travel frequently or need access from multiple locations.
  • Shared Mailbox: A single IMAP mailbox can be shared by multiple users, facilitating collaboration and communication within teams.
  • Organizational Tools: Users can organize emails on the server by creating folders and subfolders, enhancing their efficiency in managing email correspondence.
  • Email Functions Support: IMAP supports advanced email functions such as search and sort, improving user experience and productivity.
  • Offline Access: IMAP can be used offline, allowing users to access previously downloaded emails even without an internet connection.

There are some challenges to setting up and running your own IMAP service, which is why using a solution like WPMU DEV’s Webmail is highly recommended:

  • Hosting an IMAP service can be resource-intensive, requiring more server storage and bandwidth to manage multiple connections and the storage of emails.
  • IMAP requires implementing SSL encryption to ensure secure email communication.
  • Smaller businesses might find it challenging to allocate the necessary IT resources for managing an IMAP server efficiently.

IMAP vs POP3: What’s The Difference?

IMAP and POP3 are both client-server email retrieval protocols, but they are two different methods for accessing email messages from a server.

IMAP is designed for modern email users. It allows users to access their email from multiple devices because it keeps their messages on the server. When users read, delete, or organize their emails, these changes are synchronized across all devices.

For example, if you read an email on your phone, it will show as being read on your laptop as well.

POP3, on the other hand, is simpler and downloads emails from the server to a single device, then usually deletes them from the server. This means if users access their emails from a different device, they won’t see the emails that were downloaded to the first device.

For instance, if you download an email via POP3 on your computer, that email may not be accessible on your phone later.

Here are some of the key differences between IMAP and POP3:

Storage Approach

  • IMAP: Users can store emails on the server and access them from any device. It functions more like a remote file server.
  • POP3: Emails are saved in a single mailbox on the server and downloaded to the user’s device when accessed.

Access Flexibility

  • IMAP: Allows access from multiple devices, enabling users to view and manage emails consistently across various platforms.
  • POP3: Emails are typically downloaded to one device and removed from the server.

Handling of Emails

  • IMAP: Maintains emails on the server, allowing users to organize, flag, and manage them remotely.
  • POP3: Operates as a “store-and-forward” service, where emails are retrieved and then removed from the server.

In practice, IMAP is more suited for users who want to manage their emails from multiple devices or locations, offering greater flexibility and synchronization. POP3 could be considered for situations where email access is primarily from a single device, or there is a need to keep local copies of emails while removing them from the server to save space.

Essentially, IMAP prioritizes remote access and centralized email management on the server, while POP3 focuses on downloading and storing emails locally.
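For contrast with the IMAP sketch above, here is a minimal POP3 example using Python’s built-in poplib module. The host and credentials are placeholders, and the delete step is shown only as a comment, since that is what most POP3 clients would do after downloading:

import poplib

# Placeholder host and credentials for illustration only.
client = poplib.POP3_SSL("pop3.example.com", 995)
client.user("john@mydomain.tld")
client.pass_("your-password")

# stat() reports how many messages are waiting and their total size.
message_count, mailbox_size = client.stat()
print(f"{message_count} messages ({mailbox_size} bytes) on the server")

if message_count:
  # Download the most recent message as raw lines of bytes.
  response, lines, octets = client.retr(message_count)
  print(b"\r\n".join(lines).decode("utf-8", errors="replace")[:500])
  # A typical POP3 client would now call client.dele(message_count)
  # so the message is removed from the server after download.

client.quit()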

Professional Business Email For Your Clients

Integrating email hosting, particularly IMAP, with web hosting to create a seamless platform for managing client websites and emails under one roof is challenging, costly, and complex.

With WPMU DEV’s Webmail, you can enhance your email management capabilities and provide clients with affordable, professional business email that is easy to use, doesn’t require a third-party app, and works no matter where their domain is hosted.

Note: If you don’t require the full features of IMAP email for a site hosted with WPMU DEV, we also offer the option to create POP3 email accounts with our hosted email. These accounts can be linked to any email client of your choice, ensuring flexibility and convenience.

If you’re yet to set up a WPMU DEV account, we encourage you to become an Agency member. It’s 100% risk-free and includes everything you need to manage your clients and resell services like hosting, domains, emails, and more, all under your own brand.

If you’re already an Agency member, then head over to your Hub and click on Webmail to get started. If you need any help, our support team is available 24×7 (or ask our AI assistant) and you can also check out our extensive webmail documentation.

The End Of My Gatsby Journey

A fun fact about me is that my birthday is on Valentine’s Day. This year, I wanted to celebrate by launching a simple website that lets people receive anonymous letters through a personal link. The idea came to me at the beginning of February, so I wanted to finish the project as soon as possible since time was of the essence.

Having that in mind, I decided not to do SSR/SSG with Gatsby for the project but rather go with a single-page application (SPA) using Vite and React — a rather hard decision considering my extensive experience with Gatsby. Years ago, when I started using React and learning more and more about today’s intricate web landscape, I picked up Gatsby.js as my render framework of choice because SSR/SSG was necessary for every website, right?

I used it for everything, from the most basic website to the most over-engineered project. I absolutely loved it and thought it was the best tool, and I was incredibly confident in my decision since I was getting perfect Lighthouse scores in the process.

The years passed, and I found myself constantly fighting with Gatsby plugins, resorting to hacky solutions for them and even spending more time waiting for the server to start. It felt like I was fixing more than making. I even started a series for this magazine all about the “Gatsby headaches” I experienced most and how to overcome them.

It was like Gatsby got tougher to use with time because of lots of unaddressed issues: outdated dependencies, cold starts, slow builds, and stale plugins, to name a few. Starting a Gatsby project became tedious for me, and perfect Lighthouse scores couldn’t make up for that.

So, I’ve decided to stop using Gatsby as my go-to framework.

To my surprise, the Vite + React combination I mentioned earlier turned out to be a lot more efficient than I expected while maintaining almost the same great performance measures as Gatsby. It’s a hard conclusion to stomach after years of Gatsby’s loyalty.

I mean, I still think Gatsby is extremely useful for plenty of projects, and I plan on talking about those in a bit. But Gatsby has undergone a series of recent unfortunate events after Netlify acquired it, the impacts of which can be seen in down-trending results from the most recent State of JavaScript survey. The likelihood of a developer picking up Gatsby again after using it for other projects plummeted from 89% to a meager 38% between 2019 and 2022 alone.

Although Gatsby was still the second most-used rendering framework as recently as 2022 — we are still expecting results from the 2023 survey — my prediction is that the decline will continue and dip well below 38%.

Seeing as this is my personal farewell to Gatsby, I wanted to write about where, in my opinion, it went wrong, where it is still useful, and how I am handling my future projects.

Gatsby: A Retrospective

Kyle Mathews started working on what would eventually become Gatsby in late 2015. Thanks to its unique data layer and SSG approach, it was hyped for success and achieved a $3.8 million funding seed round in 2018. Despite initial doubts, Gatsby remained steadfast in its commitment and became a frontrunner in the Jamstack community by consistently enhancing its open-source framework and bringing new and better changes with each version.

So... where did it all go wrong?

I’d say it was the introduction of Gatsby Cloud in 2019, as Gatsby aimed to generate continuous revenue and solidify its business model. Many (myself included) pinpoint Gatsby’s downfall to Gatsby Cloud, as it would end up cutting resources from the main framework and even making it harder to host in other cloud providers.

The core framework had been optimized so that using Gatsby and Gatsby Cloud together required no additional hosting configuration. As a consequence, deployments on other platforms became much more difficult, both because documentation for third-party deployments was neglected and because exclusive features, like incremental builds, were only available to Gatsby users who had committed to using Gatsby Cloud. In short, hosting projects on anything but Gatsby Cloud felt like a penalty.

As a framework, Gatsby lost users to Next.js, as shown in both surveys and npm trends, while Gatsby Cloud struggled to compete with the likes of Vercel and Netlify, with the latter acquiring Gatsby in February of 2023.

“It [was] clear after a while that [Gatsby] weren’t winning the framework battle against Vercel, as a general purpose framework [...] And they were probably a bit boxed in by us in terms of building a cloud platform.”

Matt Biilmann, Netlify CEO

The Netlify acquisition was the last straw in an already tumbling framework haystack. The migration from Gatsby Cloud to Netlify wasn’t pretty for customers either; some teams were charged 120% more — or had incurred extraneous fees — after converting from Gatsby Cloud to Netlify, even with the same Gatsby Cloud plan they had! Many key Gatsby Cloud features, specifically incremental builds that reduced build times of small changes from minutes to seconds, were simply no longer available in Netlify, despite Kyle Mathews saying they would be ported over to Netlify:

“Many performance innovations specifically for large, content-heavy websites, preview, and collaboration workflows, will be incorporated into the Netlify platform and, where relevant, made available across frameworks.”

— Kyle Mathews

However, in a Netlify forum thread dated August 2023, a mere six months after the acquisition, a Netlify support engineer contradicted Mathews’s statement, saying there were no plans to add incremental features in Netlify.

That left no significant reason to remain with Gatsby. And I think this comment on the same thread perfectly sums up the community’s collective sentiment:

“Yikes. Huge blow to Gatsby Cloud customers. The incremental build speed was exactly why we switched from Netlify to Gatsby Cloud in the first place. It’s really unfortunate to be forced to migrate while simultaneously introducing a huge regression in performance and experience.”

Netlify’s acquisition also brought about a company restructuring that substantially reduced the headcount of Gatsby’s engineering team, followed by a complete stop in commit activity. An ominous tweet by Astro co-founder Fred Schott further exacerbated concerns about Gatsby’s future.

Lennart Jörgens, former full-stack developer at Gatsby and Netlify, replied, insinuating there was only one person left after the layoffs:

You can see all these factors contributing to Gatsby’s usage downfall in the 2023 Stack Overflow survey.

Biilmann addressed the community’s concerns about Gatsby’s viability in an open issue from the Gatsby repository:

“While we don’t plan for Gatsby to be where the main innovation in the framework ecosystem takes place, it will be a safe, robust and reliable choice to build production quality websites and e-commerce stores, and will gain new powers by ways of great complementary tools.”

— Matt Biilmann

He also shed light on Gatsby’s future focus:

  • “First, ensure stability, predictability, and good performance.
  • Second, give it new powers by strong integration with all new tooling that we add to our Composable Web Platform (for more on what’s all that, you can check out our homepage).
  • Third, make Gatsby more open by decoupling some parts of it that were closely tied to proprietary cloud infrastructure. The already-released Adapters feature is part of that effort.”

— Matt Biilmann

So, Gatsby gave up competing against Next.js on innovation, and instead, it will focus on keeping the existing framework clean and steady in its current state. Frankly, this seems like the most reasonable course of action considering today’s state of affairs.

Why Did People Stop Using Gatsby?

Yes, Gatsby Cloud ended abruptly, but even setting the cloud provider aside, other aspects of the framework itself encouraged developers to look for alternatives to Gatsby.

As far as I am concerned, Gatsby’s developer experience (DX) became more of a burden than a help, and there are two main culprits where I lay the blame: dependency hell and slow bundling times.

Dependency Hell

Go ahead and start a new Gatsby project:

gatsby new

After waiting a couple of minutes, you will get your brand new Gatsby site. You’d rightly expect this out-of-the-box setup to be a clean slate with zero vulnerabilities and no outdated dependencies, but here’s what you will find in the terminal once you run npm audit:

18 vulnerabilities (11 moderate, 6 high, 1 critical)

That looks concerning — and it is — not so much from a security perspective but as an indication of decaying DX. As a static site generator (SSG), Gatsby will, unsurprisingly, deliver a static and safe site that (normally) doesn’t have access to a database or server, making it immune to most cyber attacks. Besides, lots of those vulnerabilities are in the developer tools and never reach the end user. Alas, relying on npm audit to assess your site security is a naive choice at best.

However, those vulnerabilities reveal an underlying issue: the whopping number of dependencies Gatsby uses is 168(!) at the time I’m writing this. For the sake of comparison, Next.js uses 16 dependencies. A lot of Gatsby’s dependencies are outdated, hence the warnings, but trying to update them to their latest versions will likely unleash a dependency hell full of additional npm warnings and errors.

In a related subreddit from 2022, a user asked, “Is it possible to have a Gatsby site without vulnerabilities?”

The real answer is disappointing, but as of March 2024, it remains true.

A Gatsby site should work completely fine, even with that many dependencies, and extending your project shouldn’t be a problem, whether through its plugin ecosystem or other packages. However, when trying to upgrade any existing dependency you will find that you can’t! Or at least you can’t do it without introducing breaking changes to one of the 168 dependencies, many of which rely on outdated versions of other libraries that also cannot be updated.

It’s that inception-like roundabout of dependencies that I call dependency hell.

Slow Build And Development Times

To me, one of the most important aspects of choosing a development tool is how comfortable it feels to use it and how fast it is to get a project up and running. As I’ve said before, users don’t care or know what a “tech stack” is or what framework is in use; they want a good-looking website that helps them achieve the task they came for. Many developers don’t even question what tech stack is used on each site they visit; at least, I hope not.

With that in mind, choosing a framework boils down to how efficiently you can use it. If your development server constantly experiences cold starts and crashes and is unable to quickly reflect changes, that’s a poor DX and a signal that there may be a better option.

That’s the main reason I won’t automatically reach for Gatsby from here on out. Installation is no longer a trivial task; the dependencies are firing off warnings, and it takes the development server upwards of 30 seconds to boot. I’ve even found that the longer the server runs, the slower it gets; this happens constantly to me, though I admittedly have not heard similar gripes from other developers. Regardless, I get infuriated having to constantly restart my development server every time I make a change to gatsby-config.js, gatsby-node.js files, or any other data source.

This new reality is particularly painful, knowing that a Vite.js + React setup can start a server within 500ms thanks to the use of esbuild.

Running gatsby build gets worse. Builds for larger projects normally take several minutes, which is understandable when we consider all of the pages, data sources, and optimizations Gatsby does behind the scenes. However, even a small content edit to a page triggers a full build and deployment process, and the endless waiting is not only exhausting but downright distracting for getting things done. That’s what incremental builds were designed to solve and the reason many people switched from Netlify to Gatsby Cloud when using Gatsby. It’s a shame we no longer have that as an available option.

The moment Gatsby Cloud was discontinued along with incremental builds, the incentives for continuing to use Gatsby became pretty much non-existent. The slow build times are simply too costly to the development workflow.

What Gatsby Did Awesomely Well

I still believe that Gatsby has awesome things that other rendering frameworks don’t, and that’s why I will keep using it, albeit for specific cases, such as my personal website. It just isn’t my go-to framework for everything, mainly because Gatsby (and the Jamstack) wasn’t meant for every project, even if Gatsby was marketed as a general-purpose framework.

Here’s where I see Gatsby still leading the competition:

  • The GraphQL data layer.
    In Gatsby, all the configured data is available in the same place, a data layer that’s easy to access using GraphQL queries in any part of your project. This is by far the best Gatsby feature, and it trivializes the process of building static pages from data, e.g., a blog from a content management system API or documentation from Markdown files.
  • Client performance.
    While Gatsby’s developer experience is questionable, I believe it delivers one of the best user experiences for navigating a website. Static pages and assets deliver the fastest possible load times, and using React Router with pre-rendering of proximate links offers one of the smoothest experiences navigating between pages. We also have to note Gatsby’s amazing image API, which optimizes images to all extents.
  • The plugin ecosystem (kinda).
    There is typically a Gatsby plugin for everything. This is awesome when using a CMS as a data source since you could just install its specific plugin and have all the necessary data in your data layer. However, a lot of plugins went unmaintained and grew outdated, introducing unsolvable dependency issues that come with dependency hell.

I briefly glossed over the good parts of Gatsby in contrast to the bad parts. Does that mean that Gatsby has more bad parts? Absolutely not; you just won’t find the bad parts in any documentation. The bad parts also aren’t deal breakers in isolation, but they snowball into a tedious and lengthy developer experience that pushes away its advocates to other solutions or rendering frameworks.

Do We Need SSR/SSG For Everything?

I’ll go on record saying that I am not replacing Gatsby with another rendering framework, like Next.js or Remix, but just avoiding them altogether. I’ve found they aren’t actually needed in a lot of cases.

Think about it: why do we use any type of rendering framework in the first place? I’d say it’s for two main reasons: crawling bots and initial loading time.

SEO And Crawling Bots

Most React apps start with a hollow body, only having an empty <div> alongside <script> tags. The JavaScript code then runs in the browser, where React creates the Virtual DOM and injects the rendered user interface into the browser.

Over slow networks, users may notice a white screen before the page is actually rendered, which is just mildly annoying at best (but devastating at worst).

However, search engines like Google and Bing deploy bots that only see an empty page and decide not to crawl the content. Or, if you are linking up a post on social media, you may not get OpenGraph benefits like a link preview.

<body>
  <div id="root"></div>

  <script type="module" src="/src/main.tsx"></script>
</body>

This was the case years ago, making SSR/SSG necessary for getting noticed by Google bots. Nowadays, Google can run JavaScript and render the content to crawl your website. While using SSR or SSG does make this process faster, not all bots can run JavaScript. It’s a tradeoff you can make for a lot of projects and one you can minimize on your cloud provider by pre-rendering your content.

Initial Loading Time

Pre-rendered pages load faster since they deliver static content that relieves the browser from having to run expensive JavaScript.

It’s especially useful when loading pages that are behind authentication; in a client-side rendered (CSR) page, we would need to display a loading state while we check if the user is logged in, while an SSR page can perform the check on the server and send back the correct static content. I have found, however, that this trade-off is an uncompelling argument for using a rendering framework over a CSR React app.

In any case, my SPA built on React + Vite.js gave me a perfect Lighthouse score for the landing page. Pages that fetch data behind authentication resulted in near-perfect Core Web Vitals scores.

What Projects Gatsby Is Still Good For

Gatsby and rendering frameworks are excellent for programmatically creating pages from data and, specifically, for blogs, e-commerce, and documentation.

Don’t be disappointed, though, if it isn’t the right tool for every use case, as that is akin to blaming a screwdriver for not being a good hammer. It still has good uses, though fewer than it could due to all the reasons we discussed before.

But Gatsby is still a useful tool. If you are a Gatsby developer, the main reason you’d reach for it is that you already know Gatsby. Not using it might be considered an opportunity cost in economic terms:

“Opportunity cost is the value of the next-best alternative when a decision is made; it’s what is given up.”

Imagine a student who spends an hour and $30 attending a yoga class the evening before a deadline. The opportunity cost encompasses the time that could have been dedicated to completing the project and the $30 that could have been used for future expenses.

As a Gatsby developer, I could start a new project using another rendering framework like Next.js. Even if Next.js has faster server starts, I would need to factor in my learning curve to use it as efficiently as I do Gatsby. That’s why, for my latest project, I decided to avoid rendering frameworks altogether and use Vite.js + React — I wanted to avoid the opportunity cost that comes with spending time learning how to use an “unfamiliar” framework.

Conclusion

So, is Gatsby dead? Not at all, or at least I don’t think Netlify will let it go away any time soon. The acquisition and subsequent changes to Gatsby Cloud may have taken a massive toll on the core framework, but Gatsby is very much still breathing, even if the current slow commits pushed to the repo look like it’s barely alive or hibernating.

I will most likely stick to Vite.js + React for my future endeavors and only use rendering frameworks when I actually need them. What are the tradeoffs? Sacrificing negligible page performance in favor of a faster and more pleasant DX that maintains my sanity? I’ll take that deal every day.

And, of course, this is my experience as a long-time Gatsby loyalist. Your experience is likely to differ, so the mileage of everything I’m saying may vary depending on your background using Gatsby on your own projects.

That’s why I’d love for you to comment below: if you see it differently, please tell me! Is your current experience using Gatsby different, better, or worse than it was a year ago? What’s different to you, if anything? It would be awesome to get other perspectives in here, perhaps from someone who has been involved in maintaining the framework.

Further Reading On SmashingMag

A Roundup Of WCAG 2.2 Explainers

WCAG 2.2 is officially the latest version of the Web Content Accessibility Guidelines now that it has become a “W3C Recommended” web standard as of October 5, 2023.

The changes between WCAG 2.1 and 2.2 are nicely summed up in “What’s New in WCAG 2.2”:

“WCAG 2.2 provides 9 additional success criteria since WCAG 2.1. [...] The 2.0 and 2.1 success criteria are essentially the same in 2.2, with one exception: 4.1.1 Parsing is obsolete and removed from WCAG 2.2.”

This article is not a deep look at the changes, what they mean, and how to conform to them. Plenty of other people have done a wonderful job of that already. So, rather than add to the pile, let’s round up what has already been written and learn from those who keep a close pulse on the WCAG beat.

There are countless articles and posts about WCAG 2.2 written ahead of the formal W3C recommendation. The following links were selected because they were either published or updated after the announcement and reflect the most current information at the time of this writing. It’s also worth mentioning that we’re providing these links purely for reference — by no means are they sponsored, nor do they endorse a particular person, company, or product.

The best place for information on WCAG standards will always be the guidelines themselves, but we hope you enjoy what others are saying about them as well.

Hidde de Vries: What’s New In WCAG 2.2?

Hidde is a former W3C staffer, and he originally published this WCAG 2.2 overview last year when a draft of the guidelines was released, updating his post immediately when the guidelines became a recommendation.

Patrick Lauke: What’s New In WCAG 2.2

Patrick is a current WCAG member and contributor, also serving as Principal Accessibility Specialist at TetraLogical, which itself is also a W3C member.

This overview goes deeper than most, reporting not only what is new in WCAG 2.2 but how to conform to those standards, including specific examples with excellent visuals.

James Edwards: New Success Criteria In WCAG 2.2

James is a seasoned accessibility consultant with TPGi, a provider of end-to-end accessibility services and products.

Like Patrick, James gets into thorough and detailed information about WCAG 2.2 and how to meet the updated standards. Watch for little asides strewn throughout the post that provide even further context on why the changes were needed and how they were developed.

GOV.UK: Understanding WCAG 2.2

It’s always interesting to see how large organizations approach standards, and governments are no exception because they have a mandate to meet accessibility requirements. GOV.UK published an addendum on WCAG 2.2 updates to its Service Manual.

Notice how the emphasis is on the impact the new guidelines have on specific impairments, as well as ample examples of what it looks like to meet the standards. Equally impressive is the documented measured approach GOV.UK takes, including a goal to be fully compliant by October 2024 while maintaining WCAG 2.1 AA compliance in the meantime.

Deque Systems: Deque Systems Welcomes and Announces Support for WCAG 2.2

Despite being more of a press release, this brief overview has a nice clean table that outlines the new standards and how they align with those who stand to benefit most from them.

Kate Kalcevich: WCAG 2.2: What Changes for Websites and How Does It Impact Users?

Kate really digs into the benefits that users get with WCAG 2.2 compliance. Photos of Kate’s colleague, Samuel Proulx, don’t provide new context but are a nice touch for remembering that the updated guidelines are designed to help real people, a point that is emphasized in the conclusion:

“[W]hen thinking about accessibility beyond compliance, it becomes clear that the latest W3C guidelines are just variations on a theme. The theme is removing barriers and making access possible for everyone.”
— Kate Kalcevich

Level Access: WCAG 2.2 AA Summary and Checklist for Website Owners

Finally, we’ve reached the first checklist! That may be in name only, as this is less of a checklist of tasks than it is a high-level overview of the latest changes. There is, however, a link to download “the must-have WCAG checklist,” but you will need to hand over your name and email address in exchange.

Chris Pycroft: WCAG 2.2 Is Here

While this is more of an announcement than a guide, there is plenty of useful information in there. The reason I’m linking it up is the “WCAG 2.2 Map” PDF that Chris includes in it. It’d be great if there was a web version of it, but I’ll take it either way! The map neatly outlines the success criteria by branching them off the four core WCAG principles.

Shira Blank and Joshua Stein: After More Than a Year of Delays, It Is Time to Officially Welcome WCAG 2.2

This is a nice overview. Nothing more, nothing less. It does include a note that WCAG 2.2 is slated to be the last WCAG 2 update between now and WCAG 3, which apparently is codenamed “Silver”? Nice.

Nathan Schmidt: Demystifying WCAG 2.2

True to its title, this overview nicely explains WCAG 2.2 updates devoid of complex technical jargon. What makes it worth including in this collection, however, are the visuals that help drive home the points.

Craig Abbott: WCAG 2.2 And What It Means For You

Craig’s write-up is a lot like the others in that it’s a high-level overview of changes paired with advice for complying with them. But Craig has a knack for discussing the changes in a way that’s super approachable and even reads like a friendly conversation. There are personal anecdotes peppered throughout the post, including Craig’s own views of the standards themselves.

“I personally feel like the new criteria for Focus Appearance could have been braver and removed some of the ambiguity around what is already often an accessibility issue.”
— Craig Abbott

Dennis Lembrée: WCAG 2.2 Checklist With Filter And Links

Dennis published a quick post on his Web Axe blog reporting on WCAG 2.2, but it’s this CodePen demo he put together that’s the real gem.

See the Pen WCAG 2.2 Checklist with Filter and Links [forked] by Web Overhauls.

It’s a legit checklist of WCAG 2 requirements you can filter by release, including the new WCAG 2.2 changes and which chapter of the specifications they align to.

Jason Taylor: WCAG 2.2 Is Here! What It Means For Your Business

Yet another explainer, this time from Jason Taylor at UsableNet. You’ll find a lot of cross-over between this and the others in this roundup, but it’s always good to read about the changes with someone else’s words and perspectives.

Wrapping Up

There are many, many WCAG 2.2 explainers floating around — many more than what’s included in this little roundup. The number of changes introduced in the updated guidelines is surprisingly small, considering WCAG 2.1 was adopted in 2018, but that doesn’t make them any less impactful. So, yes, you’re going to see plenty of overlapping information between explainers. The nuances between them, though, are what makes them valuable, and each one has something worth taking with you.

And we’re likely to see even more explainers pop up! If you know of one that really should be included in this roundup, please do link it up in the comments to share with the rest of us.

Generating Real-Time Audio Sentiment Analysis With AI

In the previous article, we developed a sentiment analysis tool that could detect and score emotions hidden within audio files. We’re taking it to the next level in this article by integrating real-time analysis and multilingual support. Imagine analyzing the sentiment of your audio content in real-time as the audio file is transcribed. In other words, the tool we are building offers immediate insights as an audio file plays.

So, how does it all come together? Meet Whisper and Gradio — the two resources that sit under the hood. Whisper is an advanced automatic speech recognition and language detection library. It swiftly converts audio files to text and identifies the language. Gradio is a UI framework that happens to be designed for interfaces that utilize machine learning, which is ultimately what we are doing in this article. With Gradio, you can create user-friendly interfaces without complex installations, configurations, or any machine learning experience — the perfect tool for a tutorial like this.

By the end of this article, we will have created a fully-functional app that:

  • Records audio from the user’s microphone,
  • Transcribes the audio to plain text,
  • Detects the language,
  • Analyzes the emotional qualities of the text, and
  • Assigns a score to the result.

Note: You can peek at the final product in the live demo.

Automatic Speech Recognition And Whisper

Let’s delve into the fascinating world of automatic speech recognition and its ability to analyze audio. In the process, we’ll also introduce Whisper, an automated speech recognition tool developed by the OpenAI team behind ChatGPT and other emerging artificial intelligence technologies. Whisper has redefined the field of speech recognition with its innovative capabilities, and we’ll closely examine its available features.

Automatic Speech Recognition (ASR)

ASR technology is a key component for converting speech to text, making it a valuable tool in today’s digital world. Its applications are vast and diverse, spanning various industries. ASR can efficiently and accurately transcribe audio files into plain text. It also powers voice assistants, enabling seamless interaction between humans and machines through spoken language. It’s used in myriad ways, such as in call centers that automatically route calls and provide callers with self-service options.

By automating audio conversion to text, ASR significantly saves time and boosts productivity across multiple domains. Moreover, it opens up new avenues for data analysis and decision-making.

That said, ASR does have its fair share of challenges. For example, its accuracy is diminished when dealing with different accents, background noises, and speech variations — all of which require innovative solutions to ensure accurate and reliable transcription. The development of ASR systems capable of handling diverse audio sources, adapting to multiple languages, and maintaining exceptional accuracy is crucial for overcoming these obstacles.

Whisper: A Speech Recognition Model

Whisper is a speech recognition model also developed by OpenAI. This powerful model excels at speech recognition and offers language identification and translation across multiple languages. It’s an open-source model available in five different sizes, four of which have an English-only variant that performs exceptionally well for single-language tasks.

What sets Whisper apart is its robust ability to overcome ASR challenges. Whisper achieves near state-of-the-art performance and even supports zero-shot translation from various languages to English. Whisper has been trained on a large corpus of data that characterizes ASR’s challenges. The training data consists of approximately 680,000 hours of multilingual and multitask supervised data collected from the web.

The model is available in multiple sizes. The following table outlines these model characteristics:

Size     Parameters   English-only model   Multilingual model   Required VRAM   Relative speed
Tiny     39 M         tiny.en              tiny                 ~1 GB           ~32x
Base     74 M         base.en              base                 ~1 GB           ~16x
Small    244 M        small.en             small                ~2 GB           ~6x
Medium   769 M        medium.en            medium               ~5 GB           ~2x
Large    1550 M       N/A                  large                ~10 GB          1x

For developers working with English-only applications, it’s essential to consider the performance differences among the .en models — specifically, tiny.en and base.en, both of which perform better than their multilingual counterparts.

Whisper utilizes a Seq2seq (i.e., transformer encoder-decoder) architecture commonly employed in language-based models. This architecture’s input consists of audio frames, typically 30-second segment pairs. The output is a sequence of the corresponding text. Its primary strength lies in transcribing audio into text, making it ideal for “audio-to-text” use cases.
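As a quick taste of that “audio-to-text” workflow, here is a minimal sketch of Whisper’s transcription API. The file name is a placeholder for any local audio file:

import whisper

# Load the base model and transcribe a local file; "sample.mp3" is a
# placeholder for any audio file on disk.
model = whisper.load_model("base")
result = model.transcribe("sample.mp3")

print(result["language"])  # detected language code, e.g. "en"
print(result["text"])      # the full transcription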

Real-Time Sentiment Analysis

Next, let’s move into the different components of our real-time sentiment analysis app. We’ll explore a powerful pre-trained language model and an intuitive user interface framework.

Hugging Face Pre-Trained Model

I relied on the DistilBERT model in my previous article, but we’re trying something new now. To analyze sentiments precisely, we’ll use a pre-trained model called roberta-base-go_emotions, readily available on the Hugging Face Model Hub.

Gradio UI Framework

To make our application more user-friendly and interactive, I’ve chosen Gradio as the framework for building the interface. Last time, we used Streamlit, so it’s a little bit of a different process this time around. You can use any UI framework for this exercise.

I’m using Gradio specifically for its machine learning integrations to keep this tutorial focused more on real-time sentiment analysis than fussing with UI configurations. Gradio is explicitly designed for creating demos just like this, providing everything we need — including the language models, APIs, UI components, styles, deployment capabilities, and hosting — so that experiments can be created and shared quickly.

Initial Setup

It’s time to dive into the code that powers the sentiment analysis. I will break everything down and walk you through the implementation to help you understand how everything works together.

Before we start, we must ensure we have the required libraries installed; they can be installed with pip. If you are using Google Colab, you can install the libraries using the following commands:

!pip install gradio
!pip install transformers
!pip install git+https://github.com/openai/whisper.git

Once the libraries are installed, we can import the necessary modules:

import gradio as gr
import whisper
from transformers import pipeline

This imports Gradio, Whisper, and the pipeline function from Transformers, which we will use to perform sentiment analysis with a pre-trained model.

Like we did last time, the project folder can be kept relatively small and straightforward. All of the code we are writing can live in an app.py file. Gradio is based on Python, but the UI framework you ultimately use may have different requirements. Again, I’m using Gradio because it is deeply integrated with machine learning models and APIs, which is ideal for a tutorial like this.

Gradio projects also usually include a requirements.txt file that lists the app’s Python dependencies so the hosting environment (Hugging Face Spaces, in our case) knows what to install. I would include it even if your app has very few dependencies.
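For this project, it might contain something along these lines, based on the libraries we installed above (when using the Gradio SDK on Spaces, gradio itself is typically provided for you):

transformers
git+https://github.com/openai/whisper.git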

To set up our application, we load Whisper and initialize the sentiment analysis component in the app.py file:

model = whisper.load_model("base")

sentiment_analysis = pipeline(
  "sentiment-analysis",
  framework="pt",
  model="SamLowe/roberta-base-go_emotions"
)

So far, we’ve set up our application by loading the Whisper model for speech recognition and initializing the sentiment analysis component using a pre-trained model from Hugging Face Transformers.
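Before moving on, it can help to sanity-check the pipeline we just created on a plain string. The input below is made up, and the exact label and score you get back will vary:

results = sentiment_analysis("I can't wait to try this out!")
print(results)
# Something like: [{'label': 'excitement', 'score': 0.92}]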

Defining Functions For Whisper And Sentiment Analysis

Next, we must define four functions related to the Whisper and pre-trained sentiment analysis models.

Function 1: analyze_sentiment(text)

This function takes a text input and performs sentiment analysis using the pre-trained sentiment analysis model. It returns a dictionary containing the sentiments and their corresponding scores.

def analyze_sentiment(text):
  results = sentiment_analysis(text)
  sentiment_results = {
    result["label"]: result["score"] for result in results
  }
  return sentiment_results

Function 2: get_sentiment_emoji(sentiment)

This function takes a sentiment as input and returns a corresponding emoji that helps indicate the sentiment at a glance. For example, an “optimism” sentiment returns a “😊” emoji. In other words, sentiments are mapped to emojis, and the function returns the emoji associated with the given sentiment. If no match is found, it returns an empty string.

def get_sentiment_emoji(sentiment):
  # Define the mapping of sentiments to emojis
  emoji_mapping = {
    "disappointment": "😞",
    "sadness": "😢",
    "annoyance": "😠",
    "neutral": "😐",
    "disapproval": "👎",
    "realization": "😮",
    "nervousness": "😬",
    "approval": "👍",
    "joy": "😄",
    "anger": "😡",
    "embarrassment": "😳",
    "caring": "🤗",
    "remorse": "😔",
    "disgust": "🤢",
    "grief": "😥",
    "confusion": "😕",
    "relief": "😌",
    "desire": "😍",
    "admiration": "😌",
    "optimism": "😊",
    "fear": "😨",
    "love": "❤️",
    "excitement": "🎉",
    "curiosity": "🤔",
    "amusement": "😄",
    "surprise": "😲",
    "gratitude": "🙏",
    "pride": "🦁"
  }
  return emoji_mapping.get(sentiment, "")

Function 3: display_sentiment_results(sentiment_results, option)

This function displays the sentiment results based on a selected option, allowing users to choose how the results are formatted. There are two options: show each sentiment with an emoji only, or show each sentiment with an emoji and its calculated score. The function takes the sentiment results (sentiments and scores) and the selected display option as inputs, formats each entry accordingly, and returns the resulting text (sentiment_text).

def display_sentiment_results(sentiment_results, option):
  sentiment_text = ""
  for sentiment, score in sentiment_results.items():
    emoji = get_sentiment_emoji(sentiment)
    if option == "Sentiment Only":
      sentiment_text += f"{sentiment} {emoji}\n"
    elif option == "Sentiment + Score":
      sentiment_text += f"{sentiment} {emoji}: {score}\n"
  return sentiment_text
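For example, calling the function with a made-up results dictionary shows how each display option formats the output (the values here are hypothetical):

results = {"joy": 0.93, "optimism": 0.05}

print(display_sentiment_results(results, "Sentiment Only"))
# joy 😄
# optimism 😊

print(display_sentiment_results(results, "Sentiment + Score"))
# joy 😄: 0.93
# optimism 😊: 0.05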

Function 4: inference(audio, sentiment_option)

This function performs the full inference process: language identification, speech recognition with Whisper, and sentiment analysis. It takes the audio file and the sentiment display option as inputs and returns the detected language, the transcription, and the formatted sentiment analysis results, all of which we will display in the front-end UI we build with Gradio in the next section of this article.

def inference(audio, sentiment_option):
  audio = whisper.load_audio(audio)
  audio = whisper.pad_or_trim(audio)

  mel = whisper.log_mel_spectrogram(audio).to(model.device)

  _, probs = model.detect_language(mel)
  lang = max(probs, key=probs.get)

  options = whisper.DecodingOptions(fp16=False)
  result = whisper.decode(model, mel, options)

  sentiment_results = analyze_sentiment(result.text)
  sentiment_output = display_sentiment_results(sentiment_results, sentiment_option)

  return lang.upper(), result.text, sentiment_output
Creating The User Interface

Now that we have the foundation for our project — Whisper, Gradio, and functions for returning a sentiment analysis — in place, all that’s left is to build the layout that takes the inputs and displays the returned results for the user on the front end.

The following steps I will outline are specific to Gradio’s UI framework, so your mileage will undoubtedly vary depending on the framework you decide to use for your project.

Defining The Header Content

We’ll start with the header containing a title, an image, and a block of text describing how sentiment scoring is evaluated.

Let’s define variables for those three pieces:

title = """🎤 Multilingual ASR 💬"""
image_path = "/content/thumbnail.jpg"

description = """
  💻 This demo showcases a general-purpose speech recognition model called Whisper. It is trained on a large dataset of diverse audio and supports multilingual speech recognition and language identification tasks.

📝 For more details, check out the [GitHub repository](https://github.com/openai/whisper).

⚙️ Components of the tool:

     - Real-time multilingual speech recognition
     - Language identification
     - Sentiment analysis of the transcriptions

🎯 The sentiment analysis results are provided as a dictionary with different emotions and their corresponding scores.

😃 The sentiment analysis results are displayed with emojis representing the corresponding sentiment.

✅ The higher the score for a specific emotion, the stronger the presence of that emotion in the transcribed text.

❓ Use the microphone for real-time speech recognition.

⚡️ The model will transcribe the audio and perform sentiment analysis on the transcribed text.
"""

Applying Custom CSS

Styling the layout and UI components is outside the scope of this article, but I think it’s important to demonstrate how to apply custom CSS in a Gradio project. It can be done with a custom_css variable that contains the styles:

custom_css = """
  #banner-image {
    display: block;
    margin-left: auto;
    margin-right: auto;
  }
  #chat-message {
    font-size: 14px;
    min-height: 300px;
  }
"""

Creating Gradio Blocks

Gradio’s UI framework is based on the concept of blocks. A block is used to define layouts, components, and events combined to create a complete interface with which users can interact. For example, we can create a block specifically for the custom CSS from the previous step:

block = gr.Blocks(css=custom_css)

Let’s apply our header elements from earlier into the block:

block = gr.Blocks(css=custom_css)

with block:
  gr.HTML(title)

  with gr.Row():
    with gr.Column():
      gr.Image(image_path, elem_id="banner-image", show_label=False)
    with gr.Column():
      gr.HTML(description)

That pulls together the app’s title, image, description, and custom CSS.

Creating The Form Component

The app is based on a form element that takes audio from the user’s microphone, then outputs the transcribed text and sentiment analysis formatted based on the user’s selection.

In Gradio, we define a Group() containing a Box() component. A group is merely a container to hold child components without any spacing. In this case, the Group() is the parent container for a Box() child component, a pre-styled container with a border, rounded corners, and spacing.

with gr.Group():
  with gr.Box():

With our Box() component in place, we can use it as a container for the audio file form input, the radio buttons for choosing a format for the analysis, and the button to submit the form:

with gr.Group():
  with gr.Box():
    # Audio Input
    audio = gr.Audio(
      label="Input Audio",
      show_label=False,
      source="microphone",
      type="filepath"
    )

    # Sentiment Option
    sentiment_option = gr.Radio(
      choices=["Sentiment Only", "Sentiment + Score"],
      label="Select an option",
      default="Sentiment Only"
    )

    # Transcribe Button
    btn = gr.Button("Transcribe")

Output Components

Next, we define Textbox() components as output components for the detected language, transcription, and sentiment analysis results.

lang_str = gr.Textbox(label="Language")
text = gr.Textbox(label="Transcription")
sentiment_output = gr.Textbox(label="Sentiment Analysis Results", output=True)

Button Action

Before we move on to the footer, it’s worth specifying the action executed when the form’s Button() component — the "Transcribe" button — is clicked. We want to trigger the fourth function we defined earlier, inference(), using the required inputs and outputs.

btn.click(
  inference,
  inputs=[
    audio,
    sentiment_option
  ],
  outputs=[
    lang_str,
    text,
    sentiment_output
  ]
)

Footer HTML

This is the very bottom of the layout, and I’m giving OpenAI credit with a link to their GitHub repository.

gr.HTML('''
  <div class="footer">
    <p>Model by <a href="https://github.com/openai/whisper" style="text-decoration: underline;" target="_blank">OpenAI</a>
    </p>
  </div>
''')

Launch the Block

Finally, we launch the Gradio block to render the UI.

block.launch()
Hosting & Deployment

Now that we have successfully built the app’s UI, it’s time to deploy it. We’ve already used Hugging Face resources, like its Transformers library. In addition to supplying machine learning capabilities, pre-trained models, and datasets, Hugging Face also provides a social hub called Spaces for deploying and hosting Python-based demos and experiments.

You can use your own host, of course. I’m using Spaces because it’s so deeply integrated with our stack that it makes deploying this Gradio app a seamless experience.

In this section, I will walk you through the Spaces deployment process.

Creating A New Space

Before we start with deployment, we must create a new Space.

The setup is pretty straightforward but requires a few pieces of information, including:

  • A name for the Space (mine is “Real-Time-Multilingual-sentiment-analysis”),
  • A license type for fair use (e.g., a BSD license),
  • The SDK (we’re using Gradio),
  • The hardware used on the server (the “free” option is fine), and
  • Whether the app is publicly visible to the Spaces community or private.

Once a Space has been created, it can be cloned, or a remote can be added to its current Git repository.
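For example, with Git installed, either of the following works; the username here is a placeholder, and the Space name is the one I chose earlier:

git clone https://huggingface.co/spaces/your-username/Real-Time-Multilingual-sentiment-analysis

# Or, from an existing local repository:
git remote add space https://huggingface.co/spaces/your-username/Real-Time-Multilingual-sentiment-analysis
git push space main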

Deploying To A Space

We have an app and a Space to host it. Now we need to deploy our files to the Space.

There are a couple of options here. If you already have the app.py and requirements.txt files on your computer, you can use Git from a terminal to commit and push them to your Space by following these well-documented steps. Or, if you prefer, you can create app.py and requirements.txt directly from the Space in your browser.

Push your code to the Space, and watch the blue “Building” status that indicates the app is being processed for production.


Conclusion

And that’s a wrap! Together, we successfully created and deployed an app capable of converting an audio file into plain text, detecting the language, analyzing the transcribed text for emotion, and assigning a score that indicates that emotion.

We used several tools along the way, including OpenAI’s Whisper for automatic speech recognition, four functions for producing a sentiment analysis, a pre-trained machine learning model called roberta-base-go_emotions that we pulled from the Hugging Face Hub, Gradio as a UI framework, and Hugging Face Spaces to deploy the work.

How will you use these real-time, sentiment-scoring capabilities in your work? I see so much potential in this type of technology that I’m interested to know (and see) what you make and how you use it. Let me know in the comments!

Further Reading On SmashingMag

The Art Of Looking Back: A Critical Reflection For Individual Contributors

Have you ever looked back at your younger self and wondered, “What was I even thinking?” If you have, then you know how important it is to acknowledge change, appreciate growth, and learn from your mistakes.

Søren Kierkegaard, the first existentialist philosopher, famously wrote:

“Life can only be understood by looking backward, but it must be lived forwards.”
Søren Kierkegaard

By looking back at our past selves, we compare them not only to who we are today but to who we want to be tomorrow.

This process is called reflection.

Critical reflection is the craft of “bringing unconscious aspects of experience to conscious awareness, thereby making them available for conscious choice.” At its core, reflection focuses on challenging your takeaways from practical experiences, nudging you to explore better ways of achieving your goals.

Learning and growth are impossible without reflection. In the 1970s, David Kolb, an educational theorist, developed the “Cycle of Learning”, comprising four stages: Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experimentation.

According to Kolb, each new experience yields learning when all of its aspects are analyzed, assessed, and challenged — in theory (through reflection and conceptualization) and in practice (through experimentation). In turn, new learning informs new experiences, therefore completing the circle: act, analyze, challenge, plan, repeat.

Reflection takes center stage: it evaluates the outcomes of each concrete experience, informs future decisions, and leads to new discoveries. More importantly, reflection takes every aspect of learning into consideration: from actions and feelings to thoughts and observations.

Design is, by nature, reflective. Ambiguity requires designers to be flexible and analyze the situation on the go. We need to adapt to the ever-changing environment, learn from every failure, and constantly doubt our expertise. Rephrasing Donald Schön, an American philosopher, instead of applying experience to a situation, designers should be open to the “situation’s back talk.”

On the other hand, designers often reflect almost unconsciously, and their reflections may lack structure and depth, especially at first. Reflection is the process of “thinking analytically” about all elements of your practice, but structureless reflection is neither critical nor meaningful.

Luckily, a reflective framework exists to provide the necessary guidance.

Practicing Critical Reflection

In 1988, Professor Graham Gibbs published his book Learning by Doing, where he first introduced the Reflective Cycle — a framework represented by a closed loop of exercises, designed to help streamline the process of critical reflection.

In a nutshell, reflection comes down to describing the experience and your feelings towards it, analyzing your actions and thoughts, and devising an action plan. What sets it apart from a retrospective is continuity: the cycle is never complete, and every new iteration is built on the foundation of a previous one.

Imagine a situation: You are tasked with launching a survey and collecting at least 50 responses. However, a week later, you barely receive 15, despite having sent it to over a hundred people. You are angry and disappointed; your gut tells you the problem is in the research tool, and you are tempted to try again with another service.

Then, you take a deep breath and reflect on the experience.

Describe the situation

Begin by describing the situation in detail. Remember that a good description resembles a story, where every event is a consequence of past actions. Employ the “But & Therefore” rule of screenwriting and focus on drawing the connection between your actions and their outcomes.

First, provide a brief outline: What did you do, and how did it go?

Last week, I launched a research survey using Microsoft Forms, but despite my best efforts, it failed to collect a number of responses large enough to draw a meaningful conclusion. Upon analyzing the results, I noticed that a significant portion of the participants bounced from the question, which required them to choose a sector of a multi-layered pie chart.

Then, add some details: describe how you went about reaching your objective and what was your assumption at the time.

The technical limitations of Microsoft Forms made embedding a large image impossible, so I uploaded a low-resolution thumbnail and provided an external link (“Click to enlarge the image”). A large portion of participants, however, did not notice the link and couldn’t complete the task, stating that the image in the form was too small to comprehend. As a result, we have only collected 15 complete responses.

Recommendations

  • Avoid analyzing the experience at this stage. Focus on describing the situation in as many details as possible.
  • Disregard your experience and your gut urging you to solve a problem. Be patient, observant, and mindful.
  • Reflection doesn’t have to take place after the experience. In fact, you can reflect during the event or beforehand, trying to set the right expectations and plan your actions accordingly.

Describe Your Feelings

At this stage, focus on understanding your emotions before, during, and after the experience. Be mindful of the origin of your feelings and how they manifested and changed over time.

I was rather excited to see that Microsoft Forms offer a comprehensive set of survey tools. Moreover, I was captivated by the UI of the form, the option to choose a video background, and the feature that automatically calculated the time to complete the survey.

You will notice how describing your emotions helps you understand your motivations, beliefs, and convictions. In this particular example, by admitting to being enchanted by the platform’s interface, you essentially confirm that your decision was not a result of reasonable judgement or unbiased analysis.

I was somewhat disappointed to learn that I could not embed a large image, but I did not pay enough attention at the time.

This step is important: as your feelings change, so do your actions. A seemingly minor transition from “excitement” to “disappointment” is a signal that you have obviously overlooked. We will get back to it as we begin analyzing the situation.

Lastly, focus on your current state. How do you feel about your experience now when it’s over? Does any emotion stand out in particular?

I feel ashamed that I have overlooked such an obvious flaw and allowed it to impact the survey outcome.

Describing your emotions is, perhaps, the most challenging part of critical reflection. In traditional reflective practice, emotions are often excluded: we are told to focus on our actions, as though we could act purely rationally, and to disregard our feelings. In modern reflective practice, however, emotional reflection is highly encouraged.

Humans are obviously emotional beings. Our feelings determine our actions more than any facts ever could:

Our judgement is clouded by emotions, and understanding the connection between them and our actions is the key to managing our professional and personal growth.

Recommendations

  • Analyze your feelings constantly: before, during, and after the action. This will help you make better decisions, challenge your initial response, and be mindful of what drives you.
  • Don’t assume you are capable of making purely rational decisions, and don’t demand it of others, either. Emotions play an important role in decision-making, and you should strive to understand them, not to control them.
  • Don’t neglect your and your team’s feelings. When reflecting on or discussing your actions, talk about how they made you feel and why.

Evaluate And Analyze

Evaluation and analysis is the most critical step of the reflective process. During this stage, you focus not only on the impact of your actions but on the root cause, challenging your beliefs, reservations, and decisions.

W3C’s guidelines for complex images require providing a long description as an alternative to displaying a complex image. Despite knowing that, I believed that providing a link to a larger image would be sufficient and that the participants would either be accessing my survey on the web or zooming in on their mobile devices.

Switching the focus from actions to the underlying motivation complements the emotional reflection. It demonstrates the tangible impact of your feelings on your decisions: being positively overwhelmed blinded you, and you didn’t spend enough time empathizing with your participants to predict their struggles.

Moreover, I chose an image that was quite complex and featured many layers of information. I thought providing various options would help my participants make a better-informed decision. Unfortunately, it may have contributed to causing choice overload and increasing the bounce rate.

Being critical of your beliefs is what sets reflection apart from the retelling. Things we believe in shape and define our actions — some of them stem from our experience, and others are imposed onto us by communities, groups, and leaders.

Irving Janis, an American research psychologist, in his 1972 study, introduced the term “groupthink”, an “unquestioned belief in the morality of the group and its choices.” The pressure to conform and accept the authority of the group, and the fear of rejection, make us fall victim to numerous biases and stereotypes.

Critical reflection frees us from believing the myths by doubting their validity and challenging their origins. For instance, your experience tells you that reducing the number of options reduces the choice overload as well. Critical reflection, however, nudges you to dig deeper and search for concrete evidence.

However, I am not convinced that the abundance of options led to overwhelming the participants. In fact, I managed to find some studies that point out how “more choices may instead facilitate choice and increase satisfaction.”

Recommendations

  • Learn to disregard your experience and not rely on authority when making important decisions. Plan and execute your own experiments, but be critical of the outcomes as well.
  • Research alternative theories and methods that, however unfamiliar, may provide you with a better way of achieving your goals. Don’t hesitate to get into uncharted waters.

Draw A Conclusion And Set A Goal

Summarize your reflection and highlight what you can improve. Do your best to identify various opportunities.

As a result, 85% of the participants dropped out, which severely damaged the credibility of my research. Reflecting on my emotions and actions, I conclude that providing the information in a clear and accessible format could have helped increase the response rate.
Alternatively, I could have used a different survey tool that would allow me to embed large images: however, that might require additional budget and doesn’t necessarily guarantee results.

Lastly, use your reflection to frame a SMART (Specific, Measurable, Achievable, Relevant, and Time-Bound) goal.

Only focus on goals that align with your professional and personal aspirations. Lock every goal in time and define clear and unambiguous success criteria. This will help you hold yourself accountable in the future.

As my next step, I will research alternative ways of presenting complex information that are accessible and tool-agnostic, as this will provide more flexibility and a better user experience for survey participants and ensure better outcomes for my future research projects. I will launch a new survey in 14 days and reflect accordingly.

At this point, you have reached the conclusion of this reflective cycle. You no longer blame the tool, nor do you feel disappointed or irate. In fact, you now have a concrete plan that will lead you to pick up a new, relevant, and valuable skill. More than that, the next time the thrill takes you, you will stop to think about whether you are making a rational decision.

Recommendations

  • SMART is a good framework, but not every goal has to fit it perfectly. Some goals may have questionable relevancy, and others may have a flexible timeline. Make sure you are confident that your goals are attainable, and constantly reflect on your progress.
  • Challenge your goals and filter out those that don’t make practical sense. Are your goals overly ambiguous? How will you know when you have achieved your goal?
Daily Reflection

Reflection guides you by helping you set clear, relevant goals and assess progress. As you learn and explore, make decisions, and overcome challenges, reflection becomes an integral part of your practice, channels your growth, and informs your plans.

Reflection spans multiple cognitive areas (“reflective domains”), from discipline and motivation to emotional management. You can analyze how you learn new things and communicate with your peers, how you control your emotions, and stay motivated. Reflecting on different aspects of your practice will help you achieve well–balanced growth.

As you collect your daily reflections, note down what they revolve around, for example, skills and knowledge, discipline, emotions, communication, meaningful growth, and learning. In time, you may notice how some domains will accumulate more entries than others, and this will signal you which areas to focus more on when moving forward.

Finally, one thing that can make a continuous, goal-driven reflective process even more effective is sharing.

Keeping a public reflective journal is a great practice. It holds you accountable, requires the discipline to publish entries regularly, and demands quality reflection and impact analysis. It improves your writing and storytelling, helps you create more engaging content, and teaches you to work with an audience.

Most importantly, a public reflective journal connects you with like-minded people. Sharing your growth and reflecting on your challenges is a great way to make new friends, inspire others, and find support.

Conclusion

In Plato’s “Apology,” Socrates says, “I neither know nor think I know.” In a way, that passage embodies the spirit of a reflective mindset: admitting to knowing nothing and accepting that no belief can be objectively accurate is the first step to becoming a better practitioner.

Reflection is an argument between your former and future selves: a messy continuous exercise that is not designed to provide closure, but to ask more questions and leave many open for further discussions. It is a combination of occasional revelations, uncomfortable questions, and tough challenges.

Reflection is not stepping out of your comfort zone. It is demolishing it, tearing it apart, and rebuilding it with new, better materials.

Don’t stop reflecting.

Further Reading on Smashing Magazine

Gatsby Headaches And How To Cure Them: i18n (Part 2)

In Part 1 of this series, we peeked at how to add i18n to a Gatsby blog using a motley set of Gatsby plugins. They are great if you know what they can do, how to use them, and how they work. Still, plugins don’t always work great together since they are often written by different developers, which can introduce compatibility issues and cause an even bigger headache. Besides, we usually use plugins for more than i18n since we also want to add features like responsive images, Markdown support, themes, CMSs, and so on, which can lead to a whole compatibility nightmare if they aren’t properly supported.

How can we solve this? Well, when working with an incompatible, or even an old, plugin, the best solution often involves finding another plugin, hopefully one that provides better support for what is needed. Otherwise, you could find yourself editing the plugin’s original code to make it work (an indicator that you are in a bad place because it can introduce breaking changes), and unless you want to collaborate on the plugin’s codebase with the developers who wrote it, it likely won’t be a permanent solution.

But there is another option!


Note: Here is the Live Demo.

The Solution: Make Your Own Plugin!

Sure, that might sound intimidating, but adding i18n from scratch to your blog is not so bad once you get down to it. Plus, you gain complete control over compatibility and how it is implemented. That’s exactly what we are going to do in this article, specifically by adding i18n to the starter site — a cooking blog — that we created together in Part 1.

The Starter

You can go ahead and see how we made our cooking blog starter in Part 1 or get it from GitHub.

This starter includes a homepage, blog post pages created from Markdown files, and blog posts authored in English and Spanish.

What we will do is add the following things to the site:

  • Localized routes for the home and blog posts,
  • A locale selector,
  • Translations,
  • Date formatting.

Let’s go through each one together.

Create Localized Routes

First, we will need to create a localized route for each locale, i.e., route our English pages to paths with a /en/ prefix and the Spanish pages to paths with a /es/ prefix. So, for example, a path like my-site.com/recipes/mac-and-cheese/ will be replaced with localized routes, like my-site.com/en/recipes/mac-and-cheese/ for English and my-site.com/es/recipes/mac-and-cheese/ for Spanish.

In Part 1, we used the gatsby-theme-i18n plugin to automatically add localized routes for each page, and it worked perfectly. However, to make our own version, we first must know what happens underneath the hood of that plugin.

What gatsby-theme-i18n does is modify the createPages process to create a localized version of each page. However, what exactly is createPages?

How Plugins Create Pages

When running npm run build in a fresh Gatsby site, you will see in the terminal what Gatsby is doing, and it looks something like this:

success open and validate gatsby-configs - 0.062 s
success load plugins - 0.915 s
success onPreInit - 0.021 s
success delete html and css files from previous builds - 0.030 s
success initialize cache - 0.034 s
success copy gatsby files - 0.099 s
success onPreBootstrap - 0.034 s
success source and transform nodes - 0.121 s
success Add explicit types - 0.025 s
success Add inferred types - 0.144 s
success Processing types - 0.110 s
success building schema - 0.365 s
success createPages - 0.016 s
success createPagesStatefully - 0.079 s
success onPreExtractQueries - 0.025 s
success update schema - 0.041 s
success extract queries from components - 0.333 s
success write out requires - 0.020 s
success write out redirect data - 0.019 s
success Build manifest and related icons - 0.141 s
success onPostBootstrap - 0.164 s
⠀
info bootstrap finished - 6.932 s
⠀
success run static queries - 0.166 s — 3/3 20.90 queries/second
success Generating image thumbnails — 6/6 - 1.059 s
success Building production JavaScript and CSS bundles - 8.050 s
success Rewriting compilation hashes - 0.021 s
success run page queries - 0.034 s — 4/4 441.23 queries/second
success Building static HTML for pages - 0.852 s — 4/4 23.89 pages/second
info Done building in 16.143999152 sec

As you can see, Gatsby does a lot to ship your React components into static files. In short, it takes five steps:

  1. Source the node objects defined by your plugins in gatsby-config.js and the code in gatsby-node.js.
  2. Create a schema from the node objects.
  3. Create the pages from your /src/pages JavaScript files.
  4. Run the GraphQL queries and inject the data into your pages.
  5. Generate and bundle the static files into the public directory.

And, as you may notice, plugins like gatsby-theme-i18n intervene in step three, specifically when pages are created on createPages:

success createPages - 0.016 s

How exactly does gatsby-theme-i18n access createPages? Well, Gatsby exposes an onCreatePage event handler in gatsby-node.js for reading and modifying pages as they are being created.

Learn more about creating and modifying pages and the Gatsby building process over at Gatsby’s official documentation.

Using onCreatePage

The createPages process can be modified in the gatsby-node.js file through the onCreatePage API. In short, onCreatePage is a function that runs each time a page is created by Gatsby. Here’s how it looks:

// ./gatsby-node.js
exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  // etc.
};

It takes two parameters inside an object:

  • page holds the information of the page that’s going to be created, including its context, path, and the React component associated with it.
  • actions holds several methods for editing the site’s state. In the Gatsby docs, you can see all available methods. For this example we’re making, we will be using two methods: createPage and deletePage, both of which take a page object as the only parameter and, as you might have deduced, they create or delete the page.

So, if we wanted to add a new context to all pages, it would translate to deleting the pages being created and replacing them with new ones that have the desired context:

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;

  deletePage(page);

  createPage({
    ...page,
    context: {
      ...page.context,
      category: `vegan`,
    },
  });
};

Creating The Pages

Since we need to create English and Spanish versions of each page, it would translate to deleting every page and creating two new ones, one for each locale. And to differentiate them, we will assign them a localized route by adding the locale at the beginning of their path.

Let’s start by creating a new gatsby-node.js file in the project’s root directory and adding the following code:

// ./gatsby-node.js

const locales = ["en", "es"];

exports.onCreatePage = ({page, actions}) => {
  const {createPage, deletePage} = actions;

  deletePage(page);

  locales.forEach((locale) => {
    createPage({
      ...page,
      path: `${locale}${page.path}`,
    });
  });
};

Note: Restarting the development server is required to see the changes.

Now, if we go to http://localhost:8000/en/ or http://localhost:8000/es/, we will see all our content there. However, there is a big caveat. Specifically, if we head back to the non-localized routes — like http://localhost:8000/ or http://localhost:8000/recipes/mac-and-cheese/ — Gatsby will throw a runtime error instead of the usual 404 page provided by Gatsby. This is because we deleted our 404 page in the process of deleting all of the other pages!

Well, the 404 page wasn’t exactly deleted because we can still access it if we go to http://localhost:8000/en/404 or http://localhost:8000/es/404. However, we deleted the original 404 page and created two localized versions. Now Gatsby doesn’t know they are supposed to be 404 pages.

To solve it, we need to do something special to the 404 pages at onCreatePage.

Besides a path, every page object has another property called matchPath that Gatsby uses to match the page on the client side, and it is normally used as a fallback when the user reaches a non-existing page. For example, a page with a matchPath property of /recipes/* (notice the wildcard *) will be displayed on each route at my-site.com/recipes/ that doesn’t have a page. This is useful for making personalized 404 pages depending on where the user was when they reached a non-existing page. For instance, social media could display a usual 404 page on my-media.com/non-existing but display an empty profile page on my-media.com/user/non-existing. In this case, we want to display a localized 404 page depending on whether or not the user was on my-site.com/en/not-found or my-site.com/es/not-found.

The good news is that we can modify the matchPath property on the 404 pages:

// gatsby-node.js

const locales = [ "en", "es" ];

exports.onCreatePage = ({ page, actions }) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/) ? (locale === "en" ? `/*` : `/${locale}/*`) : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      matchPath,
    });
  });
};

This solves the problem, but what exactly did we do with matchPath? The value we are assigning to matchPath is asking:

  • Is the page path /404/?
    • No: Leave it as-is.
    • Yes:
      • Is the locale in English?
        • Yes: Set it to match any route.
        • No: Set it to only match routes on that locale.

This results in the English 404 page having a matchPath of /*, which will be our default 404 page; meanwhile, the Spanish version will have matchPath equal /es/* and will only be rendered if the user is on a route that begins with /es/, e.g., my-site.com/es/not-found. Now, if we restart the server and head to a non-existing page, we will be greeted with our usual 404 page.

Besides fixing the runtime error, doing so leaves us with the possibility of localizing the 404 page, which we didn’t achieve in Part 1 with the gatsby-theme-i18n plugin. That’s already a nice improvement we get by not using a plugin!

Querying Localized Content

Now that we have localized routes, you may notice that both http://localhost:8000/en/ and http://localhost:8000/es/ are querying English and Spanish blog posts. This is because we aren’t filtering our Markdown content on the page’s locale. We solved this in Part 1, thanks to gatsby-theme-i18n injecting the page’s locale on the context of each page, making it available to use as a query variable on the GraphQL query.

In this case, we can also add the locale into the page’s context in the createPage method:

// gatsby-node.js

const locales = [ "en", "es" ];

exports.onCreatePage = ({page, actions}) => {
  const { createPage, deletePage } = actions;
  deletePage(page);
  locales.forEach((locale) => {
    const matchPath = page.path.match(/^\/404\/$/) ? (locale === "en" ? `/*` : `/${locale}/*`) : page.matchPath;
    createPage({
      ...page,
      path: `${locale}${page.path}`,
      context: {
        ...page.context,
        locale,
      },
      matchPath,
    });
  });
};

Note: Restarting the development server is required to see the changes.

From here, we can filter the content on both the homepage and blog posts, which we explained thoroughly in Part 1. This is the index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And this is the {markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Now, if we head to http://localhost:8000/en/ or http://localhost:8000/es/, we will only see our English or Spanish posts, depending on which locale we are on.

Creating Localized Links

However, if we try to click on any recipe, it will take us to a 404 page since the links still point to the non-localized recipes. In Part 1, gatsby-theme-i18n gave us a LocalizedLink component that worked exactly like Gatsby’s Link but pointed to the current locale, so we will have to create a LocalizedLink component from scratch. Luckily, it’s pretty easy, but we have to make some preparations first.

Setting Up A Locale Context

For the LocalizedLink to work, we will need to know the page’s locale at all times, so we will create a new context that holds the current locale, then pass it down to each component. We can implement it on wrapPageElement in the gatsby-browser.js and gatsby-ssr.js Gatsby files. wrapPageElement is the component that wraps our entire page element. However, remember that Gatsby recommends setting context providers inside wrapRootElement, but in this case, only wrapPageElement can access the page’s context where the current locale can be found.

Let’s create a new directory at ./src/context/ and add a LocaleContext.js file in it with the following code:

// ./src/context/LocaleContext.js

import * as React from "react";
import { createContext } from "react";

export const LocaleContext = createContext();
export const LocaleProvider = ({ locale, children }) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

Next, we will set the page’s context at gatsby-browser.js and gatsby-ssr.js and pass it down to each component:

// ./gatsby-browser.js & ./gatsby-ssr.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";

export const wrapPageElement = ({ element }) => {
  const {locale} = element.props.pageContext;
  return <LocaleProvider locale={locale}>{element}</LocaleProvider>;
};

Note: Restart the development server to load the new files.

Creating LocalizedLink

Now let’s make sure that the locale is available in the LocalizedLink component, which we will create in the ./src/components/LocalizedLink.js file:

// ./src/components/LocalizedLink.js

import * as React from "react";
import { useContext } from "react";
import { Link } from "gatsby";
import { LocaleContext } from "../context/LocaleContext";

export const LocalizedLink = ({ to, children }) => {
  const locale = useContext(LocaleContext);
  return <Link to={`/${locale}${to}`}>{children}</Link>;
};

We can use our LocalizedLink at RecipePreview.js and 404.js just by changing the imports:

// ./src/components/RecipePreview.js

import * as React from "react";
import { LocalizedLink as Link } from "./LocalizedLink";
import { GatsbyImage, getImage } from "gatsby-plugin-image";

export const RecipePreview = ({ data }) => {
  const { cover_image, title, slug } = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};
// ./src/pages/404.js

import * as React from "react";
import { LocalizedLink as Link } from "../components/LocalizedLink";

const NotFoundPage = () => {
  return (
    <main>
      <h1>Page not found</h1>
      <p>
        Sorry 😔 We were unable to find what you were looking for.
        <br />
        <Link to="/">Go Home</Link>.
      </p>
    </main>
  );
};

export default NotFoundPage;
export const Head = () => <title>Not Found</title>;
Redirecting Users

As you may have noticed, we deleted the non-localized pages and replaced them with localized ones, but by doing so, we left the non-localized routes empty with a 404 page. As we did in Part 1, we can solve this by setting up redirects at gatsby-node.js to take users to the localized version. However, this time we will create a redirect for each page instead of a single redirect that covers all pages.

These are the redirects from Part 1:

// ./gatsby-node.js

exports.createPages = async ({ actions }) => {
  const { createRedirect } = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

These are the new localized redirects:

// ./gatsby-node.js

exports.onCreatePage = ({ page, actions }) => {
  // Create localized versions of pages...
  const { createRedirect } = actions;

  createRedirect({
    fromPath: page.path,
    toPath: `/en${page.path}`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: page.path,
    toPath: `/es${page.path}`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

// etc.

We won’t see the difference right away since redirects don’t work in development, but if we don’t create a redirect for each page, the localized 404 pages won’t work in production. We didn’t have to do this same thing in Part 1 since gatsby-theme-i18n didn’t localize the 404 page the way we did.

Changing Locales

Another vital feature to add is a language selector component to toggle between the two locales. However, making a language selector isn’t completely straightforward because:

  1. We need to know the current page’s path, like /en/recipes/pizza,
  2. Then extract the recipes/pizza part, and
  3. Add the desired locale, getting /es/recipes/pizza.

Similar to Part 1, we will have to access the page’s location information (URL, HREF, path, and so on) in all of our components, so it will be necessary to set up another context provider at the wrapPageElement function to pass down the location object through context on each page. A deeper explanation can be found in Part 1.

Setting Up A Location Context

First, we will create the location context at ./src/context/LocationContext.js:

// ./src/context/LocationContext.js

import * as React from "react";
import { createContext } from "react";

export const LocationContext = createContext();
export const LocationProvider = ({ location, children }) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

Next, let’s pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LocationProvider } from "./src/context/LocationContext";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>{element}</LocationProvider>
    </LocaleProvider>
  );
};

Creating An i18n Config

For the next step, it will come in handy to create a file with all our i18n details, such as the locale code or the local name. We can do it in a new config.js file in a new i18n/ directory in the root directory of the project.

// ./i18n/config.js

export const config = [
  {
    code: "en",
    hrefLang: "en-US",
    name: "English",
    localName: "English",
  },

  {
    code: "es",
    hrefLang: "es-ES",
    name: "Spanish",
    localName: "Español",
  },
];

The LanguageSelector Component

The last thing is to remove the locale (i.e., es or en) from the path (e.g., /es/recipes/pizza or /en/recipes/pizza). Using the following simple but ugly regex, we can remove all the /en/ and /es/ at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
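If you later add more locales, one possible approach (an untested sketch, not part of the original code) is to build the pattern from the i18n config so the selector doesn’t rely on a hand-written regex:

// Builds a pattern like /^\/(en|es)\/?/ from the configured locale codes
import { config } from "../../i18n/config";

const removeLocalePath = new RegExp(`^/(${config.map(({ code }) => code).join("|")})/?`);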

Now we can create our LanguageSelector component at ./src/components/LanguageSelector.js:

// ./src/components/LanguageSelector.js

import * as React from "react";
import { useContext } from "react";
// 1
import { config } from "../../i18n/config";
import { Link } from "gatsby";
import { LocationContext } from "../context/LocationContext";
import { LocaleContext } from "../context/LocaleContext";

export const LanguageSelector = () => {
// 2
  const locale = useContext(LocaleContext);
// 3
  const { pathname } = useContext(LocationContext);
// 4
  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");
// 5
  return (
    <div>
      { config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      }) }
    </div>
);
};

Let’s break down what is happening in that code:

  1. We get our i18n configurations from the ./i18n/config.js file instead of the useLocalization hook that was provided by the gatsby-theme-i18n plugin in Part 1.
  2. We get the current locale through context.
  3. We find the page’s current pathname through context, which is the part that comes after the domain (e.g., /en/recipes/pizza).
  4. We remove the locale part of the pathname using the regex pattern (leaving just recipes/pizza).
  5. We render a link for each available locale except the current one. So we check if the locale is the same as the page before rendering a common Gatsby Link to the desired locale.

Now, inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector, so it is available globally on the site at the top of all pages:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import { LocationProvider } from "./src/context/LocationContext";
import { LocaleProvider } from "./src/context/LocaleContext";
import { LanguageSelector } from "./src/components/LanguageSelector";

export const wrapPageElement = ({ element, props }) => {
  const { location } = props;
  const { locale } = element.props.pageContext;

  return (
    <LocaleProvider locale={locale}>
      <LocationProvider location={location}>
        <LanguageSelector />
        {element}
      </LocationProvider>
    </LocaleProvider>
  );
};
Localizing Static Content

The last thing to do would be to localize the static content on our site, like the page titles and headers. To do this, we will need to save our translations in a file and find a way to display the correct one depending on the page’s locale.

Page Body Translations

In Part 1, we used the react-intl package for adding our translations, but we can do the same thing from scratch. First, we will need to create a new translations.js file in the /i18n folder that holds all of our translations.

We will create and export a translations object with two properties: en and es, which will hold the translations as strings under the same property name.

// ./i18n/translations.js

export const translations = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
    not_found_page_title: "Page not found",
    not_found_page_body: "😔 Sorry, we were unable to find what you were looking for.",
    not_found_page_back_link: "Go Home",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
    not_found_page_title: "Página no encontrada",
    not_found_page_body: "😔 Lo siento, no pudimos encontrar lo que buscabas",
    not_found_page_back_link: "Ir al Inicio",
  },
};

We know the page’s locale from the LocaleContext we set up earlier, so we can load the correct translation using the desired property name.

The cool thing is that no matter how many translations we add, we won’t bloat our site’s bundle size since Gatsby builds the entire app into a static site.

// ./src/pages/index.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const IndexPage = ({ data }) => {
  const recipes = data.allMarkdownRemark.nodes;
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].index_page_title}</h1>
      <h2>{translations[locale].index_page_subtitle}</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// etc.
// ./src/pages/404.js

// etc.

import { LocaleContext } from "../context/LocaleContext";
import { useContext } from "react";
import { translations } from "../../i18n/translations";

const NotFoundPage = () => {
  const locale = useContext(LocaleContext);

  return (
    <main>
      <h1>{translations[locale].not_found_page_title}</h1>
      <p>
        {translations[locale].not_found_page_body} <br />
        <Link to="/">{translations[locale].not_found_page_back_link}</Link>.
      </p>
    </main>
  );
};

// etc.

Note: Another way we can access the locale property is by using pageContext in the page props.

Page Title Translations

We ought to localize the site’s page titles the same way we localized our page content. However, in Part 1, we used react-helmet for the task since the LocaleContext isn’t available at the Gatsby Head API. So, to complete this task without resorting to a third-party plugin, we will take a different path. We’re unable to access the locale through the LocaleContext, but as I noted above, we can still get it with the pageContext property in the page props.

// ./src/page/index.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].index_page_title}</title>;
};

// etc.
// ./src/page/404.js

// etc.

export const Head = ({pageContext}) => {
  const {locale} = pageContext;
  return <title>{translations[locale].not_found_page_title}</title>;
};

// etc.
Formatting

Remember that i18n also covers formatting numbers and dates depending on the current locale. We can use the Intl object from the JavaScript Internationalization API. The Intl object holds several constructors for formatting numbers, dates, times, plurals, and so on, and it’s globally available in JavaScript.

In this case, we will use the Intl.DateTimeFormat constructor to localize dates in blog posts. It works by creating a new Intl.DateTimeFormat object with the locale as its parameter:

const DateTimeFormat = new Intl.DateTimeFormat("en");

The new Intl.DateTimeFormat and other Intl instances have several methods, but the main one is the format method, which takes a Date object as a parameter.

const date = new Date();
console.log(new Intl.DateTimeFormat("en").format(date)); // 4/20/2023
console.log(new Intl.DateTimeFormat("es").format(date)); // 20/4/2023

The format method takes an options object as its second parameter, which is used to customize how the date is displayed. In this case, the options object has a dateStyle property to which we can assign "full", "long", "medium", or "short" values depending on our needs:

const date = new Date();

console.log(new Intl.DateTimeFormat("en", {dateStyle: "short"}).format(date)); // 4/20/23
console.log(new Intl.DateTimeFormat("en", {dateStyle: "medium"}).format(date)); // Apr 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "long"}).format(date)); // April 20, 2023
console.log(new Intl.DateTimeFormat("en", {dateStyle: "full"}).format(date)); // Thursday, April 20, 2023

In the case of our blog posts publishing date, we will set the dateStyle to "long".

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

// etc.

const RecipePage = ({ data, pageContext }) => {
  const { html, frontmatter } = data.markdownRemark;
  const { title, cover_image, date } = frontmatter;
  const { locale } = pageContext;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{new Intl.DateTimeFormat(locale, { dateStyle: "long" }).format(new Date(date))}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

// etc.
Conclusion

And just like that, we reduced the need for several i18n plugins to a grand total of zero. And we didn’t even lose any functionality in the process! If anything, our hand-rolled solution is actually more robust than the system of plugins we cobbled together in Part 1 because we now have localized 404 pages.

That said, both approaches are equally valid, but in times when Gatsby plugins are unsupported in some way or conflict with other plugins, it is sometimes better to create your own i18n solution. That way, you don’t have to worry about plugins that are outdated or left unmaintained. And if there is a conflict with another plugin, you control the code and can fix it. I’d say these sorts of benefits greatly outweigh the obvious convenience of installing a ready-made, third-party solution.

Gatsby Headaches And How To Cure Them: i18n (Part 1)

Internationalization, or i18n, is making your content understandable in other languages, regions, and cultures to reach a wider array of people. However, a more interesting question would be, “Why is i18n important?”. The answer is that we live in an era where hundreds of cultures interact with each other every day, i.e., we live in a globalized world. However, our current internet doesn’t satisfy its globalized needs.

Did you know that 60.4% of the internet is in English, but only 16.2% of the world speaks English?

Source: Visual Capitalist

Yes, it’s an enormous gap, and until perfect AI translators are created, the internet community must close it.

As developers, we must adapt our sites to support translations and formats for other countries, languages, and dialects, i.e., localize our pages. There are two main problems when implementing i18n on our sites.

  1. Storing and retrieving content.
    We will need files to store all our translations while not bloating our page’s bundle size and a way to retrieve and display the correct translation on each page.
  2. Routing content.
    Users must be redirected to a localized route with their desired language, like my-site.com/es or en.my-site.com. How are we going to create pages for each locale?

Fortunately, in the case of Gatsby and other static site generators, translations don’t bloat up the page bundle size since they are delivered as part of the static page. The rest of the problems are widely known, and there are a lot of plugins and libraries available to address them, but it can be difficult to choose one if you don’t know their purpose, what they can do, and if they are compatible with your existing codebase. That’s why in the following hands-on guide, we will see how to use several i18n plugins for Gatsby and review some others.

The Starter

Before showing what each plugin can do and how to use them, we first have to start with a base example. (You can skip this and download the starter here). For this tutorial, we will work with a site with multiple pages created from an array of data, like a blog or wiki. In my case, I chose a cooking blog that will initially only support English.

Start A New Project

To get started, let’s start a plain JavaScript Gatsby project without any plugins at first.

npm init gatsby
cd my-new-site

For this project, we will create pages dynamically from markdown files. To be able to read and parse them to Gatsby’s data layer, we will need to use the gatsby-source-filesystem and gatsby-transformer-remark plugins. Here you can see a more in-depth tutorial.

npm i gatsby-source-filesystem gatsby-transformer-remark

Inside our gatsby-config.js file, we will add and configure our plugins to read all the files in a specified directory.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-transformer-remark`,
  ],
};

Add Your Content

As you can see, we will use a new ./src/content/ directory where we will save our posts. We will create a couple of folders with each recipe’s content in markdown files, like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
│ ├── pages
│ ├── images

Each markdown file will have the following structure:

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "How to make mac and cheese"
cover_image:
    image: "./cover.jpg"
    alt: "Macaroni and cheese"
locale: "en"
---

Step 1
Lorem ipsum...

You can see that the first part of the markdown file has a distinct structure and is surrounded by --- on both ends. This is called the frontmatter and is used to save the file’s metadata. In this case, the post’s title, date, locale, etc.

As you can see, we will be using a cover.jpg file for each post, so to parse and use the images, we will need to install the gatsby-plugin-image, gatsby-plugin-sharp, and gatsby-transformer-sharp plugins (I know there are a lot 😅).

npm i gatsby-plugin-image gatsby-plugin-sharp gatsby-transformer-sharp

We will also need to add them to the gatsby-config.js file.

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `content`,
        path: `${__dirname}/src/content`,
      },
    },
    `gatsby-plugin-sharp`,
    `gatsby-transformer-sharp`,
    `gatsby-transformer-remark`,
    `gatsby-plugin-image`,
  ],
};

Querying Your Content

We can finally start our development server:

npm run develop

And go to http://localhost:8000/___graphql, where we can make the following query:

query Query {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

And get the following result:

{
  "data": {
    "allMarkdownRemark": {
      "nodes": [
        {
          "frontmatter": {
            "slug": "/mac-and-cheese",
            "title": "How to make mac and cheese",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/burritos",
            "title": "How to make burritos",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        },
        {
          "frontmatter": {
            "slug": "/pizza",
            "title": "How to make Pizza",
            "date": "2023-01-20",
            "cover_image": {
              /* ... */
            }
          }
        }
      ]
    }
  }
}

Now the data is available in Gatsby’s data layer, but to access it, we will need to run a query from the ./src/pages/index.js page.

Go ahead and delete all the boilerplate on the index page. Let’s add a short header for our blog and create the page query:

// src/pages/index.js

import * as React from "react";
import {graphql} from "gatsby";

const IndexPage = () => {
  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
    </main>
  );
};

export const indexQuery = graphql`
  query IndexQuery {
    allMarkdownRemark {
      nodes {
        frontmatter {
          slug
          title
          date
          cover_image {
            image {
              childImageSharp {
                gatsbyImageData
              }
            }
            alt
          }
        }
      }
    }
  }
`;

export default IndexPage;

Displaying Your Content

The result from the query is injected into the IndexPage component as a props property called data. From there, we can render all the recipes’ information.

// src/pages/index.js

// ...
import {RecipePreview} from "../components/RecipePreview";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>Welcome to my English cooking blog!</h1>
      <h2>Written by Juan Diego Rodríguez</h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

The RecipePreview component goes in a new ./src/components/ directory:

// ./src/components/RecipePreview.js

import * as React from "react";
import {Link} from "gatsby";
import {GatsbyImage, getImage} from "gatsby-plugin-image";

export const RecipePreview = ({data}) => {
  const {cover_image, title, slug} = data;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <Link to={`/recipes/${slug}`}>
      <h1>{title}</h1>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
    </Link>
  );
};

Creating Pages From Your Content

If we go to http://localhost:8000/, we will see all our recipes listed, but now we have to create a custom page for each recipe. We can do it using Gatsby’s File System Route API. It works by writing a GraphQL query inside the page’s filename, generating a page for each query result. In this case, we will make a new directory ./src/pages/recipes/ and create a file called {markdownRemark.frontmatter__slug}.js. This filename translates to the following query:

query MyQuery {
  allMarkdownRemark {
    nodes {
      frontmatter {
        slug
      }
    }
  }
}

And it will create a page for each recipe using its slug as the route.

Now we just have to create the post’s component to render all its data. First, we will use the following query:

query RecipeQuery {
  markdownRemark {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

This will query the first markdown file available in our data layer, but to specify the markdown file needed for each page, we will need to use variables in our query. The File System Route API injects the slug in the page’s context in a property called frontmatter__slug. When a property is in the page’s context, it can be used as a query variable under a $ followed by the property name, so the slug will be available as $frontmatter__slug.

query RecipeQuery($frontmatter__slug: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

The page’s component is pretty simple. We just get the query data from the component’s props. Displaying the title and date is straightforward, and the html can be injected into a p tag. For the image, we just have to use the GatsbyImage component exposed by the gatsby-plugin-image.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

The last thing is to use the Gatsby Head API to change the page’s title to the recipe’s title. This can be easily done since the query’s data is also available in the Head component.

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Summing it all up results in the following code:

// src/pages/recipes/{markdownRemark.frontmatter__slug}.js

import * as React from "react";
import {GatsbyImage, getImage} from "gatsby-plugin-image";
import {graphql} from "gatsby";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <p>{date}</p>
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

export const recipeQuery = graphql`
  query RecipeQuery($frontmatter__slug: String) {
    markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}}) {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
      html
    }
  }
`;

export default RecipePage;

export const Head = ({data}) => <title>{data.markdownRemark.frontmatter.title}</title>;

Creating Localized Content

With all this finished, we have a functioning recipe blog in English. Now we will use each plugin to add i18n features and localize the site (for this tutorial) for Spanish speakers. But first, we will make a Spanish version of each markdown file in ./src/content/, leaving a structure like the following:

├── src
│ ├── content
| | ├── mac-and-cheese
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── burritos
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
| | ├── pizza
| | | ├── cover.jpg
| | | ├── index.en.md
| | | ├── index.es.md
│ ├── pages
│ ├── images

Inside our new Spanish markdown files, the frontmatter will have the same structure, but its content will be translated into the new language and its locale property changed to es. However, it’s important to note that the slug field must be the same in each locale.
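For example, the frontmatter of ./src/content/mac-and-cheese/index.es.md could look like the following (the Spanish strings here are only illustrative translations):

---
slug: "mac-and-cheese"
date: "2023-01-20"
title: "Cómo hacer macarrones con queso"
cover_image:
    image: "./cover.jpg"
    alt: "Macarrones con queso"
locale: "es"
---

Paso 1
Lorem ipsum...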

gatsby-plugin-i18n

This plugin is displayed in Gatsby’s Internationalization Guide as its first option when implementing i18n routes. The purpose of this plugin is to create localized routes by adding a language code in each page filename, so, for example, a ./src/pages/index.en.js file would result in a my-site.com/en/ route.

I strongly recommend not using this plugin. It is outdated and hasn’t been updated since 2019, so it is disappointing to see it promoted as one of the main solutions for i18n in Gatsby’s official documentation. It also breaks the File System Route API, so you must use another method for creating pages, like the createPages function in the Gatsby Node API. Its only real use would be to create localized routes for certain pages, but considering that you must create a file for each page and each locale, it would be impossible to manage on even medium-sized sites. A 20-page site with support for five languages would need 100 files!

gatsby-theme-i18n

Another plugin for implementing localized routes is gatsby-theme-i18n, which will be pretty easy to use in our prior example.

We will first need to install the gatsby-theme-i18n plugin and the gatsby-plugin-react-helmet and react-helmet plugins to help add useful language metadata in our <head> tag.

npm install gatsby-theme-i18n gatsby-plugin-react-helmet react-helmet

Next, we can add it to the gatsby-config.js:

// ./gatsby-config.js

module.exports = {
  //...
  plugins: [
    //other plugins ...
    {
      resolve: `gatsby-theme-i18n`,
      options: {
        defaultLang: `en`,
        prefixDefault: true,
        configPath: require.resolve(`./i18n/config.json`),
      },
    },
  ],
};

As you can see, the plugin’s configPath option points to a JSON file. This file holds all the information necessary to add each locale. We will create it in a new ./i18n/ directory at the root of our project:

[
  {
    "code": "en",
    "hrefLang": "en-US",
    "name": "English",
    "localName": "English",
    "langDir": "ltr",
    "dateFormat": "MM/DD/YYYY"
  },

  {
    "code": "es",
    "hrefLang": "es-ES",
    "name": "Spanish",
    "localName": "Español",
    "langDir": "ltr",
    "dateFormat": "DD.MM.YYYY"
  }
]

Note: To see changes in the gatsby-config.js file, we will need to restart the development server.

And just as simple as that, we added i18n routes to all our pages. Let’s head to http://localhost:8000/es/ or http://localhost:8000/en/ to see the result.

Querying Localized Content

At first glance, you will see a big problem: both the Spanish and English pages list the posts from both locales because we aren’t filtering the recipes by locale. We can solve this by once again adding variables to our GraphQL queries. gatsby-theme-i18n injects the current locale into the page’s context, making it available as a query variable under the $locale name.

index page query:

query IndexQuery($locale: String) {
  allMarkdownRemark(filter: {frontmatter: {locale: {eq: $locale}}}) {
    nodes {
      frontmatter {
        slug
        title
        date
        cover_image {
          image {
            childImageSharp {
              gatsbyImageData
            }
          }
          alt
        }
      }
    }
  }
}

{markdownRemark.frontmatter__slug}.js page query:

query RecipeQuery($frontmatter__slug: String, $locale: String) {
  markdownRemark(frontmatter: {slug: {eq: $frontmatter__slug}, locale: {eq: $locale}}) {
    frontmatter {
      slug
      title
      date
      cover_image {
        image {
          childImageSharp {
            gatsbyImageData
          }
        }
        alt
      }
    }
    html
  }
}

Localizing Links

You will also notice that all Gatsby links are broken since they point to the non-localized routes instead of the new routes, so they will direct the user to a 404 page. To solve this, gatsby-theme-i18n exposes a LocalizedLink component that works exactly like Gatsby’s Link but points to the current locale. We just have to switch each Link component for a LocalizedLink.

// ./src/components/RecipePreview.js

+ import {LocalizedLink as Link} from "gatsby-theme-i18n";
- import {Link} from "gatsby";

//...

Changing Locales

Another vital feature to add will be a component to change from one locale to another. However, making a language selector isn’t completely straightforward. First, we will need to know the current page’s path, like /en/recipes/pizza, to extract the recipes/pizza part and add the desired locale, getting /es/recipes/pizza.

To access the page’s location information (URL, HREF, path, and so on) in all our components, we will need to use the wrapPageElement function available in the gatsby-browser.js and gatsby-ssr.js files. In short, this function lets you access the props used on each page, including a location object. We can set up a context provider with the location information and pass it down to all components.

First, we will create the location context in a new directory: ./src/context/.

// ./src/context/LocationContext.js

import * as React from "react";
import {createContext} from "react";

export const LocationContext = createContext();

export const LocationProvider = ({location, children}) => {
  return <LocationContext.Provider value={location}>{children}</LocationContext.Provider>;
};

As you can imagine, we will pass the page’s location object to the provider’s location attribute on each Gatsby file:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;

  return <LocationProvider location={location}>{element}</LocationProvider>;
};

Note: Since we just created the gatsby-ssr.js and gatsby-browser.js files, we will need to restart the development server.

Now the page’s location is available in all components through context, and we can use it in our language selector. We also have to pass down the current locale to all components. gatsby-theme-i18n exposes a useful useLocalization hook that lets you access the current locale and the i18n config. However, a caveat is that it can’t get the current locale inside Gatsby files like gatsby-browser.js and gatsby-ssr.js, only the i18n config.

Ideally, we would want to render our language selector in wrapPageElement so it is available on all pages, but we can’t use the useLocalization hook there. Fortunately, the wrapPageElement props argument also exposes the page’s context and, inside it, the current locale.

Let’s create another context to pass down the locale:

// ./src/context/LocaleContext.js

import * as React from "react";
import {createContext} from "react";

export const LocaleContext = createContext();

export const LocaleProvider = ({locale, children}) => {
  return <LocaleContext.Provider value={locale}>{children}</LocaleContext.Provider>;
};

And use it in our wrapPageElement function:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>{element}</LocaleProvider>
    </LocationProvider>
  );
};

The last thing is how to remove the locale (es or en) from the path (/es/recipes/pizza). Using the following simple but ugly regex, we can remove all the /en/ and /es/ at the beginning of the path:

/(\/e(s|n)|)(\/*|)/

It’s important to note that the regex pattern only works for the en and es combination of locales.
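If you add more locales later, one way to avoid hard-coding them is to build the pattern from the locale codes in the i18n config. Here is a minimal sketch, assuming config and pathname are the same values used in the language selector component below:

// Hypothetical, more locale-agnostic variant of the hard-coded regex above.
const codes = config.map(({code}) => code).join("|"); // e.g. "en|es"
const removeLocalePath = new RegExp(`^/(${codes})(/|$)`);
const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");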

Now we have to create our LanguageSelector.js:

// ./src/components/LanguageSelector.js

import * as React from "react";
import {useContext} from "react";
import {useLocalization} from "gatsby-theme-i18n";
import {Link} from "gatsby";
import {LocationContext} from "../context/LocationContext";
import {LocaleContext} from "../context/LocaleContext";

export const LanguageSelector = () => {
  const {config} = useLocalization();
  const locale = useContext(LocaleContext);
  const {pathname} = useContext(LocationContext);

  const removeLocalePath = /(\/e(s|n)|)(\/*|)/;
  const pathnameWithoutLocale = pathname.replace(removeLocalePath, "");

  return (
    <div>
      {config.map(({code, localName}) => {
        return (
          code !== locale && (
            <Link key={code} to={`/${code}/${pathnameWithoutLocale}`}>
              {localName}
            </Link>
          )
        );
      })}
    </div>
  );
};

Let’s break down what is happening:

  1. Get our i18n config through the useLocalization hook.
  2. Get the current locale through context.
  3. Get the page’s current pathname through context, which is the part that comes after the domain (like /en/recipes/pizza).
  4. We remove the locale part of the pathname using a regex pattern (leaving just recipes/pizza).
  5. We want to render a link for each available locale except the current one, so we will check if the locale is the same as the page before rendering a common Gatsby Link to the desired locale.

Now inside our gatsby-ssr.js and gatsby-browser.js files, we can add our LanguageSelector:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {LanguageSelector} from "./src/components/LanguageSelector";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <LanguageSelector />
        {element}
      </LocaleProvider>
    </LocationProvider>
  );
};

Redirecting Users

The last detail to address is that the non-i18n routes like http://localhost:8000/ or http://localhost:8000/recipes/pizza are now empty. To solve this, we can redirect the user to their desired locale using Gatsby’s createRedirect action in gatsby-node.js.

// ./gatsby-node.js

exports.createPages = async ({actions}) => {
  const {createRedirect} = actions;

  createRedirect({
    fromPath: `/*`,
    toPath: `/en/*`,
    isPermanent: true,
  });

  createRedirect({
    fromPath: `/*`,
    toPath: `/es/*`,
    isPermanent: true,
    conditions: {
      language: [`es`],
    },
  });
};

Note: Redirects only work in production! Not in the local development server.

With this, each page that doesn’t start with the English or Spanish locale will be redirected to a localized route. The wildcard * at the end of each route says it will redirect them to the same path, e.g., it will redirect /recipes/mac-and-cheese/ to /en/recipes/mac-and-cheese/. Also, it will check for the specified language in the request’s origin and redirect to the locale if available; else, it will default to English.

react-intl

react-intl is an internationalization library for any React app that can be used with Gatsby without any extra configuration. It provides a component for handling translations and many more for formatting numbers, dates, times, and so on, such as the following:

  • FormattedNumber,
  • FormattedDate,
  • FormattedTime.

It works by adding a provider called IntlProvider to pass down the current locale to all the react-intl components. Among others, the provider takes three main attributes:

  • messages
    An object with all your translations.
  • locale
    The current page’s locale.
  • defaultLocale
    The default page’s locale.

So, for example:

  <IntlProvider messages={{}} locale="es" defaultLocale="en">
    <FormattedNumber value={15000} />
    <br />
    <FormattedDate value={Date.now()} />
    <br />
    <FormattedTime value={Date.now()} />
    <br />
  </IntlProvider>

Will format the given values to Spanish and render:

15.000

23/1/2023

19:40

But if the locale attribute in IntlProvider was en, it would format the values to English and render:

15,000

1/23/2023

7:42 PM

Pretty cool and simple!

Using react-intl With Gatsby

To showcase how the react-intl works with Gatsby, we will continue from our prior example using gatsby-theme-i18n.

We first will need to install the react-intl package:

npm i react-intl

Secondly, we have to write our translations; in this case, we just have to translate the title and subtitle on the index.js page. To do so, we will create a file called messages.js in the ./i18n/ directory:

// ./i18n/messages.js

export const messages = {
  en: {
    index_page_title: "Welcome to my English cooking blog!",
    index_page_subtitle: "Written by Juan Diego Rodríguez",
  },
  es: {
    index_page_title: "¡Bienvenidos a mi blog de cocina en español!",
    index_page_subtitle: "Escrito por Juan Diego Rodríguez",
  },
};

Next, we have to set up the IntlProvider in the gatsby-ssr.js and gatsby-browser.js files:

// ./gatsby-ssr.js & ./gatsby-browser.js

import * as React from "react";
import {LocationProvider} from "./src/context/LocationContext";
import {LocaleProvider} from "./src/context/LocaleContext";
import {IntlProvider} from "react-intl";
import {LanguageSelector} from "./src/components/LanguageSelector";
import {messages} from "./i18n/messages";

export const wrapPageElement = ({element, props}) => {
  const {location} = props;
  const {locale} = element.props.pageContext;

  return (
    <LocationProvider location={location}>
      <LocaleProvider locale={locale}>
        <IntlProvider messages={messages[locale]} locale={locale} defaultLocale="en">
          <LanguageSelector />
          {element}
        </IntlProvider>
      </LocaleProvider>
    </LocationProvider>
  );
};

And use the FormattedMessage component with an id attribute holding the desired translation key name.

// ./src/pages/index.js

// ...
import {FormattedMessage} from "react-intl";

const IndexPage = ({data}) => {
  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...

And as simple as that, our translations will be applied depending on the current user’s locale. However, i18n isn’t only about translating text into other languages but also about adapting to the way numbers, dates, currency, and so on are formatted in the user’s region. In our example, we can format the date on each recipe page according to the current locale using the FormattedDate component.

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedDate} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedDate value={date} year="numeric" month="long" day="2-digit" />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};

//...

As you can see, we feed the component the raw date and specify how we want to display it. Then the component will automatically format it to the correct locale. And with the year, month, and day attributes, we can further customize how to display our date. In our example, the date 19-01-2023 will be formatted the following way:

English: January 19, 2023

Spanish: 19 de enero de 2023

If we want to add a localized string around the date, we can use react-intl arguments. Arguments are a way to add dynamic data inside react-intl messages, and they work by adding curly braces {} inside a message.

The arguments follow this pattern { key, type, format }, in which

  • key is the data to be formatted;
  • type specifies if the key is going to be a number, date, time, and so on;
  • format further specifies the format, e.g., if a date is going to be written like 10/05/2023 or October 5, 2023.

In our case, we will name our key postedOn, and it will be a date type in a long format:

// ./i18n/messages.js

export const messages = {
  en: {
    // ...
    recipe_post_date: "Written on {postedOn, date, long}",
  },
  es: {
    // ...
    recipe_post_date: "Escrito el {postedOn, date, long}",
  },
};

// ./src/pages/recipes/{markdownRemark.frontmatter__slug}.js

//...
import {FormattedMessage} from "react-intl";

const RecipePage = ({data}) => {
  const {html, frontmatter} = data.markdownRemark;
  const {title, cover_image, date} = frontmatter;
  const cover_image_data = getImage(cover_image.image.childImageSharp.gatsbyImageData);

  return (
    <main>
      <h1>{title}</h1>
      <FormattedMessage id="recipe_post_date" values={{postedOn: new Date(date)}} />
      <GatsbyImage image={cover_image_data} alt={cover_image.alt} />
      <p dangerouslySetInnerHTML={{__html: html}}></p>
    </main>
  );
};
//...

Note: For the date to work, we will need to create a new Date object with our date as its only argument.

Localizing The Page’s Title

The last thing you may have noticed is that the index page’s title isn’t localized. This isn’t a problem for the recipe pages since each one queries its already-localized title, but the index page doesn’t have a localized title to query. Solving this can be tricky for two reasons:

  1. You can’t use Gatsby Head API directly with react-intl since the IntlProvider doesn’t exist for components created inside the Head API.
  2. You can’t use the FormattedMessage component inside the title tag since it only allows a simple string value, not a component.

However, there is a workaround for both problems:

  1. We can use react-helmet (which we installed with gatsby-theme-i18n) inside the page component where the IntlProvider is available.
  2. We can use react-intl’s imperative API to get the messages as strings instead of the FormattedMessage component. In this case, the imperative API exposes a useIntl hook which returns an intl object, and the intl.messages property holds all our messages too.

So the index component would end up like this:

// ./src/pages/index.js

// ...
import {FormattedMessage, useIntl} from "react-intl";
import {Helmet} from "react-helmet";

const IndexPage = ({data}) => {
  const intl = useIntl();

  const recipes = data.allMarkdownRemark.nodes;

  return (
    <main>
      <Helmet>
        <title>{intl.messages.index_page_title}</title>
      </Helmet>
      <h1>
        <FormattedMessage id="index_page_title" />
      </h1>
      <h2>
        <FormattedMessage id="index_page_subtitle" />
      </h2>
      {recipes.map(({frontmatter}) => {
        return <RecipePreview key={frontmatter.slug} data={frontmatter} />;
      })}
    </main>
  );
};

// ...
react-i18next

react-i18next is a well-established library for adding i18n to React sites, and it brings the same features as react-intl, along with additional hooks and utilities. However, a crucial difference is that setting up react-i18next requires creating a wrapper plugin in gatsby-node.js, whereas react-intl works as soon as you install it, so I believe react-intl is the better option to use with Gatsby. That said, there are already plugins that set up the react-i18next library for you, such as gatsby-plugin-react-i18next and gatsby-theme-i18n-react-i18next.

Conclusion

The current state of Gatsby, and especially of its plugin ecosystem, is precarious, and its popularity declines each year, so it’s important to know how to handle it and which plugins to use if you want to work with Gatsby. Despite all this, I still believe Gatsby is a powerful tool that is still worth starting a new project with npm init gatsby.

I hope you found this guide useful and that you leave with a better grasp of i18n in Gatsby and less of a headache. In the next article, we will explore an in-depth solution to i18n by creating your own i18n plugin!

How to Use WordPress for Document Management or File Management

Do you want to use WordPress to manage your files and documents?

You may have spreadsheets, images, and other documents that you need to share with the rest of your team. By uploading these files to WordPress, you can easily collaborate with other people, or simply keep these documents within easy reach on the WordPress dashboard.

In this article, we’ll show you how to use WordPress as a document management or file management system.

How to use WordPress for document management and file management

Why Use WordPress to Manage Documents and Files?

It’s easy to lose track of documents when you use lots of different tools. For example, you might share drafts using a platform like Google Drive, track the edits with a tool like Asana, and communicate with editors and guest bloggers using Slack.

Juggling so many different tools makes it easy to lose track of a project. By using WordPress to manage your documents, you can keep everything in one place. This will save you time and effort, and make sure you never lose important files.

That said, let’s see how to use WordPress to manage your documents and files easily.

Setting Up Your WordPress Document Management System

The easiest way to set up a document management system in WordPress is by using WP Document Revisions. This plugin allows you to work on files with other people, store documents online, and see a complete revision history for each document.

The first thing you need to do is install and activate the plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, you’ll see a new ‘Documents’ option in the left-hand menu. To upload a document to WordPress, head over to Documents » All Documents. Then, click the ‘Add Document’ button.

Document library add new document

Next, you need to give the document a title. This should be something that helps you identify the document, especially if you share the WordPress dashboard with other people such as guest bloggers.

With that done, click the ‘Upload New Version’ button.

Upload a file or document to WordPress

This opens the ‘Upload Document’ popup, which works similarly to the standard WordPress media library.

You can either drag and drop your document onto the popup, or click ‘Select File’ and then choose a file from your computer.

Upload new document

WP Document Revisions will now upload the file to WordPress.

With that done, you can set the document’s workflow state. If you share the dashboard with other people, then this lets everyone know that the document is an initial draft, under review, in progress, or in some other state. This can help you avoid misunderstandings and improve the editorial workflow in multi-author WordPress blogs.

Simply open the dropdown under ‘Workflow State’ and then choose an option from the list.

Using WordPress for document management and file management

Next, you may want to add a description, which will help other users understand what the file is about.

To do this, simply type into the text editor. This section includes all the standard text formatting options, so you can add a link and create bullet points and numbered lists, as well as add bold and italic formatting and more.

Adding a description to WordPress documents

You may also want to add a document image, which can help users understand the file or provide extra information, similar to an index or appendix.

The process is similar to adding a featured image to WordPress posts and pages. Simply select ‘Set Document Image’ and then either choose an image from the media library or upload a new file from your computer.

Adding an image file to a document in WordPress

When you upload a file, WP Document Revisions marks you as the document’s owner.

To assign this file to someone else, just open the ‘Owner’ dropdown and choose a new user from the list. This can help keep your documents organized, especially if you’ve added lots of users and authors to your WordPress blog.

Changing a document's owner in the WordPress admin area

By default, WP Document Revisions will publish the file privately, so only logged-in users can see it.

Another option is to publish the document to your WordPress website, so people can access it without logging into the dashboard.

Even if you publish the document, it’s still a good idea to add a password by clicking on the ‘Edit’ link next to ‘Visibility.’

Making files and documents live on a WordPress website

Then, select ‘Password protected’ and type a secure password into the ‘Password’ field.

With that done, click on ‘OK’ to save your changes.

How to password protect a file in WordPress

Don’t want to use a password? Then you can follow the same process described above, but this time select ‘Public.’

No matter how you publish the file, WP Document Revisions will show its URL directly below the title. People can see the file by visiting this URL.

To create a custom permalink instead, click on the ‘Edit’ button.

Changing the URL permalink in WordPress

Then, type in the new URL and click ‘OK.’

When you’re happy with the information you’ve entered, click on the ‘Update’ button to save your settings.

Managing Document Revisions and Workflow States in WordPress

WP Document Revisions also has powerful version control features. This can help you collaborate with other people, by showing a document’s entire history. You can even open previous versions of the file, and restore an earlier version at any point.

Every time you upload or update a document, you can type a note into the Revision Summary.

Revision summary box

These notes will appear in the revision log towards the bottom of the screen, next to the name of the person who made the update.

If the update included a new file upload, then you’ll also see a ‘Revert’ link.

Revision log and restore

Simply click the link to restore this version of the document. Even if you revert to an earlier version of the file, the history will remain intact so you won’t lose any information.

Customizing and Creating Your Own Workflow States

Workflow states make it easy to see whether a document is an initial draft, in progress, or some other state. Similar to how you save blog posts as drafts or published, states can improve the editorial workflow.

WP Document Revisions comes with four default workflow states: final, in progress, initial draft, and under review. You may need to change these default states or add more. For example, if you’re creating a client portal, then you might add an ‘under client review’ state.

To change the workflow states, go to Documents » Workflow States. If you want to customize an existing state, then just hover over it and click on the ‘Edit’ button.

Customize existing workflow states

This opens an editor where you can change the name, slug, and description of the workflow state. This is similar to how you edit categories and tags in WordPress.

Once you’re done making changes, click the ‘Update’ button.

Edit existing workflow state

You can also add new workflow states.

In Documents » Workflow States, type in a new name, slug, and description. Then, click the ‘Add New Workflow State’ button.

Add new workflow state

Managing User Roles and Document Access in WordPress

WP Document Revisions assigns different document editing capabilities to people, based on their user role. For example, authors can’t edit documents published by other people or read privately-published documents.

The default permissions should be a good fit for most websites. However, if you want to review and change any of these settings, then the easiest way is by using Members. This plugin allows you to customize the permissions for every user role, and even create completely new roles.

The first thing you need to do is install and activate Members. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, go to the Members » Roles page to see all the different user roles on your WordPress website.

Changing who can access and edit documents in WordPress

Here, hover your mouse over the user role that you want to modify.

You can then go ahead and click on ‘Edit’ when it appears, which opens the user role editor.

How to edit user roles in WordPress

The left column shows all the different types of content such as reusable blocks and WooCommerce products.

In the left-hand menu, click on ‘Documents.’

Changing the file and document permissions

You’ll now see all the permissions this user role has, such as the ability to delete another person’s files or edit their own documents.

Simply click on the ‘Grant’ or ‘Deny’ checkbox for each permission.

Granting and denying permissions in WordPress

When you’re happy with the changes you’ve made, click on ‘Update.’

For a more detailed look at the Members plugin, please see our guide on how to add or remove capabilities to user roles in WordPress.

Saving custom user permissions in WordPress

After installing this plugin, you can even control who has access to each document. Simply head over to Documents » All Documents.

Here, hover over any file and click on the ‘Edit’ link when it appears.

Editing a document's settings in WordPress

Now, scroll to the new ‘Content Permissions’ box. Here, you’ll find a list of all the user roles on your WordPress blog or website.

Just check the box next to each role that needs to access this document.

Restricting document access based on user role

In this section, you’ll also see a Paid Memberships tab. This allows you to restrict access to paying members.

For more information, please see our ultimate guide to creating a WordPress membership site.

Paid membership settings

When you’re happy with the changes, click on ‘Update’ to save your settings.

We hope this article helped you learn how to use WordPress for document management or file management. You may also want to see our guide on how to create a free business email address and our expert pick of the best live chat software for small businesses.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


How to Add Click to Tweet Boxes in Your WordPress Posts

Do you want to add a ‘click to tweet’ box in your WordPress posts?

These simple boxes allow readers to share quotes from your posts with a single click. This makes them a great way to get more engagement on social media and drive extra traffic to your website.

In this article, we will show you how to add click to tweet boxes in your WordPress posts.

How to add click to tweet boxes in your WordPress posts

Why Add Click to Tweet Boxes in Your WordPress Posts?

A ‘click to tweet’ button makes it easy for readers to share quotes from your posts and pages.

Visitors can simply click a button to create a tweet that contains the quote, plus a link to the page or post where the quote is featured.

An example of a click to tweet box in WordPress

Depending on how the box is set up, the tweet may even tag your Twitter account.

When social media users see lots of people posting your content, they’re more likely to engage with you.

How to add a 'click to tweet' box to a WordPress website

In this way, click to tweet boxes can increase your blog traffic, get you more followers, and create a buzz around your brand on social media.

All of this can translate to more sales on your online store, new subscribers for your email newsletter, and much more.

With that being said, let’s see how you can easily add click to tweet boxes in your WordPress blog posts.

How to Add Click to Tweet Boxes in Your WordPress Posts

The easiest way to create a click to tweet box is by using Better Click To Tweet. This plugin allows you to add a quote box to any page or post using either a shortcode or a block.

The first thing you need to do is install and activate the Better Click To Tweet plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, head over to Settings » Better Click To Tweet to configure the plugin’s settings. In the ‘Your Twitter Handle’ field, type the account you want to tag in tweets that get shared.

Configuring the Better Click to Tweet WordPress plugin

There’s no authentication process, so you can add any Twitter account to the plugin’s settings, including an account that you don’t own.

You can also override this setting for individual click to tweet boxes, so it’s easy to tag lots of different accounts across your WordPress blog.

If you use custom short URLs, then make sure to check the box next to ‘Use short URL.’ This will force the plugin to show the WordPress shortlink instead of the full URL, which is important if you use tools to track link clicks in WordPress. Again, you can override this setting for individual click to tweet boxes.

With that done, click on ‘Save Changes.’

How to Add a Click to Tweet Box Using the WordPress Block

You can add a click to tweet box to any page or post using a shortcode or a block. Since it’s the easiest method, let’s start with the block.

Simply open the page or post where you want to create a box and then click on the ‘+’ button.

In the popup that appears, start typing in ‘Better Click to Tweet.’ When the right block shows up, click to add it to the page.

Adding a Better Click to Tweet button to WordPress

You can now type in the quote you want to use.

By default, the plugin shows a ‘Click to Tweet’ prompt, but you can replace this with your own messaging. For example, if you’re running a giveaway or contest in WordPress then you might encourage readers to quote the tweet, in order to enter the competition.

An example of a Twitter giveaway

To do this, simply click to select the block.

Then, type your custom messaging into the ‘Prompt’ field.

Customizing the quote tweet block

By default, the plugin will tag the account you added in its settings, but you can override this and tag a different account instead.

To make this change, simply type a different username into the ‘Twitter Username’ field.

Changing the linked Twitter account

Tagging your Twitter account is a great way to get more followers and engagement. However, if you simply want to get more visitors to your website then you can remove this tag, so the quoted tweet simply contains a link.

To do this, click to disable the ‘Include the username in Tweet?’ toggle.

By default, the plugin includes a link to the page or post where the quote box is featured. If you prefer, then you can use a different link instead. This can be useful if you want to get more visitors to a specific page, such as the landing page for a product or service that’s mentioned in the blog post.

To do this, simply type the URL into the ‘Custom URL’ field.

You can also mark the link as nofollow, which is useful if you’re linking to a third-party website such as a client or affiliate marketing partner.

Adding a custom link to a social media block

Another option is removing the link, so the tweet just has the tagged account. This is a good option if you simply want to get more engagement on Twitter, rather than drive people to your website.

To do this, click to disable the ‘Include URL in Tweet’ toggle.

Removing the URL from a click to tweet social media block

When you’re happy with how the quote box is set up, click on the ‘Publish’ or ‘Update’ button to make it live. Now if you visit your WordPress website, you’ll see the quote box in action.

How to Add a Click to Tweet Box Using a Shortcode

If you want to show the same quote on multiple pages, then adding and configuring each box separately can take a lot of time and effort. Instead, it may be easier to paste the same shortcode into multiple locations.

You can also add a box to your WordPress theme’s sidebar or similar section, using a shortcode. For more information on how to place the shortcode, please see our guide on how to add a shortcode in WordPress.

To start, you may want to use the following shortcode:

[bctt tweet="Quotable Tweet"]

This will create a tweet that tags the account linked in the plugin’s settings and includes a URL to the current page or post. Be sure to change the words “quotable tweet” in the shortcode to whatever message you want users to share.

If you don’t want to tag an account, then you can use the following instead:

[bctt tweet="Quotable Tweet" via="no"]

Want to include a different URL in the tweet? Here’s the shortcode:

[bctt tweet="Quotable Tweet." url="http://example.com"]

To remove the link completely, just set it to url="no". You can also mark the link as nofollow by adding the following to the shortcode: nofollow="yes".
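These attributes can also be combined in a single shortcode. For example, the following hypothetical combination skips the username, points to a custom URL, and marks the link as nofollow:

[bctt tweet="Quotable Tweet" via="no" url="http://example.com" nofollow="yes"]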

Bonus: How to Add a Twitter Feed in WordPress

A click to tweet box is a quick and easy way to get engagement on Twitter. However, there are other ways to promote your social media accounts including adding a feed that shows your recent tweets and updates automatically as you make new posts.

The easiest way to do this is by using Smash Balloon Twitter Feed, which is the best Twitter plugin for WordPress.

A Twitter feed, created using Smash Balloon

This plugin allows you to embed actual tweets in WordPress blog posts, so readers can easily comment, like, and retweet the original post.

You can quote your own tweets, or even tweets from a third party. For example, you might embed posts from an industry influencer, an advertising partner, or a happy customer.

For more information, please see our guide on how to add social media feeds to WordPress.

We hope this tutorial helped you learn how to add click to tweet boxes in your WordPress posts. You may also want to learn how to create a contact form in WordPress, or see our expert picks for the best WordPress social media plugins for WordPress.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


How to Install Template Kits in WordPress (Step-by-Step)

Do you want to install template kits in WordPress?

Designing a website can take a lot of time and effort, especially if you don’t have any previous experience. Thankfully, template kits allow you to apply a professional design across your entire WordPress website with the click of a button.

In this article, we will show you how you can easily install website template kits in WordPress.

How to install template kits in WordPress (step-by-step)

Why Install Template Kits in WordPress?

WordPress template kits are a collection of pre-designed templates, layouts, and other elements that allow you to create a professional-looking website without having to write code.

Template kits are designed to be used together, so you can simply install a kit and use the same design across your entire site.

A good template kit has designs for all the most common pages including an about page, a contact page, and a custom home page. They may also provide templates for areas that appear across multiple pages, such as a header and footer template.

There are some template kits that you can customize to suit any kind of website, similar to multi-purpose WordPress themes. Other templates are designed for a specific industry, such as fashion, venture marketing, and tech blogging kits.

No matter what template kit you use, with the right page builder plugin you can customize it to perfectly suit your business and branding.

With that being said, let’s see how you can design a beautiful website fast, by installing template kits in WordPress.

How to Choose the Best WordPress Template Kits

Template kits have many names, as some companies call them website kits, website templates, or WordPress starter templates. No matter what the name, the best place to find template kits is by installing a drag-and-drop page builder plugin.

SeedProd is the best page builder plugin with over 1 million users. It allows you to create a complete WordPress website without having to write a single line of code.

SeedProd comes with 90 ready-made blocks that you can add to any page, and over 180 templates that you can use to create landing pages, sales pages, and more.

The SeedProd drag and drop page builder

SeedProd also has a growing library of professional website kits that you can add to your site with a single click. After choosing a kit, you can customize every part of the design using SeedProd’s drag-and-drop editor.

Since it’s the fastest and easiest method, in this guide we’ll show you how to install template kits using SeedProd.

Step 1. Install a WordPress Page Builder Plugin

First, you need to install and activate the SeedProd plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Note: There is also a free version of SeedProd that allows you to create beautiful coming soon pages, maintenance pages, and more no matter what your budget. However, in this guide, we’ll be using the premium plugin as it has lots of different template kits. Just be aware that you’ll need a Pro plan or higher to use the template kits.

Upon activation, head over to SeedProd » Settings and enter your license key.

Adding a license to the SeedProd page builder plugin

You can find this information under your account on the SeedProd website. After entering the license key, click on the ‘Verify Key’ button.

With your license key active, you’re ready to install a template kit.

Step 2. Choose a WordPress Template Kit

SeedProd’s site kits work seamlessly with its WordPress theme builder, so head over to SeedProd » Theme Builder to get started. Here, click on the Theme Template Kits button.

The SeedProd theme builder feature

You’ll now see SeedProd’s website kit library.

To take a closer look at any template, simply hover your mouse over it and then click on the magnifying glass icon when it appears.

Previewing a website starter kit using SeedProd

This opens the template kit in a new tab.

Since it’s a complete website kit, you can see more pages and designs by clicking on the different links, buttons, and menu items.

An example of a website starter kit, installed using SeedProd

SeedProd has template kits for different industries and niches like restaurant websites, travel blogs, marketing consultancies, and more.

When you find a template kit you want to use, simply hover over it and then click on the checkmark icon when it appears.

Choosing a website template kit using SeedProd

SeedProd will now add all the different templates to the WordPress dashboard.

To take a look, go to SeedProd » Theme Builder. You may see slightly different options depending on the kit you’re using.

A list of template kit parts in the WordPress dashboard

SeedProd’s templates are disabled by default, so they won’t immediately change how your site looks by overriding your current WordPress theme.

Step 3. Customize Your Template Kit in WordPress

Before making the kit live, you need to replace the demo content. You may also want to change the kit’s branding to better match your own business. For example, you can add custom fonts, change the colors, add your own logo, and more.

The templates you see may vary depending on the kit. However, most kits have a header and footer template, so we’ll show you how to customize these templates as an example.

How to Customize a Header Template Kit in WordPress

The header is the first thing visitors see when they arrive at your site. With that in mind, it should introduce your brand and provide easy access to your site’s most important content.

To customize the header template, simply hover over it and then click on ‘Edit Design.’

Customizing a template kit using SeedProd

This loads the SeedProd editor, with the header template to the right of the screen.

On the left-hand side, you’ll see a menu with different options.

Adding blocks to a website template part

Most header templates come with a placeholder logo, so let’s start by replacing it. Simply click to select the placeholder logo and the left-hand menu will show all the settings you can use to customize the block.

Simply hover over the image in the left-hand menu and then click on the Select Image button when it appears.

Adding a logo to a website header template

Most template kits come with alternative logos and images that you can add to your WordPress website.

You can choose one of these images from the WordPress media library or upload a new file from your computer.

Adding a custom logo to a website starter kit

After replacing the logo, you can change its alignment and size, add image alt text, and more using the settings in the left-hand menu.

When you’re happy with how the logo looks, it’s a good idea to update the menu.

Most header templates come with a placeholder menu that you can easily customize by adding your own text and links. To get started, click to select the Nav Menu block.

Adding a navigation menu to a custom page design

You can either build a menu in SeedProd, or you can display any navigation menu you’ve created in the WordPress dashboard.

To build a new menu using SeedProd, simply hover your mouse over any menu item that you want to delete. Then, click on the trash can icon when it appears.

Removing items from a WordPress navigation menu

To add a new item to the menu, click on the ‘Add New Item’ button, which creates a new placeholder item.

Next, simply click on the item to expand it.

Creating a custom navigation menu using WordPress

You’ll now see some new settings where you can type in the text and link you want to use.

You can also set the link to open in a new tab, or you can mark it as no follow.

How to install a template kit using SeedProd

Simply repeat these steps to add more items to the menu. You can also rearrange items in the menu using drag and drop.

Another option is to simply display a menu you’ve already created in the WordPress dashboard. To do this, click on ‘WordPress Menu’ and choose a menu from the dropdown.

Showing a WordPress navigation menu

After making these changes, you may want to add more content to the header. For example, you might encourage visitors to follow you on social media by adding ‘like’ and ‘share’ buttons to the header.

In the left-hand menu, simply find the block you want to add and then drag it onto your layout.

Adding social icons to a website template kit

You can then customize the block using the settings in the left-hand menu.

When you’re happy with how the header template looks, click on ‘Save’ to store your settings.

Publishing a website starter kit

How to Customize the Footer Template in WordPress

The footer is the perfect place to add useful information such as a dynamic copyright date or your phone number. You can also link to important content like your contact form, blog, and social media profiles.

With that in mind, most SeedProd template kits come with a footer template. To edit this template, go to SeedProd » Theme Builder and then hover over the ‘footer’ template. When the ‘Edit Design’ link appears, give it a click.

How to customize a footer template

This opens the footer template in the SeedProd editor. You can now delete unwanted blocks, replace the placeholder content, and add more blocks following the exact same process described above.
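
One small touch worth adding here is the dynamic copyright date mentioned above. If your kit doesn’t include a ready-made copyright block, a tiny script added via a custom HTML or code block (if your builder offers one) can keep the year current automatically. This is a generic sketch rather than a SeedProd-specific feature, and it assumes your footer markup contains an element with the hypothetical id copyright-year:

// Assumes the footer contains something like: © <span id="copyright-year"></span> Your Company
var yearSpan = document.getElementById('copyright-year')
if (yearSpan) {
  // Replace the placeholder with the current year so the footer never goes stale
  yearSpan.textContent = new Date().getFullYear()
}

With this in place, the footer year updates itself every January without anyone having to edit the template.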

Many business owners use the footer to show their contact information, such as their business email address. However, if you’re using WPForms then you can easily add a contact form to your website’s footer. This allows people to contact you from any page or post.

Adding a contact form to a template kit using WPForms

If you’re looking for more ideas, then you can see our checklist of things to add to the footer of your WordPress website.

Most footer templates come with placeholder text that you can replace with your own content. Simply click to select each text box and then type your messaging into the small text editor that appears.

The editor has all the standard formatting options, so you can highlight important text or add links that will appear across your WordPress blog or website.

Adding text to a footer template

Many footer templates come with a ready-made Nav Menu block that contains some placeholder links.

You can replace these dummy menu items with links to your own content by following the same process described above. For example, you might include links to your site’s privacy policy, blog, online store, and other important content.

Adding a custom navigation menu to a website footer template

When you’re happy with how the footer looks, click on the Save button to store your changes.

Step 4. Edit Your Global Template Kit Settings

Often, you’ll want to change the template’s default fonts, backgrounds, colors, and more to match your branding. Instead of making these changes to each template, you can save time by editing the kit’s Global CSS settings.

In your WordPress dashboard, go to SeedProd » Theme Builder and hover over the Global CSS template. You can then click on the ‘Edit Design’ link when it appears.

Editing the global CSS settings

In the left-hand menu, SeedProd lists all the different elements you can change, such as the fonts, forms, layout, and more.

To see what changes you can make, simply click any option.

Changing a template kit's global CSS settings

You can now adjust its settings. For example, you can change the colors used for the kit’s headers, paragraph text, links, and more.

SeedProd will automatically apply these changes across the entire template kit.

Changing the colors in a website template kit

When you’re happy with the changes you’ve made, click on the ‘Save’ button.

Step 5. Enabling Your SeedProd Template Kit

You can now customize every SeedProd template by following the same process described above. When you’re happy with how the templates are set up, it’s time to make the kit live.

In the WordPress dashboard, go to SeedProd » Theme Builder and click on the ‘Enable SeedProd Theme’ toggle so that it shows ‘Yes.’

Enabling a template kit in WordPress

Now, if you visit your WordPress website you’ll see the new design live.

We hope this article helped you install template kits in WordPress. You may also want to see our guide on how to choose the best web design software, or see our expert pick of the must-have WordPress plugins.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

Instantly Turn Keywords Into SEO Links: This SmartCrawl Tool Automates It For You

SmartCrawl’s Automatic Linking feature allows you to automatically turn specific keywords or phrases into internal or external links within your site’s content, saving you time and effort, making interlinking a breeze, and boosting your website’s overall SEO.

Interlinking web pages is essentially what makes the web the web. Smart and effective interlinking of web pages will improve your site’s SEO and play a crucial role in increasing the visibility and success of your website.

In this comprehensive guide, we’ll cover practical uses of the SmartCrawl plugin’s Automatic Linking feature and how it can help automate an important aspect of your website’s SEO.

We’ll cover the following:

The Importance of Linking Web Content For Improved SEO

Internal and external linking are essential components of SEO that help to improve your site’s visibility and ranking on search engines and user navigation.

Internal linking refers to the practice of linking pages within the same website. It is an effective way to guide users through your website, make it easier for search engines to crawl and index pages, and establish a hierarchical page structure.

Internal linking can also help distribute link equity throughout a website, which can improve the ranking of individual pages.

For more details, see our comprehensive guide to internal link building.

External linking, on the other hand, involves linking to other websites or pages that are not within the same domain.

External linking can provide additional information or resources for users, and can also help establish your website’s authority and relevance in your particular field or industry.

What is SmartCrawl’s Automatic Linking and How Does It Work?

SmartCrawl’s powerful Automatic Linking feature automates your site’s internal and external page linking and improves your site’s SEO.

It works by allowing you to select the post types for which you want to enable auto-linking and the post types or taxonomies that can be linked to.

This means you can choose which areas of your website to apply the automatic linking feature to and select exactly which post type(s) the plugin should automatically insert links in. Every post type active on your site is then available for keyword linking.

For example, let’s say you run a web development business and offer a website building, web hosting, and web maintenance package called “Total Business Care Service,” which has an information page where clients can purchase the service, and you want the service name linked to that page wherever it appears.

Additionally, let’s say you run a blog on your site where you post articles about WordPress-related topics and want to link to a WordPress news blog any time you type the words “WordPress news.”

Without the SmartCrawl plugin, you would have to manually create these links each time you write the text in your pages and posts.

With SmartCrawl, you can enable and set up automatic linking in a few simple steps, and save yourself a bunch of time by letting the plugin do the work of linking the text to your internal and external pages automatically.

SmartCrawl - Automatic content linking
Let SmartCrawl automatically link to the internal and external pages you specify for certain keywords.

Step-By-Step Guide to Using SmartCrawl’s Automatic Linking

Using SmartCrawl’s Automatic Linking feature is super easy.

First, make sure that you have installed and activated SmartCrawl Pro. Automatic linking is not available in the free version of the plugin.

Next, go to SmartCrawl > Settings > Advanced Tools. Here is where you’ll find the Automatic Links section.

Note: If this is the first time you are using this feature, click the Activate button.

SmartCrawl - Advanced Tools: Activate Automatic Linking.
Activate SmartCrawl’s Automatic Linking feature.

This will enable the functionality on your website and display the Automatic Linking screen.

SmartCrawl's Automatic Linking Screen.
SmartCrawl’s Automatic Linking Screen.

The feature has four main tabs that let you configure exactly how you want the plugin to handle the automatic linking of keywords on your site.

Let’s go briefly through each tab:

Post Types

This tab lets you choose which areas of your website to apply the automatic linking feature to. Use it to select the post types that you want to insert links from.

SmartCrawl: Automatic Linking - Post Types tab.
Every active post type on your site is available for keyword linking.

After you select the post types to insert links in, a “Link to” field will appear. Use the dropdown menu to select the post types or taxonomies to link to.

SmartCrawl: Automatic Linking - Post Types tab - Link to field dropdown menu.
Select the post types or taxonomies to link to.

You have now specified the areas of your website where automatic linking will apply.

Remember to save your settings before continuing.

Custom Links

This section is where you take control of your linking strategy.

Add any keywords or key phrases that you want to automatically link to specific URLs (internal or external) here.

SmartCrawl: Automatic Linking - Custom Links tab.
Specify your automatic internal and external links in the Custom Links tab.

SmartCrawl will now automatically create links throughout your site using the keywords and URLs you have specified.

SmartCrawl: Automatic Linking example
SmartCrawl automatically creates the links in your content.

Exclusions

Use the Exclusions tab to ensure that certain keywords or URLs will not be linked to.

SmartCrawl: Automatic Linking - Exclusions tab.
Exclude keywords and URLs from being automatically linked.

Settings

The Settings tab lets you specify global settings for your automatic linking strategy when using SmartCrawl.

For example, you can set minimum title and taxonomy lengths, maximum limits for links, allow auto-links to empty taxonomies, prevent linking in heading tags, and even process RSS feeds.

Other options include case-sensitive matching, preventing duplicate links, opening links in new tabs, adding the nofollow attribute to autolinks, preventing linking on no-index pages and in image captions, and controlling caching of autolinked content.

SmartCrawl: Automatic Linking - Settings tab.
The Settings tab puts you in complete control of SmartCrawl’s automatic linking feature.

With all of these options, you can customize SmartCrawl to meet your specific linking needs.

Learn about all of the options and settings described above in our SmartCrawl Automatic Linking documentation.

Practical Examples of SmartCrawl’s Automatic Linking Usage

So, what are some practical uses of SmartCrawl’s automatic linking feature?

Let’s go through some examples:

Example #1 – Boost Internal Linking to Blog Posts

Suppose you’ve written a great blog post about WordPress themes. You can set up an automatic linking rule in SmartCrawl that targets the keyword “WordPress themes” and automatically links it to this article across all of your existing and new posts.

Example #2 – Automatically Link to Top Product Pages

If you have an eCommerce store, you can use automatic linking to turn your top-selling items’ names into links that direct users to the relevant product pages on your site.

This will help to drive more traffic to your popular and best-selling products and boost sales.

Example #3 – Increase Visibility of Pillar Content

Use automatic linking to increase the visibility and boost the rankings of your cornerstone or pillar content by turning the keywords you are targeting for this content into sitewide links.

For example, suppose you have a page with a step-by-step tutorial and detailed instructions that your customers need to follow to use your XYZ Widget correctly. You can create an automatic linking rule in SmartCrawl that targets the keyword “XYZ Widget instructions” and automatically links it to this important page.

Example #4 – Cross-Promote Related Content

Use automatic linking to connect topic-related blog posts, guides, or how-to articles on your site, providing users with easy access to related information and keeping them engaged longer with your content and your site.

Example #5 – Link to Useful External Resources

Have you got a relevant resource on an external site that your readers might benefit from? Turn specific keywords mentioning these resources into links that will direct users to those external pages.

Example #6 – Boost Affiliate Marketing Revenue

Similar to the previous example, if you promote affiliate products or services on your site, you can use SmartCrawl’s automatic linking feature to create anchor text links connecting specific brand names or product/service categories to their respective destination pages or sites with your embedded affiliate link.

SmartCrawl Automatic Linking example.
SmartCrawl’s Automatic Linking feature is perfect for affiliate marketing!

Sitewide Automatic Linking – Faster Than Blinking

SmartCrawl’s powerful and time-saving Automatic Linking feature helps you take your linking strategy to the next level while simultaneously improving your site’s SEO and user navigation experience.

Check out our documentation section to learn more about using the automatic links feature or access SmartCrawl Pro and start boosting your traffic, search rankings, and sales conversions today by becoming a WPMU DEV member.

JavaScript Snippets For Better UX and UI

JavaScript can be used to significantly improve the user experience (UX) and user interface (UI) of your website. In this article, we will discuss some JavaScript snippets that you can use to boost the UX and UI of your website.

Smooth Scrolling

Smooth scrolling is a popular UX feature that makes scrolling through web pages smoother and more fluid. With this feature, instead of abruptly jumping to the next section of the page, the user will be smoothly transitioned to the next section.

To add smooth scrolling to your website, you can use the following JavaScript code:

$('a[href*="#"]').on('click', function(e) {
  e.preventDefault()

  $('html, body').animate(
    {
      scrollTop: $($(this).attr('href')).offset().top,
    },
    500,
    'linear'
  )
})

This code will create a smooth scrolling effect whenever the user clicks on a link that includes a # symbol in the href attribute. The code targets all such links and adds a click event listener to them. When the user clicks on a link, the code will prevent the default action of the link (i.e., navigating to a new page) and instead animate the page to scroll smoothly to the section of the page specified by the link’s href attribute.
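
The snippet above depends on jQuery. If you’d rather not load the library just for this effect, the same behavior can be achieved with the browser’s native scrollIntoView method. Here’s a minimal vanilla JavaScript sketch that targets same-page anchor links:

// Targets same-page anchor links only, e.g. <a href="#features">
document.querySelectorAll('a[href^="#"]').forEach(function(link) {
  link.addEventListener('click', function(e) {
    var hash = this.getAttribute('href')
    // Ignore bare "#" links that don't point to a real section
    if (hash.length > 1) {
      var target = document.querySelector(hash)
      if (target) {
        e.preventDefault()
        // Let the browser handle the smooth animation
        target.scrollIntoView({ behavior: 'smooth' })
      }
    }
  })
})

If you don’t need to support older browsers, the CSS rule scroll-behavior: smooth on the html element achieves a similar result with no JavaScript at all.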

Dropdown Menus

Dropdown menus are a common UI element that can help to organize content and improve the navigation of your website. With JavaScript, you can create dropdown menus that are easy to use and intuitive for your users.

To create a basic dropdown menu with JavaScript, you can use the following code:

var dropdown = document.querySelector('.dropdown')
var dropdownToggle = dropdown.querySelector('.dropdown-toggle')
var dropdownMenu = dropdown.querySelector('.dropdown-menu')

dropdownToggle.addEventListener('click', function() {
  if (dropdownMenu.classList.contains('show')) {
    dropdownMenu.classList.remove('show')
  } else {
    dropdownMenu.classList.add('show')
  }
})

This code will create a simple dropdown menu that can be toggled by clicking on a button with the class dropdown-toggle. When the button is clicked, the code will check if the dropdown menu has the class show. If it does, the code will remove the class, hiding the dropdown menu. If it doesn’t, the code will add the class, showing the dropdown menu.
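
As a side note, the if/else check above can be collapsed into a single call with classList.toggle, which adds the class when it’s missing and removes it when it’s present. Reusing the same dropdownToggle and dropdownMenu variables from the snippet above:

dropdownToggle.addEventListener('click', function() {
  // toggle() adds 'show' if absent and removes it if present
  dropdownMenu.classList.toggle('show')
})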

Modal Windows

Modal windows are another popular UI element that can be used to display important information or to prompt the user for input. With JavaScript, you can create modal windows that are responsive, accessible, and easy to use.

To create a basic modal window with JavaScript, you can use the following code:

var modal = document.querySelector('.modal')
var modalToggle = document.querySelector('.modal-toggle')
var modalClose = modal.querySelector('.modal-close')

modalToggle.addEventListener('click', function() {
  modal.classList.add('show')
})

modalClose.addEventListener('click', function() {
  modal.classList.remove('show')
})

This code will create a modal window that can be toggled by clicking on a button with the class modal-toggle. When the button is clicked, the code will add the class show to the modal window, displaying it on the page. When the close button with the class modal-close is clicked, the code will remove the show class, hiding the modal window.
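
To make the modal a little more accessible, you may also want to let users dismiss it with the Escape key. Building on the modal variable from the snippet above, a minimal addition could look like this:

document.addEventListener('keydown', function(e) {
  // Close the modal when Escape is pressed while it's visible
  if (e.key === 'Escape' && modal.classList.contains('show')) {
    modal.classList.remove('show')
  }
})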

Sliders

Sliders are a popular UI element that can be used to display images or other types of content in a visually appealing and engaging way. With JavaScript, you can create sliders that are easy to use and customizable to fit your website’s design.

To create a basic slider with JavaScript, you can use the following code:

var slider = document.querySelector('.slider')
var slides = slider.querySelectorAll('.slide')
var prevButton = slider.querySelector('.prev')
var nextButton = slider.querySelector('.next')
var currentSlide = 0

function showSlide(n) {
  slides[currentSlide].classList.remove('active')
  slides[n].classList.add('active')
  currentSlide = n
}

prevButton.addEventListener('click', function() {
  var prevSlide = currentSlide - 1
  if (prevSlide < 0) {
    prevSlide = slides.length - 1
  }
  showSlide(prevSlide)
})

nextButton.addEventListener('click', function() {
  var nextSlide = currentSlide + 1
  if (nextSlide >= slides.length) {
    nextSlide = 0
  }
  showSlide(nextSlide)
})

This code will create a slider that can be navigated by clicking on buttons with the classes prev and next. The code uses the showSlide function to show the current slide and hide the previous slide whenever the slider is navigated.
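
If you’d also like the slider to respond to the keyboard, you can wire the arrow keys to the existing buttons. This optional addition reuses the prevButton and nextButton variables from the snippet above:

document.addEventListener('keydown', function(e) {
  // Left and right arrow keys trigger the existing slider buttons
  if (e.key === 'ArrowLeft') {
    prevButton.click()
  } else if (e.key === 'ArrowRight') {
    nextButton.click()
  }
})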

Form Validation

Form validation is an essential UX feature that can help to prevent errors and improve the usability of your website’s forms. With JavaScript, you can create form validation that is responsive and user-friendly.

To create form validation with JavaScript, you can use the following code:

var form = document.querySelector('form')

form.addEventListener('submit', function(e) {
  e.preventDefault()
  var email = form.querySelector('[type="email"]').value
  var password = form.querySelector('[type="password"]').value

  if (!email || !password) {
    alert('Please fill in all fields.')
  } else if (password.length < 8) {
    alert('Your password must be at least 8 characters long.')
  } else {
    alert('Form submitted successfully!')
  }
})

This code will validate a form’s email and password fields when the form is submitted. If either field is empty, the code will display an alert message prompting the user to fill in all fields. If the password field is less than 8 characters long, the code will display an alert message prompting the user to enter a password that is at least 8 characters long. If the form passes validation, the code will display an alert message indicating that the form was submitted successfully.
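
If you’d prefer the browser’s built-in inline error messages over alert() boxes, the Constraint Validation API can do most of this work for you, provided the fields carry the right HTML attributes (for example required, type="email", and minlength="8"). The sketch below assumes the form has the novalidate attribute so the script decides when the messages appear:

var form = document.querySelector('form')

form.addEventListener('submit', function(e) {
  // reportValidity() shows the browser's native validation messages
  // and returns false if any field fails its constraints
  if (!form.reportValidity()) {
    e.preventDefault()
  }
})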

In conclusion, JavaScript is a powerful tool that can be used to enhance the UX and UI of your website. By using these JavaScript snippets, you can create a more engaging and user-friendly experience for your users. However, it is important to use these JavaScript snippets wisely and sparingly to ensure that they do not negatively impact the performance of your website.

How To Design An Effective User Onboarding Flow

This article is a sponsored by Feathery.io

What is it that causes users to give up on an app before ever stepping inside or really giving it a fair chance? It could be the onboarding process.

Sometimes half the battle in getting users to adopt the products you build is simply getting them inside the product so they can see how awesome it is. With a high-quality user onboarding flow, you can easily increase conversions, user satisfaction, and user activation within the product.

In this post, we’re going to look at what it takes to design an effective user onboarding flow that maximizes how many engaged users you get inside your web app. We’ll be using Feathery — a powerful, no-code onboarding form builder — to demonstrate how to do this.

What Can You Accomplish With An Onboarding Process?

The onboarding flow is a multi-step process that helps users get started with a new SaaS product. In most cases, the flow appears right after signup and bridges the user into the app. And for more complex SaaS, onboarding flows can appear as tooltips and guided tours inside the product.

With a particularly effective onboarding flow, you’ll be able to:

  • Increase conversion rates.
  • Reduce user abandonment.
  • Increase user activation within the product.
  • Improve overall user satisfaction.
  • Decrease churn.

Ultimately, your user onboarding process sets the tone for what’s to come.

You can personalize the product based on user responses, demonstrate how easy it is to get started, or reinforce the overall value of adopting the product into their workflow.

How To Design A Great User Onboarding Flow With Feathery

As I break down the steps below, I’ll demonstrate how to use Feathery to create an effective user onboarding flow similar to Duolingo’s popular onboarding flow.

Note: After you sign up for Feathery, you’ll have a short onboarding process of your own to complete. In the end, you can choose to use a template or start with a blank canvas. For the purposes of this walkthrough, I’ll be starting from scratch.

Step 1: Design The Container Style

Although each step in the flow will contain different content, you want the general design and format to remain consistent throughout. Not only should this design improve the usability of the onboarding process, but it should give users an idea of what your app will look like.

To make changes to the appearance of your onboarding screens, click on the container in Step 1. A “Style” panel will open on the right.

From this panel, you can modify the following:

  • Layout,
  • Background color or imagery,
  • Corner radius,
  • Shadows,
  • Hover-triggered transformations,
  • Click-triggered transformations.

Note: If you plan on making extensive customizations to the style of your container, I’d recommend deleting any subsequent steps that were included with your template if you’re using one. It’ll be much easier if you duplicate this first container for subsequent steps and save yourself time replicating the styles. Alternatively, Feathery offers the ability to build and save reusable styles and components using themes.

Step 2: Create The Steps In Your Flow

People can really only hold about five to seven items in their short-term working memory.

If you want to get as many users through the onboarding process as possible, give them only as much information as they need and ask only as many questions as necessary.

That means capping your onboarding flow at no more than seven steps. Five would be better, but if the steps are super short and not overly complicated, you might be able to stretch it a bit.

To build out your steps, first, decide what type of onboarding flow would be the most beneficial for your app. For example:

  • A product setup walkthrough to help users personalize their experience or get started;
  • A series of questions that help identify what type of user they are;
  • A preview of the core features;
  • A video or GIF intro that welcomes the user and shows them how easy the app is to use;
  • Powerful statistics that reinforce the transformative effects of your product.

Then start piecing them together.

To add a new screen to the flow, use the “Flow” tab on the left. Use the “+Step” button at the bottom to add a new step, or you can click the plus sign that appears beneath your existing step button when you hover over it.

In Feathery’s ‘Flow’ control tab, users can add new steps with ease. (Source: Feathery)

You can rename the labels of each step by double-clicking on each block. Alternatively, if you hover over the block, three vertical dots appear to the right, which gives you the ability to duplicate, rename or delete it.

Step 3: Customize The Content

In terms of what makes an onboarding flow effective, you generally want to keep the design simple — lots of white space, concise and easy-to-follow language, and motivational imagery and colors.

Each step will include a combination of imagery and text. You’ll find all of the mix-and-match components you need under the “Elements” tab in the left sidebar.

Let’s go through an example of what you might put on one of these screens from top to bottom.

Step 3a: Add a Progress Indicator

To start, place a “Progress” element at the top of the page. Whenever you have a multi-step process, providing users with visibility into their progress is a must.

You can customize everything about this progress indicator, including:

  • Length,
  • Alignment,
  • Visibility,
  • Color,
  • Font,
  • Text placement,
  • Text styling.

When designing your progress bar, consider its prominence in terms of the rest of the design. You want it to be easy to find, but not so overpowering that it distracts from the actual message on the page.

Step 3b: Customize The Text

Before adding any imagery, lay down your text layers. Ideally, each screen should contain no more than 50 words or so.

Another thing to think about with text is hierarchy. If all of the text is the same size and styling, it could end up looking like one long block of text (even if it’s only 50 words).

To keep onboarding screens easy to read:

  • Limit the amount of text,
  • Keep it simple and jargon-free,
  • Choose a readable font,
  • Use headings at the top of the page,
  • Add bolding and italics for emphasis,
  • Place single- or multiple-choice options into blocks with visual companions.

In Feathery, use the “Text” field to add individual text layers.

You’ll have total control over how the text looks from the “Style” panel.

Step 3c: Add Imagery

With Feathery, you can add images and videos to your onboarding step designs using the corresponding elements — images you’ll have to upload from an external source, and videos will need to be hosted on YouTube and then embedded with a link.

For more complex configurations (like text and image blocks), use the element that most closely resembles what you need.

For instance, I want to create nine clickable blocks so that users can tell us how they found the app. For this, I’ll use the “Button Group” element:

You can edit the labels, add images, customize the layout and spacing, change the font, apply a border and shadow, and more.

The three side-by-side button blocks have been transformed into a colorful, custom-labeled grid of options. If they’re going to be clickable options like in this example, you’ll be able to program in hover effects to make them respond to your users’ touch.

Step 3d: Add Other Elements

You can add all kinds of interactive and attractive elements to your onboarding flows. For example:

  • Account setup/login forms;
  • File uploaders;
  • Slider selections;
  • Rating requests;
  • Payment forms.

It all depends on your goal for that step and the easiest way to get that information across or collect it from your users.

Step 4: Add Navigation Buttons

Giving your users full control over their onboarding experience is going to be crucial in getting as many of them through the process as possible.

Although it’s common to place a single “Continue” button at the bottom of each onboarding step, it might be a good idea to give them extra navigation options.

For instance, let’s say the step we designed above is the first in a series of six. We can place a “Next” button on the right side of the step. A “Previous” button wouldn’t make sense, but a “Skip” button would be useful. While it’s nice to find out how people discovered our apps, it’s not totally necessary, so giving them the option to skip is good.

Both buttons are placed side-by-side here. However, the “Skip” button has been designed as a plain text link as opposed to “NEXT,” which is more prominent. You can play around with how you present the various button options to your users.

Step 5: Enable Interactivity and Connectivity

Every step you build will have at least one element of interactivity. To configure your clickable elements, use the “Properties” setting on the right toolbar.

Let’s start with the “Button” elements.

You can configure more than just links to the next steps or pages in your app. You also have the ability to:

  • Set multiple actions.
  • Disable the button if the fields aren’t submitted.
  • Create custom validation rules that need to be met.
  • Set conditional logic that determines when the button appears in the flow.

In Feathery, users can program multiple actions for button elements and force certain rules to be met. (Source: Feathery)

When it comes to other clickable elements in Feathery, you’ll have similar options with regard to validation rules and conditional logic. However, the other settings will differ based on what the element is.

For example, the “Button Group” element asks you to set constraints. You can allow for multiple responses. You can also make the response optional, which you’ll need if you include a “Skip” button on that screen. When it comes to form fields, on the other hand, you may need to set limits on what can be entered or uploaded.

Interactivity is an important piece of the user onboarding flow. If there are any issues with getting through the process, you’re going to lose users before they get inside your app. The rest will likely expect the app to be just as confusing or difficult to use, which won’t bode well for user retention. So spend extra time on this step to make sure you get it right.

Step 6: Create The Flow

Depending on how you added new steps to your user onboarding flow, you might not need to do much in this step. However, it’s still a good idea to check.

To see what your flow looks like, click on the “Flow” tab on the left. Then click on the “Flow Editor” button to open your flow chart.

If you set up the steps initially to connect together, you’ll see them connected here. Additionally, if you programmed buttons to connect to other slides, you’ll see that reflected in the chart.

In addition to visualizing your flow, you can:

  • Create a new step,
  • Add new connections between steps,
  • Set the connecting element between two steps,
  • Add a condition so that different actions take users to different steps.

When you’re done, click the “Designer” button in the top-left to return to the main editor.

Step 7: Add Integrations (Optional)

There are different reasons to integrate other apps with your onboarding flow. For instance, you might want to track user engagement with a tool like Google Analytics. This would be helpful in keeping an eye on how many users are getting through each step. If they’re dropping out around the same point, your data will at least indicate where the greatest friction point is.

There are other tools that Feathery integrates with as well. Go to the “Integrations” tab at the top of your screen to find them.

You’ll be able to add a wide variety of functionality to your onboarding flows with integrations. Analytics is just the start.

For example, you can integrate with the following:

  • Firebase or Stytch to add an authenticated login step;
  • Stripe to collect one-off or recurring payment from new users;
  • Plaid to automatically collect user financial information;
  • Slack to notify your team when a new user has completed the process;
  • HubSpot to add new users to your CRM and automated email marketing campaigns.

A user onboarding form is a great way to get users into your app without the need for a sales rep or customer service associate to contact them. Adding integrations will allow you to streamline the onboarding and engagement process even more.

Step 8: Publish

When you’ve finished designing your user onboarding steps, hit the “Publish” button in the top-right corner. Use the down arrow to open a preview or the live form to see how your new onboarding flow looks and to test it out.

To add the new onboarding process to your app, use the “Publish” dropdown one more time to retrieve the JavaScript or React code.

The code will automatically be copied to your clipboard. Go into your CMS and add it wherever you want the element to appear — like in a pop-up at startup.

Wrapping Up

Is a user onboarding flow necessary? If you want to maximize conversions or collect necessary user information for other purposes, then yes.

Despite the critical role that onboarding plays in product adoption, designing a great onboarding experience for your users doesn’t have to be difficult. Whether you build it from scratch, use another app’s onboarding flow for inspiration, or start with a template, you can have a well-thought-out and beautifully designed user onboarding flow set up in no time at all.

Sign up for Feathery to start creating effective user onboarding flows of your own.

From Concept to Launch: The Ultimate Guide for Successful Client Briefings

Would you like to move qualified prospects through your web dev sales process more successfully, deliver consistently better results, and send your sales closing rates soaring? Of course you would, right?!

Well, good news – you’re in the right place to learn how! This no-hype guide to running a hyper successful client briefing session will show you how to boost sales of your web development services.

We’ll cover the following topics:

Your Client Briefing Secret Weapon

Q: Which of the following is an absolutely essential “must-have” to conduct a highly successful client briefing session?

A) A fancy office on the top floor of a skyscraper overlooking one of the 7 wonders of the world.

B) Sending out a stretch limo to pick up your prospective clients and drive them back after the briefing.

C) Serving clients chilled champagne, canapes, and caviar as soon as they arrive.

D) Having an impeccable sense of dress matching your suit with your hairstyle and the office decor.

Answer: None of the above.

To conduct a successful client briefing session, you need only two ears and …

A Needs Assessment Questionnaire

A Needs Assessment Questionnaire (NAQ) is an essential tool for your WordPress web development services business.

It’s a crucial part of an effective sales process as it helps you to:

  • Understand your client’s needs, preferences, and goals so you can provide them with the right solution for their needs.
  • Ask the right questions and gather the necessary information about the project’s scope, timeline, and budget to provide a realistic plan for the project and an accurate estimate of the project’s costs.
  • Identify any potential issues or concerns early in the sales process.
  • Manage the client’s expectations.
  • Qualify your prospect as being either a good fit for your services or not (yes, sometimes it’s better to let them go) and move them successfully through your sales process.
  • Establish a strong relationship with the client based on trust and communication.

Your questionnaire should be carefully crafted to glean the necessary information from the client while being concise and easy to understand.

It should also be customized to the client’s specific needs and provide clear instructions on how to complete it correctly, so that anyone in your business can conduct a client briefing session successfully.

By demonstrating a deep understanding of the client’s needs and goals, you can create a website or deliver a project that will hopefully exceed your client’s expectations. This, in turn, can lead to satisfied clients who are more likely to recommend your services to others.

The NAQ, then, is not just any old “questionnaire.” It’s an integral and valuable part of your sales process.

So, before we look at how to develop an effective Needs Assessment Questionnaire that will help you get better results in your business, let’s briefly go over the different stages of an effective sales system so we can have a clear understanding of where the Needs Assessment Questionnaire fits in.

The 7 Stages of an Effective Sales Process

An effective sales process typically consists of the following stages:

  • Stage 1: Initial Contact – This is the first stage of the sales process, where your potential client becomes aware of your service. They may visit your website, receive an email, phone call or recommendation, or see an advertisement, directory listing, etc.
  • Stage 2: Needs Assessment – In this stage, you (or your sales rep) ask questions to understand the client’s needs, challenges, and goals. The aim of this stage is to gather information about the client’s business, industry, and competition and to qualify them as a potential client.
  • Stage 3: Presentation – In this stage, you present a solution to the client’s problem or need. Your presentation may include a demonstration, samples of previous work, or a proposal.
  • Stage 4: Objections – In this stage, the client may raise objections or concerns about your proposed solution. You (or your sales rep) then address these objections and provide additional information or clarification.
  • Stage 5: Closing – In this stage, you (or your sales rep) ask for a decision. This may involve negotiating the price, terms, or delivery of the service.
  • Stage 6: Follow-up – After the sale, your business follows up with the client to provide onboarding (e.g. training), ensure satisfaction with your service, and address any issues that may arise. You may also look for opportunities to cross-sell or upsell other services.
  • Stage 7: Referral – The final stage is when your satisfied client refers your business to others who may benefit from your services. This can be a powerful source of new business and growth for your company.

The sales process described above emphasizes the importance of understanding your client’s needs and providing a solution that meets those needs. It also highlights the need for ongoing customer engagement and relationship-building to drive long-term business success.

Your NAQ is vitally important to completing Stage 2 (Needs Assessment) of your sales process successfully.

Chart - 7 Stages of Sales Process
Assessing your clients’ needs effectively will help you deliver a better solution.

This article focuses on the Needs Assessment stage of the sales process, so let’s take an in-depth look at the role your Needs Assessment Questionnaire plays in it.

The Needs Analysis Presentation

All you need to run an effective sales presentation is an effective script and an effective sales tool.

To illustrate this, let’s say that you are asked to give a slide presentation to an audience about a subject you know little to nothing about.

If you design your slide presentation well using the right content and the right slide sequence, all you would have to do is show a slide, read the words on the slide, show the next slide and repeat the process, and you could run a successful presentation.

More importantly, anyone in your business could consistently and repeatedly deliver a successful presentation by simply following the same process. Even if you went a little off-topic and ad-libbed every now and then, the tool (i.e. the slides) and its built-in script (i.e. the words on each slide) would still guide the presenter successfully through the entire process.

This is essentially what we are aiming to achieve in “Stage 2” of the sales system… an effective and repeatable presentation that delivers consistent results and moves your client successfully to the next stage.

Stage 2, then, is your Needs Analysis Presentation and consists of two main elements:

  1. The presentation script
  2. The Needs Assessment Questionnaire

The “presentation script” is what you say and do during your client briefing session to create the best user experience possible for your client.

This includes how you greet your potential client, what you do to make them feel comfortable (e.g. offer water, tea, or coffee), the words you use to start the briefing session, the questions you ask them during the briefing, how you structure the entire meeting so clients feel relaxed and open to share information that will allow you to assess their needs and qualify them as prospects, the words you use to end the meeting and set up the next stage of the process, and so on.

For example, the “opening script” for your Needs Analysis Presentation might go something like this:

“[Client name], as I mentioned to you when setting the appointment, the purpose of today’s meeting is for us to get a better idea of your business, what it does, what problems you need help solving, what kind of results you expect from your website, and so on.

I’ve done some research on your business and there are some questions I’d like to ask so we can get the full picture of what you need and how we can help you. This will probably take about 30 minutes or so.

I will then review the information carefully with my team and come back to you with a customized solution that will best suit your needs and your budget.

And if it turns out that we are not a perfect fit for working with each other, I’ll let you know and recommend a more suitable solution.

Are you ok for us to get started?”

***

After delivering the opening script above, you then complete the Needs Assessment Questionnaire with your client. This is the tool that will guide you successfully through your Needs Analysis Presentation.

After completing your NAQ, you then deliver the “closing script,” which could be something like this:

“[Client name], thank you… I really appreciate you taking the time to answer all of these questions. This gives me everything I need.

As I mentioned at the start of the meeting, give me a day or so to review this with my team. We’ll put together the solution we think will best deliver what you’re looking for and then we’ll meet again and go through everything in detail and answer any other questions you have.

Are you happy for us to set up the next meeting now?”

The above is Stage 2 in a nutshell. Its purpose is to help you set up the next appointment, where you deliver your solution and hopefully get the client’s business.

The more attention you put into designing and structuring your Needs Assessment Questionnaire, the better the client’s experience will be and the more smoothly, consistently, and effectively your client meetings will run.

Even better, if you plan to scale your business, you will be able to train anyone to run client briefings competently. All they will need to do is learn the opening and closing scripts and use the Needs Assessment Questionnaire to complete this stage.

Now that we understand what the Needs Assessment Questionnaire’s purpose is and where it fits into the sales process, let’s start building an effective NAQ for your business.

Designing Your Needs Assessment Questionnaire

Since there is no “one size fits all” way to build a web development business, this section will provide a general framework to help you design a Needs Assessment Questionnaire customized to suit your specific needs, with a list of sections and suggested questions you can include in your NAQ.

We’ll begin by looking at the steps involved in creating a NAQ.

How To Create An Effective NAQ For Your WordPress Web Development Business

Here are the steps involved in creating an effective Needs Assessment Questionnaire that will enable you to gather the critical information needed to deliver successful WordPress web development services to your clients:

  1. Identify the key areas of information you’ll require: Begin by outlining the main areas of information you need to gather from the client, such as their business goals, target audience, website functionality, content needs, marketing strategies, budget, and timeline expectations.
  2. Determine the types of questions to ask: Once you have identified the main areas of information you need to gather, determine the types of questions to ask. Open-ended questions are ideal as they encourage clients to provide detailed information, allowing you to better understand their needs and preferences.
  3. Develop specific questions: Put together key questions for each area of information to gather more detailed insights. For example, to understand the client’s business goals and challenges, you could ask “What are your top business goals, and what challenges are you facing in achieving them?”
  4. Organize the questionnaire: Ensure that the questions flow logically and are easy for clients to understand. Group similar questions together, and consider using subheadings to organize the questionnaire by topic.
  5. Include instructions and explanations: Provide context for each question by explaining why you are asking it and how the answer will help you develop a customized solution for the client. The best way to do this is to turn this explanation into a “script” and write it into your questionnaire after each of the section headings and subheadings (e.g. “Now, I’d like to ask you questions about your current marketing efforts. This will help us understand what you are currently doing to generate new leads and drive traffic to your site, how these activities are performing, and if there are any issues that we would need to look at or improve…”). Including clear instructions and explanations will help clients understand the purpose of the questionnaire and what to expect in the web development process, and help you to fill it out.
  6. Test the questionnaire: Try out your newly created questionnaire on a few clients to ensure the questions are clear, relevant, and useful. Make any necessary adjustments to ensure the questionnaire effectively gathers the information needed for successful web development projects.
  7. Continuously review and refine: The questionnaire is not set in stone, so adjust and improve it over time based on feedback from clients and your team members. As your business evolves and new trends emerge, make sure that the questionnaire remains up-to-date and relevant.

So that’s the outline of the process. Now, let’s start putting a Needs Assessment Questionnaire together.

1) Decide What Information You Need

As mentioned above, the first step is to identify the key areas of information you need to gather from clients.

Mind-mapping the process at this stage can be useful for brainstorming ideas and organizing your thoughts.

Needs Assessment Questionnaire - Mind map
A mind map is a useful tool for planning your NAQ.

2) Define Your NAQ Categories

Once you have a clear idea of what information you need from your client, the next step is to organize this information into question categories. These will form the main sections of your NAQ.

Needs Assessment Questionnaire categories
Define the categories you will add to your Needs Assessment Questionnaire.

Think about the logical flow of your questionnaire’s sections, especially when planning subcategories, such as hosting and domains, design, functionality, and content for the website, or marketing-related questions.

For example, when discussing your client’s website needs, should you start by asking questions about hosting and domains and then follow with questions about design, functionality, and content, or is there a better sequence that you feel would make the discussion flow more smoothly?

Also, consider things like:

  • Which areas of information are absolutely essential to get from the client? Where should you place these in your NAQ to make sure they get covered in case the meeting is cut short, goes off on a tangent, or the client starts to feel overwhelmed?
  • Which areas of discussion could potentially blow out and take up a big chunk of the meeting? How can you design the process to quickly rein the client back into focus if this happens?

All of these details are very important when building a process flow for your NAQ’s design.

3) Decide on the Format

How are you going to run your Needs Analysis Presentation and record the client’s answers?

Will your client briefing sessions be done face-to-face, over the phone, online via video conferencing, or a combination of different styles?

Will your NAQ be printed with answers recorded as handwritten notes, in an electronic document, or a custom form application running from a phone, tablet, or laptop?

Probably the easiest and most effective way to start is with pen and paper. A printed questionnaire can serve as your prototype. This will allow you to review, tweak, test, and improve your sections, questions, question flow, accompanying instructions, fields for entering answers, and so on after every client briefing session.

Once you have a NAQ that delivers you consistent results, you can then turn your prototype into a format better suited for your business, like an electronic questionnaire or even an app. Or, just keep using a printed questionnaire if it works for you. Why complicate something when the simplest approach works?

4) Add Questions to Your NAQ Sections

Now that you have planned everything out, the next step is to add questions to each section of your NAQ.

Note: You don’t have to add every suggested question below to your NAQ. Just pick out the ones you need. Also, keep in mind that some questions may overlap for different sections, so include them where you think it would make the most sense for you to ask.

Let’s go over the main sections we suggest you consider including in your NAQ:

1) Overview

Your NAQ is an internal business document. It’s not something that you will leave with the client. So, it’s probably a good idea to add an Overview section. This could include a checklist of everything you need to cover during the session, such as documents or information the client needs to provide, instructions for completing certain sections, even your opening script.

2) Client’s Business

As a website developer, it’s important to understand the client’s business goals and challenges to create a website that meets their specific needs. During the client briefing session, it’s essential to ask the right questions to identify the client’s goals, target audience, unique selling points, and competition.

Questions about the client’s goals can include inquiries about what they hope to achieve with their website, whether they are looking to increase sales, generate leads, or increase brand awareness. Knowing the client’s goals will help you tailor your approach to meet these objectives.

Target audience questions should delve into the demographics of the client’s customers, their interests and behaviors, and what they are looking for in a website. By understanding the target audience, you can create a website that appeals to their needs and preferences.

Unique selling point questions can help you understand what sets the client’s business apart from the competition. This information will help you highlight these unique selling points on the website and create a competitive advantage for the client.

Finally, questions about the competition can help you understand what other businesses are offering and how the client’s website can differentiate itself. This information will help you create a website that stands out from the competition and attracts more customers to the client’s business.

Here is a list of questions you can include in this section of your NAQ:

Business Details

Prefill some of these details before your client briefing and ask the client to confirm these:

  • Company name: The legal name of the client’s business entity.
  • Contact person name: The name of the individual representing the client, such as the CEO or a manager.
  • Address: The physical address of the client’s business, including the street address, city, state/province, and zip/postal code.
  • Phone number: The primary phone number for the client’s business.
  • Email address: The email address of the client’s business or the contact person.
  • Website URL: The website address of the client’s business (if they have one).
  • Social media handles: The client’s social media handles (if applicable), such as Twitter, Facebook, Instagram, etc.
  • Industry: The industry that the client’s business operates in, such as finance, healthcare, technology, etc.
  • Legal status: The legal status of the client’s business, such as LLC, corporation, sole proprietorship, etc.
  • Revenue: The annual revenue of the client’s business.
  • Number of employees: The number of employees working for the client’s business.
  • Tax ID: The client’s tax identification number (if applicable).
  • Payment information: The payment information that the client uses to pay for goods or services, such as a credit card, bank account, or payment service.
  • Additional notes: Any additional notes or comments about the client that may be helpful for future reference.

Note: Some of this information may need to be asked or obtained at a later stage of the sales process if applicable (e.g. Revenue, Tax ID, Payment information).

About Your Business
  • What is your business and what does your business do?
  • What are your unique selling points (USPs)?
  • Who is your target audience?
  • What are the demographics of your target audience?
  • What are the interests and behavior patterns of your target audience?
  • What markets do you sell your products and services in? (Local, Regional, National, Global)
  • Is your business seasonal?

Your Business Goals
  • What are your primary business goals and objectives?
  • What difficulties are you currently experiencing in achieving them?
  • How do you envision an agency like ours will help you address these challenges?

Your Competition
  • Who are your main competitors?
  • What makes your business unique compared to your competitors?
  • What are the strengths and weaknesses of your competitors’ websites?
  • What do you like and dislike about your competitors’ websites?

3) Client’s Website

Your Needs Assessment Questionnaire should take into account that a potential client may or may not already have a website. If they do, it is essential to conduct a thorough assessment of the existing site. This will help you understand how it currently works, identify any issues that need to be addressed, and ensure that the end product is tailored to their specific needs and goals.

Here is a list of questions to ask a potential client during the client briefing session about their website to help you gain a comprehensive understanding of their needs and requirements in terms of functionality, design, content, and performance:

Hosting & Domains
  • What are your requirements for website hosting and maintenance?
  • Do you need help with website hosting or domain registration?
  • Do you have any registered domains?
  • Have you purchased web hosting for your site?

For existing websites, include the following questions:

  • Do you have any additional domains?
  • Do you have any big changes (like a migration) planned within the next 12 months?

General
  • What is the purpose of your website?
  • What are your primary business goals for this website? Is it achieving these goals?
  • What is the estimated size of your website (number of pages)?
  • Are there any legal or regulatory requirements that need to be considered for your website?

For existing websites, include the following questions:

  • What are the current issues or challenges you are experiencing with your website?

Design
  • Do you have any specific design preferences or requirements for your website?
  • Do you have any specific branding or visual identity guidelines that need to be followed?
  • What is your preferred color scheme?
  • Do you have any existing design elements that you would like us to incorporate?
  • What is your preferred tone of voice for your website?

Functionality
  • What features and functionalities do you want your website to have (e.g. eCommerce, contact forms, appointment scheduling, user registration, etc)?
  • Do you require any special integrations (e.g. social media sharing, Google Analytics, email marketing software, etc)?
  • What are your expectations for website performance (e.g. load time, speed, mobile responsiveness)?
  • Do you have any specific security requirements for your website?
  • Do you have a plan in place for website backups and security?

For existing websites, include the following questions:

  • Is your website mobile-friendly and responsive?
  • How does your website perform in terms of loading speed?
  • Is your website optimized for search engines?
  • Do you have any analytics or tracking tools installed on your website?
  • Has your website ever been negatively impacted by any core algorithm updates?

Content
  • How will you be creating and managing content for your website?
  • What type of media will you be using (e.g. images, videos, audio)?
  • Will you be updating the website content yourself or do you need ongoing maintenance and updates?
  • Do you need any help creating new content for your website?

For existing websites, include the following questions:

  • What content management system (CMS) are you currently using?
  • How frequently do you update your website’s content?
  • Do you have any existing website content that you would like to migrate to the new website?
  • Do you have any existing content that you would like us to use?

Also…

If content services are part of your offering, see the additional “Content Marketing” section below for more questions you can ask.

4) Client’s Marketing Efforts

By understanding your client’s marketing efforts, you can ensure that the website you create for them will be optimized for success.

For example, you can ask about the client’s SEO efforts, including any past keyword research or optimization. It is also important to understand any PPC campaigns the client has run, as well as their social media presence and email marketing efforts. Additionally, you can inquire about any PR campaigns the client has been a part of, including media outlets they have been featured in and soundbites from their representatives.

Here is a list of questions you could ask a potential client during the client briefing session to identify their marketing efforts related to SEO, PPC, social media, email marketing, PR, etc:

Marketing Goals

  • What are your primary marketing objectives, and how do you plan to achieve these?
  • Do you have a marketing plan in place for your website?
  • Have you done any marketing research to identify your target audience’s needs, preferences, pain points, and online behavior?
  • Have you done any competitive research to understand the strategies your competitors are using to attract and retain customers?
  • Do you have a content marketing strategy in place? If so, what types of content have you found to be most effective in engaging your target audience?
  • What are your expectations for the role of your website in your overall marketing strategy, and how do you see it contributing to your business objectives?
  • Do you have any particular marketing challenges or pain points that you would like us to address through the website development process?
  • What increase in organic traffic (numbers or percentage) are you aiming for in the next six to 12 months?
  • How many conversions (leads and sales) would you like to get in the next six to 12 months?
  • Can you list any freelancers or agencies you have previously worked with? If so, what processes did you have in place with them that you would like for us to continue with, and what would you like to change?

Marketing Channels

  • How do you plan to promote your content to attract visitors to your website?
  • Have you ever invested in search engine optimization (SEO) services for your website? If so, what were the results?
  • Do you currently use pay-per-click (PPC) advertising to drive traffic to your website? If so, what platforms do you use, and what has been your experience with them?
  • Have you established a presence on social media? If so, which platforms do you use, and how frequently do you post updates?
  • Have you used email marketing to promote your business or website? If so, what has been your experience with it?
  • Have you invested in public relations (PR) services to increase brand awareness or promote your products/services? If so, what has been the outcome? Can you provide us with a list of the media outlets that have featured you and any existing soundbites from your representatives?
  • Are there any specific keywords or phrases that you would like your website to rank for in search engine results pages (SERPs)?
  • How do you plan to allocate your marketing budget across different channels, and what portion of it are you willing to invest in website development and maintenance?
  • Do you require any specific SEO (Search Engine Optimization) features or services?
  • Do you need assistance with setting up and integrating social media accounts?
  • What’s your top acquisition channel?

Marketing Performance

  • How do you plan to measure the success of your website?
  • How do you currently measure the success of your marketing efforts, and what metrics do you track?
  • Are you currently doing anything to acquire links? Do you have a list of websites you’d like us to start with?
  • Have you ever purchased any paid links or been part of any link schemes?
  • Has your website experienced any issues with link penalties?
  • What are the primary calls to action for your website?

Also…

Access to platforms:

  • Do you have Google Analytics set up on your website? If so, please share access with [your email]
  • Do you have Google Search Console set up on your website? If so, please share access with [your email]
  • Do you have Google Ads set up on your website? If so, please share access with [your email]

Access to documents:

  • We may need access to some existing documents to help us align our campaign with those already running. Can you share these documents with us?
  • Can you provide us with keyword research done by previous agencies/staff?
  • Can you provide us with reports/work done by the previous agency?

5) Content Marketing

The success of a WordPress website is heavily dependent on the quality and relevance of its content. Understanding the client’s content needs and preferences during the needs analysis helps you create a website that aligns with their brand identity and resonates with their target audience.

So, in addition to gleaning information about the client’s marketing efforts and goals across channels like paid advertising and social media, take time to explore their content plans during the briefing.

During the needs analysis, it’s important to ask the client about the types of content they want to create and publish on their website. This could include blog posts, videos, infographics, and more. Additionally, the web developer should inquire about the topics that the client wants to cover, the frequency at which they want to publish content, and the overall tone and voice that they want to convey.

Here are some questions you can ask during the client briefing session to gain a better understanding of the client’s content marketing needs and preferences and create a website that supports those goals:

Content Creation
  • What are the main topics that your audience is interested in?
  • What topics do you want to cover in your content?
  • What type of content do you plan on publishing on your website?
  • What types of media do you plan on incorporating into your content, such as images, videos, or infographics?
  • How often do you plan on publishing new content?
  • Who will be responsible for creating content for your website?
  • What tone and voice do you want your content to convey?
  • Have you identified any gaps in your content that need to be addressed?
  • Do you have any existing content that can be repurposed, updated, or optimized for SEO on your new website?
  • Are there any particular examples of content that you like or dislike?
  • Will you need assistance creating content?

Content Management
  • How do you plan to manage your content?

6) Client’s Budget and Timeline

Before starting any project, it is crucial to set clear expectations for the budget and timeline.

Asking the right questions about the client’s budget and their timeline expectations during the briefing session will help you and your client understand the scope of the project and plan accordingly to ensure the success of the web development project.

Here are some questions you can ask a potential client to gain a better understanding of their budget constraints, project scope, and timeline expectations to create a proposal tailored to their needs and budget:

Timeline
  • What is the scope of the project?
  • What is the timeline for completing this project?
  • Are there any important deadlines that we should be aware of or strict deadlines that must be met?
  • Are there any specific project milestones that you would like to achieve?
  • How flexible are you with the project timeline?

Budget
  • What is the budget you have allocated for this project? (Ideal, minimum, maximum)
  • Have you worked with a website developer before? If so, what was your budget for that project?
  • Are you looking for a developer to work on a fixed budget or hourly rate?
  • Are there any additional services or features that you would like to include in the project?
  • Are there any budget constraints that we should be aware of?
  • Do you have a preferred payment schedule or milestone-based payment plan?
  • Is there any flexibility in the project scope, budget, or timeline?

7) Additional Notes

Create a space in your questionnaire for additional notes. Use this space to record your own thoughts, observations, contact names, things your client says that you can quote, etc.

What to Do Before and After Your Client Briefing Session

The Needs Analysis Presentation is an integral part of your overall sales process. Getting your presentation scripts and Needs Assessment Questionnaire right is vitally important.

But so is what you do before and after this stage.

Let’s look at what you can do to maximize the results from your client briefing sessions.

Before The Client Briefing Session

Here are the steps you should take before conducting your client briefing session to ensure that you are well-prepared and can conduct a successful needs analysis that will lead to a customized solution for your client’s website and marketing needs:

  • Research the client’s business: Before meeting with the client, research their business and industry to understand their target audience, competitors, and market trends.
  • Identify the client’s pain points: Determine the client’s pain points by reviewing their existing website, marketing materials, and customer feedback.
  • Customize the questionnaire: Depending on the format of your NAQ, you may be able to customize the questionnaire for each client based on their specific business, website, and marketing needs. If not, a simple way to do this is to create your ideal NAQ and then simply cross off any unnecessary questions you can skip during the client briefing session, or add any specific questions to the “Additional Notes” section of the questionnaire.
  • Set clear objectives for the meeting: Determine the objectives for the meeting with the potential client, such as understanding their goals, identifying their website requirements, and discussing their budget.
  • Schedule the meeting: Schedule the client briefing meeting at a time that is convenient for both parties, and make sure the meeting is held in a distraction-free environment.
  • Rehearse the presentation: Practice your presentation, review your scripts, and visualize how your client briefing meeting will run so you can create a positive and successful client experience.

After The Client Briefing Session

After conducting your needs analysis presentation with a potential client, make sure to complete the following steps to maximize your results:

  • Analyze the information: Review and compile all the information gathered during the needs analysis session. This includes the client’s business goals, website requirements, marketing efforts, and budget. If your analysis qualifies the potential client as a prospect for your business, continue with the steps below. If not, proceed no further with this process. Instead, reach out to the client and explain why you don’t think you will be the best fit for their needs.
  • Develop a proposal: Develop a comprehensive proposal that outlines your website development process, timeline, deliverables, and costs. The proposal should address the specific needs and goals of the client and should highlight how your WordPress web development services will help the client achieve their objectives.
  • Customize the proposal: Once the proposal is developed, customize it to address any specific concerns or questions the client raised during the needs analysis session. Ensure that the proposal reflects the client’s unique requirements and preferences.
  • Provide a clear quote: Give the client a quote that clearly outlines the costs associated with your services. It should be transparent, easy to understand, and reflect the services outlined in the proposal.
  • Provide a timeline: Give the client a detailed timeline for the WordPress web development project that outlines key milestones and deliverables. The timeline should be realistic and achievable, and should reflect the client’s timeline expectations.
  • Schedule the next meeting: Book a meeting at a time that is convenient for both parties, in a distraction-free environment, where you will present your solutions and recommendations to the client.

Depending on how you structure your sales process, you may also want to:

  • Schedule a follow-up call or meeting with the client to answer any outstanding questions or clarify any concerns or misunderstandings they may have about the proposal, quote, or timeline.
  • Provide additional information or clarification as needed to ensure the client is fully informed and comfortable moving forward with the proposal, including project scope, timeline, and cost.
  • Finalize the proposal, quote, and timeline with the client, confirm their agreement, and obtain any necessary signatures or approvals to move forward with the WordPress web development project.

Finally, you have asked clients lots of questions about their business, so be prepared if clients have some questions about your business.

If Questions Arise, Systematize

As a WordPress web developer, one of the most important steps you can take to ensure the success of your projects is to conduct a thorough needs analysis with your clients.  This will help you understand your client’s business, goals, existing website, marketing efforts, content needs, budget, and timeline.

Asking the right questions during the client briefing process is crucial for delivering the best solution that will not only meet their needs and budget, but hopefully also exceed their expectations.

Using a needs analysis tool like a Needs Assessment Questionnaire can save you valuable time during the client briefing and in the process of qualifying prospects for your business.

Additionally, it can help your business to:

  • Identify potential roadblocks and challenges upfront, allowing you to develop a strategy that addresses these before they become a problem.
  • Keep your project on track, on budget, and on time.
  • Create customized WordPress solutions tailored to your clients’ unique needs, goals, and challenges.
  • Establish a strong relationship with your client that can lead to repeat business, referrals, and long-term partnerships.

We hope you have found this information useful. Apply it to your business and watch your sales results improve!

Compare The Best Landing Page Creation Tools

So much goes into an effective landing page. It takes practice, testing, analytics, design skills, keyword research, and so much more. 

Fortunately, there are plenty of landing page creation tools that take the guesswork out of building and optimizing your landing pages. This guide covers the best ones.

Landing Page Builders

These are typically websites or web-based services that let you build a landing page by using an HTML editor or drag-and-drop functionality. Some will give you a basic editor with different landing page templates to choose from.

Unbounce

Unbounce landing page builder splash page

Unbounce is one of the most well-known landing page builders simply because it was one of the first web-based services that allowed people to build and test landing pages without relying on the IT department.

Here’s the pricing breakdown:

  • Launch—$74/month billed annually or $99 billed monthly for sites getting up to 20,000 unique monthly visitors
  • Optimize—$109/month billed annually or $145 billed monthly for sites getting up to 30,000 unique monthly visitors
  • Accelerate—$180/month billed annually or $240 billed monthly for sites getting up to 50,000 unique monthly visitors
  • Concierge—$469/month billed annually or $625 billed monthly for sites getting more than 100,000 monthly visitors

Additionally, you can test as many landing pages as you want, and Unbounce offers a variety of templates for web-based, email, and social media landing pages.

Instapage

Instapage landing page creation tool  homepage

Instapage is a bit different from your typical landing page builder. It comes with a variety of templates for different uses (lead generation, click-through, and “coming soon” pages), but what sets it apart is that it learns from the visitors who come to your landing pages.

You can view real-time analytics data and easily determine the winners of your split tests, while tracking a variety of conversion types, from button and link clicks to thank-you pages and shopping cart checkouts.

Instapage also integrates with a variety of marketing tools and platforms, including:

  • Google Analytics
  • Mouseflow
  • CrazyEgg
  • Mailchimp
  • Aweber
  • Constant Contact
  • Facebook
  • Google+
  • Twitter
  • Zoho
  • And more

A free option is available if you’d like to try it out, and a Starter package makes landing page creation and testing a bit easier on the wallet of startups and new entrepreneurs.

More advanced features like the aforementioned integrations kick in with the Professional package at $79/month, but if you’d like to get landing pages up and running quickly, it’s hard to beat the stylish templates that Instapage provides.

Launchrock

Launchrock landing page creation tool homepage

Launchrock is not so much a landing page builder as it is a social and list-building placeholder. Combining “coming soon” pages with list building capabilities, Launchrock also includes some interesting social features that encourage users to share the page with others.

For example, if you get X people to sign up, you’ll get Y. It also includes basic analytics and the ability to use your own domain name or a Launchrock-branded subdomain (yoursite.launchrock.com). You can customize the page via the built-in HTML/CSS editor if you know how to code.

Launchrock is free and requires only an email address to get started.

Landing Page Testers/Trackers

While many landing page builders also include testing and tracking, they usually do one or the other well, but not both.

Of course, when you’re just starting out, it’s a good idea to take advantage of free trials and see which service works best for your needs.

Here are a few of the most popular ones available for testing and tracking your campaigns:

Optimizely

Optimizely landing page creation tool homepage

Optimizely is often touted as a good entry-level product for when you’re just starting out and working toward upgrading to something bigger and better as your business grows.

But with prices starting at $17/month and a free 30-day trial period, it’s a powerful product in its own right.

There are some limitations with the lower level packages. For example, multivariate testing is not available at the Bronze or Silver levels. It only becomes a feature at the Gold level, which will set you back $359/month.

On the upside, Optimizely lets you conduct an unlimited number of tests and also allows for mobile testing and personalization.

You get an unlimited number of experiments and can edit them on the fly, but doing so can make it easy to lose track of which version of which page you were working on.

Integration with Google Analytics also leaves something to be desired; for example, Optimizely can’t segment custom data (like PPC traffic) or use advanced analytics segments.

You can also tell Optimizely which points on your website you consider “goals”, ranging from email subscriptions to purchases and checkout, and it will track those items independently.

Overall, it does a great job with a simple and intuitive user interface and is ideal for those just starting to optimize their landing pages.

CrazyEgg

CrazyEgg landing page creation tool homepage.

CrazyEgg is the definitive heat map and visualization service to help you better understand how your website visitors are interacting with your landing pages.

Reports are available in “confetti” style, mouse click/movement tracking, and scrolling heat map formats.

This gives you an all-in-one picture of where your visitors are engaging with your pages (and where you could improve that engagement).

CrazyEgg landing page creation tool confetti style report example.

An example of a CrazyEgg click heatmap. Warmer colors indicate more activity

Although CrazyEgg doesn’t consider itself a landing page testing and tracking solution, it does take you beyond the core information that Google Analytics gives you to show you actual user behavior on your landing pages.

Pricing starts at $9/month for up to 10,000 visitors with 10 active pages and daily reports available. A 30 day free trial is also available.

Hubspot

Hubspot landing page creation tool example

More than a tracking/testing service, Hubspot’s landing pages offer extremely customizable elements that let you tailor each page to precisely match your customers’ needs.

This lets you devise alternative segments for each “persona” you’ve created — driving engagement and conversion rates even higher.

The packages are pricey ($200/month starting out) for first-time landing page optimizers, but larger companies and organizations will see the value built into the platform.

Beyond its smart segmenting, Hubspot also offers a drag and drop landing page builder and form builder. This is all in addition to its existing analytics, email marketing, SEO and other platforms.

Visual Website Optimizer

Visual Website Optimizer landing page creation tool example

If you’d like a more creative, hands-on approach to your landing pages, along with fill in the blanks simplicity, Visual Website Optimizer is as good as it gets.

Where this package really shines, however, is through its multivariate testing. It also offers behavioral targeting and usability testing along with heat maps, so you can see precisely how your visitors are interacting with your landing pages, and make changes accordingly.

You can also use the built-in WYSIWYG (what you see is what you get) editor to make changes to your landing pages without any prior knowledge of HTML, CSS or other types of coding.

Results are reported in real-time and as with Hubspot, you can create landing pages for specific segments of customers.

Pricing for all of these features sits in the middle of the pack of contenders, with the lowest available package starting at $50/month. Still, it’s a good investment for an “all in one” service if you don’t need the advanced features or tracking that other products provide.

Ion Interactive

Ion Interactive landing page creation tool example.

Ion Interactive’s landing page testing solution could set you back several thousand dollars per month, but it’s one of the most feature-packed options available, letting you create multi-page microsites and different touchpoints of engagement, all with a variety of scalable, dynamic customization options.

If you’d like to take the service for a test drive, you can have it “score” your page based on an in-house 13-point checklist. A free trial is also available, as is the opportunity to schedule a demo.

Of course, once you’ve decided on the best building, testing and tracking solution, there’s still work to be done.

Before you formally launch your new landing pages, it’s a good idea to get feedback and first impressions — not just from your marketing or design team, but from real, actual people who will be using your site for the first time.

Here are a few tools that can help you do just that.

Optimal Workshop


Optimal Workshop actually consists of three different tools. OptimalSort lets you see how users would sort your navigation and content, while Treejack lets you find areas that could lead to page abandonment when visitors can’t find what they’re looking for.

Chalkmark lets you get first impressions from users when uploading wireframes, screenshots or other “under construction” images.

Through these services, you can assign tasks to users to determine where they would go in order to complete them. You can also get basic heat maps to see how many users followed a certain route to complete the task.

You can buy any of the three services individually, or purchase the whole suite for $1,990/year. A free plan with limited functionality and number of participants is also available if you’d like to try before you buy.

Usabilla

Usabilla landing page tool homepage

Usabilla allows you to immediately capture user feedback on any device, including smartphones and tablets – a feature that sets it apart from most testing services.

Feedback is gathered via a simple, fully customizable feedback button that encourages customers to help you improve your site by reporting bugs, asking about features, or just letting you know about the great shopping experience they had.

Usabilla also lets you conduct targeted surveys and exit surveys to determine why a customer may be leaving a page.

They also offer a service called Usabilla survey which is similar to other “first impression” design testing services and lets visitors give you feedback on everything from company names to wireframes and screenshots.

Pricing starts at $49/month and a free trial is available.

5 Second Test


Imagine you want visitors to determine the point of a certain page. What if they could only look at it for five seconds and then give you their opinion? Five Second Test makes this possible, and it’s incredibly quick and easy to set up.

Case in point — you can try a sample test to see what a typical user would see. In my case, I was asked my first impressions of an app named “WedSpot” and what I’d expect to find by using such an app.

It’s simple questions like these that can give you invaluable insights, and all for just five seconds of your users’ time.

It’s free to conduct and participate in user tests through Five Second Test.

Other Helpful Tools

Beyond usability testing and user experience videos, there are a few other tools that your landing pages can benefit from:

Site Readability Test


Juicy Studio has released a readability test that uses three of the most common reading level algorithms to determine how easy or difficult it is to read the content on your site.

You’ll need to match the reading level with your intended audience, but these tests will give you some insight into simplifying your language and making your pages more readable for everyone.

You simply type in your URL and get your results in seconds. You can also compare your results to other typical readings including Mark Twain, TV Guide, the Bible and more.

Pingdom Website Speed Test


Page loading time is a huge factor in your website’s bounce rate and lack of conversions. Simply put, if your page loads too slowly, visitors won’t wait around for it to finish.

They’ll simply leave and potentially go to your competition. Using Pingdom’s website speed test, you can see how fast (or slow) your website is loading.

Beyond the speed of your website itself, the service will also calculate your heaviest scripts, CSS, images, or other files that could be slowing down your pages.

You should note that testing is conducted from Amsterdam, the Netherlands, so how close or far your server is from there will also factor into the results.

It’s free to test your site on Pingdom.

Browser Shots


Although this is the last entry in our series of helpful tools, it is by no means any less important. Testing your landing pages in a multitude of browsers on a variety of operating systems is crucial to your pages’ overall success.

Fortunately, BrowserShots.org makes this process incredibly easy. You can test your pages on all current versions of the web’s most popular browsers, as well as older versions of those browsers.

It does take time for browser screenshots to be taken and uploaded for you to see the results. You can sign up for a paid account and see them faster, but for a free tool, it’s no problem to wait a little while and see just how accessible your page is to visitors on a variety of operating systems, browsers, and browser versions.

The Top Landing Page Creation Tools in Summary

The best landing page creation tools help you with keyword research, split testing, content creation, and everything else you need to drive conversions.

Remember, landing page creation is not a one-and-done process. So make sure you assess tools that will also help you optimize your landing pages after you’ve created them.

Keys To An Accessibility Mindset

How many times have you heard this when asking about web accessibility? “It’s something we’d like to do more of, but we don’t have the time or know-how.”

From a broad perspective, web accessibility and its importance are understood. Most people will say it’s important to create a product that can be used by a wide array of people with an even wider range of needs and capabilities. However, that is most likely where the conversation ends. Building an accessible product requires commitment from every role at every step of the process. Time, priorities, and education for all involved so often get in the way.

Performing an accessibility audit can cost a lot of time and money. Acting on the results, across design, development, and QA (Quality Assurance), can cost even more. And that’s before the other heavy investment: for every role involved, the learning curve for accessibility can be steep.

There’s so much nuance and technical depth when learning about web accessibility. It’s easy to feel lost in the trees. Instead, this article will take a look at the forest as a whole and demonstrate three keys for approaching accessibility naturally.

The POUR Principles of Web Accessibility

It may sound too simple, but we can break web accessibility down into four core principles: Perceivable, Operable, Understandable, and Robust. These principles, known as POUR, are the perfect starting point for learning how to approach accessibility.

Perceivable

What does it mean for content to be perceivable?

Let’s say you’re experiencing this article by reading it. That would mean the content is perceivable to people who are sighted. Perhaps you’re listening to it instead. That would mean the content is perceivable by people who engage with content audibly.

The more perceivable your content is, the more ways people can engage with it.

Common examples of perceivable content would be:

  • Images with alternative descriptive text,
  • Videos with captions and/or subtitles,
  • Indicating a state with more than just color.
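
In markup, the first two of those examples might look something like this minimal sketch (the file names and descriptions are purely illustrative):

<!-- An image with alternative descriptive text -->
<img src="mountain-trail.jpg" alt="A hiker pausing at a fork in a mountain trail">

<!-- A video with a captions track alongside the video source -->
<video controls>
  <source src="product-tour.mp4" type="video/mp4">
  <track kind="captions" src="product-tour.en.vtt" srclang="en" label="English">
</video>

Each of those additions is another way the same content can be perceived.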

A terrific real-world example of perceivable content is a crosswalk. When it is not safe to cross the street, there is a red icon of a standing figure and a slow, repeating beep. Then, once the streetlights change and people can cross safely, the icon changes to a green figure walking, and the beeping speeds up. The crosswalk communicates with understandable icons, colors, and sound to create a comprehensive and safe experience.

Operable

Operable content determines whether a person can use a product or navigate a website.

It is common for the person developing a product to create one that works for themselves. If that person uses a mouse and clicks around the website, that’s often the first, and sometimes the only, experience they develop. However, the ways for operating a website extend far beyond a traditional mouse and keyboard.

Some important requirements for operable content are the following:

  • All functionality available by mouse must be available by the keyboard.
  • Visible and consistent keyboard focus for all interactive elements.
  • Pages have clear titles and descriptive, sequential headings.
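
For instance, the second requirement above, a visible and consistent keyboard focus, can often be met with a few lines of CSS rather than removed with outline: none. The selector and colors in this sketch are only an example:

<style>
  /* A clear, consistent focus indicator for keyboard users */
  a:focus-visible,
  button:focus-visible {
    outline: 3px solid #1a73e8;
    outline-offset: 2px;
  }
</style>

<a href="/pricing">Pricing</a>
<button type="button">Open menu</button>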

Understandable

What good is creating content if the people consuming it cannot understand it?

Understandable content is more than defining acronyms and terms. A product must be consistent and empathetic in both its design and content.

Ways to create an understandable experience would include:

  • Defining content language(s) so that assistive technologies can interpret the content correctly.
  • Navigations that are repeated across pages are in the same location.
  • Error messages are descriptive and, when possible, actionable.
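
As a small illustration of the first point above, declaring the page language (and any inline language changes) takes a single attribute. This sketch is deliberately minimal:

<!-- The page language helps assistive technologies choose the right pronunciation rules -->
<html lang="en">
  <body>
    <!-- An inline passage in another language declares its own language -->
    <p>As the French say, <span lang="fr">c'est la vie</span>.</p>
  </body>
</html>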

In Jenni Nadler’s article, “When Life Gives You Lemons, Write Better Error Messages”, she describes her team’s approach to error messaging at Wix. With clear language and an empathetic tone, they’ve created a standard in understandable messaging.

Robust

In a way, many of us are already familiar with creating robust content.

If you’ve ever had to use a compiler like Babel to transpile JavaScript for greater support, you’ve created more robust content. Now, JavaScript is just one piece of the front end, and that same broad, reliable approach should be applied to writing semantic HTML.

Ways to create robust markup include:

  • Validating the rendered HTML to ensure devices can reliably interpret it.
  • Using markup to assign names and roles to non-native elements.
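
To illustrate the second point, here is a hedged sketch of a custom control next to its native equivalent. The custom version must be given its role, accessible name, focusability, and keyboard handling by hand, which is why the native element is usually the better choice:

<!-- Non-native control: role, name, and focus must all be assigned explicitly
     (and it would still need JavaScript to respond to Enter and Space) -->
<div role="button" tabindex="0" aria-label="Close dialog">×</div>

<!-- Native control: role, name, focus, and keyboard activation come built in -->
<button type="button" aria-label="Close dialog">×</button>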

The POUR principles of web accessibility lay a broad (if a bit abstract) foundation. Yet, it can still feel like a lot to consider when facing roadmaps with other priorities. This depth of information and considerations can be enough to turn some people away.

Web accessibility is not all or nothing.

Even small improvements can have a big impact on the accessibility of a product. In the same way software development has moved away from the waterfall approach, we can look at web accessibility with the same incremental mindset.

Even so, sometimes it’s easier to learn more about something you already know than to learn about something anew. At least, that’s what this entire article relies upon.

With slight adjustments to how we approach the design and development of a product, we can create one that more closely aligns with the POUR principles of accessibility but in a way that feels natural and intuitive to what we already know.

Keys To An Accessibility Mindset

There’s a lot to learn about web accessibility. While the POUR principles make the process more approachable, it can still feel like a lot. Instead, by applying these keys to our approach, we can dramatically improve the accessibility of a product and reduce the risk of exhaustive refactors in the future.

Markup Must Communicate As Clearly As The Design

When working from a design, it’s common to build what we see. However, visual design is only one part of creating perceivable content.

Let’s consider the navigation of a website. When a person is on a specific page, we highlight the corresponding link in the navigation with a different background color. Visually, this makes the link stand out. But what about other methods of perception?

The content becomes more perceivable when its markup communicates as clearly as its design.

When dealing with the navigation, what exactly are we communicating with the contrasting styles? We’re trying to say, “this is the page you’re on right now.” While this works visually, let’s look at how our markup can communicate just as clearly.

<a aria-current="page" href="/products">Products</a>

By setting aria-current="page" on the anchor of the current page, we communicate with markup the same information as the design. This makes the content perceivable to assistive technologies, such as screen readers.
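
Taking that a step further, the same attribute can drive the visual styling, so the design and the markup can never drift apart. The page names and colors in this sketch are placeholders:

<style>
  /* Style the current page from the same attribute that informs assistive technologies */
  nav a[aria-current="page"] {
    background-color: #eef3ff;
    font-weight: 700;
  }
</style>

<nav aria-label="Main">
  <a href="/">Home</a>
  <a aria-current="page" href="/products">Products</a>
  <a href="/contact">Contact</a>
</nav>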

In this demo, we’ll hear the difference perceivable markup can make.

Even though navigation items often look like buttons, we understand that they function as links or anchors instead. This is the perfect example of marking up an element based on its function and not its appearance.

When using an anchor tag, we receive several expected functional benefits by default. The anchor will support keyboard focus. Hovering or focusing on an anchor will reveal the URL to preview. Lastly, whether with a keyboard shortcut or through the context (right-click) menu, a link can be opened in a new window or tab.

If we marked up a navigation item like it appeared, as a button, we would lose the last two expected behaviors of anchor tags. When we break the expectations of an element, accessibility will suffer the most.

The following demo highlights the functional differences when using the a, button, and div elements as a link. By navigating the demo with our keyboard, we can see the differences between each variation.
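
As a rough static sketch of that comparison (the URL and inline handlers are hypothetical), only the anchor keeps all of the expected link behaviors:

<!-- Anchor: keyboard focusable, previews its URL, can be opened in a new tab -->
<a href="/products">Products</a>

<!-- Button: keyboard focusable, but loses the URL preview and "open in new tab" behaviors -->
<button type="button" onclick="location.href='/products'">Products</button>

<!-- Div: not focusable or announced as interactive without extra role, tabindex, and key handling -->
<div onclick="location.href='/products'">Products</div>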

Consider a mockup of a screen showing altitude and ground speed, with imperial and metric unit options where the active option is indicated only by color. Without first looking at the altitude and ground speed values, I couldn’t tell which system was active. Maybe the imperial option was active since it was the same color as the data. But maybe the metric option was active because it was a different color.

While it may take us a moment to figure out which option is active, it’s an unnecessary one caused by indicating a state with only color.

In the following mockup, we underline the active option and increase its font weight. With these details, it’s now easier to understand the active state of the screen.
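
A hedged sketch of that kind of mockup might layer an attribute, an underline, and a heavier font weight on top of the color change; the class names and colors here are illustrative only:

<style>
  .unit-toggle button { color: #8a93a6; }

  /* The active option is communicated by state, color, underline, and weight */
  .unit-toggle button[aria-pressed="true"] {
    color: #111111;
    text-decoration: underline;
    font-weight: 700;
  }
</style>

<div class="unit-toggle" role="group" aria-label="Units">
  <button type="button" aria-pressed="true">Imperial</button>
  <button type="button" aria-pressed="false">Metric</button>
</div>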

So much of creating perceivable content comes down to communicating in layers. When we write perceivable markup, we’re creating an extra layer of information. Designing is no different. If we indicate a state with only color, that’s one layer. When we add an underline and font weight, we add additional layers of communication.

People learn and experience in different ways. Consider a book that has an audio version and a movie adaptation. Some people will read the book. Others will listen to it. Others still will watch the movie. When we communicate in layers, more people benefit.

Review

Most people will agree that web accessibility is important. But they will also agree that it can be difficult. With so many combinations of hardware and software and so many nuances with each, accessibility can feel overwhelming.

It’s easy to become lost in the weeds of code samples and articles trying to help. One article may suggest an approach, while a second article suggests another. If we’re not able to test each scenario ourselves, it can often feel like guessing. Guessing can be disheartening, even discouraging. It can turn people away from accessibility.

Instead, we can have a dramatic impact on the accessibility of our work by not focusing on specific details but by adjusting how we approach a design from the start. One of the most challenging areas of accessibility is knowing when and where it’s needed. With the keys to an accessibility mindset, we can identify those areas and understand what they need. We may not know how to provide a perceivable or operable experience, but it’s easier to find the answer when you understand the question.

I should note, though, that applying these keys will not ensure your work is accessible. Will it make a positive impact? Yes. But accessibility extends far beyond design and development. For as long as a product is changing, a commitment to accessibility must remain at every step and in every role, from leadership on down.

Ensuring markup communicates as clearly as its design will help provide perceivable content. Writing functional markup instead of purely visual markup will help make that content operable. If the functional markup cannot be styled, then return to the first key and make it perceivable.

Remember, creating an accessible experience for some doesn’t take away from others.

If we think back to the crosswalk example, who are some people who benefit from their design? Of course, those who are blind, even partially, can benefit. But what about a person looking down at their phone? The audible cue can grab their attention to let them know when it’s safe to cross. I’ve benefited from crosswalks in this way. How about a parent using the lights to teach their child how to cross? Everybody can benefit from the accessible design of a crosswalk. Of course, if a person wants to cross when they feel comfortable, regardless of the state of the crosswalk, they can. The accessible design does not prevent that experience. It enables that experience for others.

Accessible design is good design, and it all starts with our mindset.
