Using Automated Test Results To Improve Accessibility

A cursory Google search returns a treasure trove of blog posts and articles espousing the value of adding accessibility checks to the testing automation pipeline. These articles are rife with tutorials and code snippets demonstrating just how simple it can be to grab one’s favorite open-source accessibility testing library, jam it into a Cypress project, and presto chango, you’re shifting left and accessibility has been achieved… right?

Unfortunately, no, because the actual goal of shifting left is acting on results in a consistent, repeatable process, not just injecting more testing. Unlike the aforementioned treasure trove of blog posts about how to add accessibility checks to testing automation, there is a noticeable dearth of content focused on how to leverage the results from those accessibility checks to drive change and improve accessibility.

With that in mind, the following article aims to fill that gap by walking through a variety of ways to answer the question of “What’s next?” once the testing integration is complete.

Status Quo

The confluence of maximum scalability and accessibility as requirements has brought most modern-day digital teams to the conclusion that the path to sustainable accessibility improvements requires a shift left with accessibility. Not surprisingly, the general agreement on the merits of shifting left has led to a tidal wave of content focused on how important it is to include accessibility checks in DevOps processes, like frontend testing automation, as a means to address accessibility earlier on in the product life cycle.

Unfortunately, there has yet to be a similar tidal wave of content addressing the important next steps of how to effectively use test results to fix problems and how to create processes and policies to reduce repeat issues and regression. This gap in enablement creates the problem that exists today:

The dramatic increase in the amount of accessibility testing performed in automation is not correlating to a proportional increase in the accessibility of the digital world.

Problem

The problem with the status quo is that without guidance on what to do with the results, increased testing does not correlate with increased accessibility (or a decrease in accessibility bugs).

Solutions

In order to properly tackle this problem, development teams need to be enabled and empowered to make the most of the output from automated accessibility testing. Only then can they effectively use the results to translate the increase in accessibility testing in their development lifecycle to a proportional decrease in accessibility issues that exist in the application.

How can we achieve this? By combining strategically positioned, mindfully structured quality gates within the CI/CD pipeline with freely available tools and technologies that help remediate bugs efficiently when they are uncovered, your development team can be well on its way to effectively using automated accessibility results. Let’s dive into each of these ideas!

Quality Gates

Creating a quality gate is an easy and effective way to automate an action on your project when code is committed. Most development teams already create gates that check that there are no linting errors, that all test cases have passed, or that the project builds without errors. Automated accessibility results can fit right into this same model with ease!

Where Do The Gates Exist?

For the most part, the two primary locations for quality gates within the software development lifecycle (SDLC) are during pull requests (PRs) and build jobs (in CI).

With pull requests, one of the most commonly used tools is GitHub Actions, which allows development teams to automate a set of tasks that should be completed or checked when code is committed or deployed. In CI jobs, the tool’s built-in functionality (Azure, Jenkins) is used to create a script that checks whether test cases or scenarios have passed. So, where does it make sense to have one for your team?

It all depends on the level at which development teams want to put a gate in place for accessibility testing results. If the team is doing more linting and component-level testing, the accessibility gate makes the most sense at the pull request level. If the automated test runs at an integration level, meaning against a fully built site that is ready for deployment, then the gate can be placed in a CI job.

Types Of Gates

There are two different ways that quality gates can operate: a soft check and a hard assertion.

A soft check is relatively simple by definition: it looks only at whether or not the accessibility tests were executed. That is it! If the accessibility checks were run, then the test passes. In contrast, assertions are more specific and stringent about what is allowed to pass. For example, if my accessibility test case runs and finds even ONE issue, the assertion fails, and the gate will report it as not passed.
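As a rough illustration, here is what both styles can look like with Cypress and the cypress-axe plugin; the page URL is a placeholder, and the fourth argument to cy.checkA11y(), skipFailures, is what separates a soft check from a hard assertion:

// Soft check vs. hard assertion with cypress-axe (a sketch, not a drop-in file).
describe('accessibility gate', () => {
  beforeEach(() => {
    cy.visit('/'); // placeholder: the page under test
    cy.injectAxe(); // load the axe-core engine into the page
  });

  it('soft check: only verifies the scan ran, never fails the build', () => {
    // skipFailures: true logs violations instead of failing the test
    cy.checkA11y(null, null, (violations) => {
      console.log(`${violations.length} accessibility violation(s) found`);
    }, true);
  });

  it('hard assertion: fails if even one issue is found', () => {
    cy.checkA11y(); // default: the test fails when any violation is detected
  });
});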

So which one is most effective for your team? If you are looking to get more teams to buy into accessibility testing as a whole, a best practice is not to throw a hard assertion at them right away. Teams initially struggle with added tasks or requirements, and accessibility is no different. Starting with a soft gate gives teams time to see what the requirement will be and what will be expected of them.

Once a certain amount of time has passed, that soft gate can switch to a hard assertion that will not allow a single automated issue out the door. However, if your team is mature and has been using accessibility automation for a while, a hard assertion may be used from the start, as they already have experience with it.

Creating Effective Gates

Whether you are using a soft or a hard gate, you have to create requirements that govern what the quality gate does with the automated accessibility results. Simply stating, “The accessibility test case failed,” is not the most effective way to use those results. Data-driven gates, meaning gates based on a specific piece of data from the results, are more effective and can be matched to your development team’s or organization’s accessibility goals.

Here are three methods of applying assertions to govern accessibility quality (a sketch of the first method follows the list):

  • Issue severity
    Pass or fail based on the existence or count of issues of a specific severity (Critical, Serious, and so on).
  • Most common issues
    Pass or fail based on the existence or count of specific issue types known to be most common (either globally or for your organization).
  • Critical or targeted UI/UX
    Do these bugs exist in high-traffic areas of the application, or do they directly impede a user along a critical path through the UX?
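As a sketch of the first method, a small script in the CI job can parse the scan output (assumed here to be axe-core-style violations, each carrying an impact field) and fail the build only for the severities you choose:

// Hypothetical severity gate: exit non-zero only for critical/serious issues.
const BLOCKING_IMPACTS = ['critical', 'serious'];

function severityGate(violations) {
  const blockers = violations.filter((v) => BLOCKING_IMPACTS.includes(v.impact));
  if (blockers.length > 0) {
    blockers.forEach((v) => console.error(`[${v.impact}] ${v.id}: ${v.help}`));
    console.error(`Gate failed: ${blockers.length} blocking issue(s).`);
    process.exit(1); // a non-zero exit code fails the CI job
  }
  console.log('Gate passed: no critical or serious issues.');
}

// Assumed artifact path written to disk by a previous pipeline step.
severityGate(require('./a11y-results.json').violations);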

Fixing Bugs

The creation and implementation of quality gates is an essential first step, but unfortunately, this is only half the battle. Ultimately, a development organization needs to be able to fix the bugs found at the various quality gate inspection points. Otherwise, the application’s quality will never improve, and nothing will clear the gates that were just put in place. What a terrifying thought that is.

In order to translate the adoption of the quality gates into improved accessibility, it is vital to be able to make effective use of the accessibility test results and leverage tools and technologies whenever possible to help drive remediation, which eliminates accessibility blockers and ultimately creates more inclusive experiences for users.

Accessibility Test Results

There is a common adage that “there is no such thing as bug-free software,” and given that accessibility conformance issues are bugs, this axiom applies to accessibility as well. As such, it is absolutely necessary to be able to clearly prioritize and triage accessibility test results in order to apply limited resources to seemingly unlimited bugs to fix them in as efficient and effective a way as possible.

It is helpful to have a few prioritization metrics to assist in the filtration and triage work when working with test results. Typically, context is an effective top-level filter, which is to say, attacking bugs and blockers that exist in high-traffic pages or screens or critical user flows is a useful way to drive maximal impact on the user experience and the application at large.

Another common filter, and one that is often secondary to the “context” filter mentioned above, is to prioritize bugs by their severity, which is to say, the impact on the user caused by the bug’s existence. Most free or open-source automated accessibility tools and libraries apply some form of issue severity or criticality label to their test results to help with this kind of prioritization.

Lastly, as a tertiary filter, some development teams are able to organize these bugs or tasks by thinking about the level of effort to implement a fix. This last filter isn’t something that will commonly be found in the test results themselves. Still, developers or product managers may be able to infer a level of effort estimation based on their own internal understanding of the application infrastructure and underlying source code.

Thankfully, accessibility test results are fairly consistent regardless of which library generates them. They generally provide details about which specific checks failed; where the failures occurred, in terms of page URL and often a CSS or XPath selector and the specific component HTML; and, finally, actionable recommendations on how to fix the components that failed. That way, a developer always has a result that clearly states what’s wrong, where it’s wrong, and how to fix it.
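For example, a single axe-core violation (abridged here) carries all three pieces of information: what failed, where it failed, and how to fix it.

// One axe-core violation object, abridged; the field names are real axe-core output.
const violation = {
  id: 'image-alt',                  // what check failed
  impact: 'critical',               // severity label used for prioritization
  help: 'Images must have alternate text',
  helpUrl: 'https://dequeuniversity.com/rules/axe/4.7/image-alt', // how to fix it
  nodes: [{
    html: '<img src="hero.png">',   // the offending component HTML
    target: ['#main > img'],        // where: a CSS selector to the element
  }],
};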

In the above ways, developers can clearly stack-rank and prioritize the tasks that come out of automated accessibility test results. The test results themselves are typically designed to be clear and actionable so that each task can be remediated in a timely fashion. Again, the focus here is to be able to effectively deliver maximal impact with limited resources.

Helpful Tools

The above strategies are well and good in terms of having a clear direction for attacking known bugs within a project. Still, it can be daunting to figure out whether one’s remediation actually worked, or, further, to chart a path forward that prevents similar issues from recurring. This is where a number of free tools from the community can come into play to support and empower development organizations to expedite remediation and enable validation of fixes, which ultimately improves downstream accessibility while maintaining development velocity.

One such family of free tools is the accessibility browser extension. These free tools can help teams locate, fix, and validate the remediation of accessibility bugs. It is likely that whatever accessibility library is being used in the CI/CD pipeline has an accompanying (and free) browser extension that can be used in local development environments. A few well-known examples of browser extensions include:

  • axe DevTools (built on the same axe-core engine many CI integrations use);
  • WAVE by WebAIM;
  • Accessibility Insights for Web.

Browser extensions allow a developer to quickly and easily scan a page in the browser and identify issues on it, or, as in the case described above, validate that an issue detected during the testing automation process and since remediated no longer exists (validation!). Browser extensions are also a fantastic tool during active development for finding and fixing bugs before code gets committed. Often, they are used as a quality check during a pull request approval process, which can help prevent bugs from making their way downstream.

Another group of free tools that can help developers fix accessibility bugs is linters, which integrate with the developer’s integrated development environment (IDE) and automatically identify, and sometimes even automatically remediate, accessibility bugs in the actual source code before it compiles and renders into HTML in a browser.

Linters are fantastic because they function much like a spell checker in a document editor such as Microsoft Word: largely automated, requiring little to no effort from the developer. The downside is that linters typically have a limited number of reliable accessibility checks they can run at the point of source code editing. Here are some of the top accessibility linters:

  • eslint-plugin-jsx-a11y for React/JSX projects;
  • axe Accessibility Linter for Visual Studio Code.
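For instance, wiring eslint-plugin-jsx-a11y into a React project takes only a few lines of ESLint configuration. A minimal sketch:

// .eslintrc.js — enable the recommended accessibility rules for JSX.
module.exports = {
  plugins: ['jsx-a11y'],
  extends: ['plugin:jsx-a11y/recommended'],
  rules: {
    // Tighten a single check beyond the recommended preset if desired:
    'jsx-a11y/alt-text': 'error',
  },
};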

Equipping a development team with browser extensions and linters is a free and fast way to empower them to find and fix accessibility bugs immediately. The tools are simple to use, and no special accessibility training is required to execute the tests or consume and action the results. If the goal is to get farther faster with regard to actioning automated accessibility test results and improving accessibility, the adoption of these tools is a great first step.

The Next Level

Now that we have strategies for using results to improve accessibility at an operational level, what’s next? How can we ensure that the whole organization knows accessibility is a practical piece of our development lifecycle? How can we build accessibility into our regression testing so that issues are not reintroduced?

Codify it!

One way to truly ensure that the practices created above are followed on a daily basis is to bring accessibility into your organization’s code policy. Establishing this means that accessibility will be included throughout the SDLC as a foundational requirement and not an optional feature.

Although putting accessibility into policy can take a while to achieve, the benefits are immeasurable. It creates a set of clearly defined, established accessible coding practices and makes accessibility part of the acceptance criteria, or the definition of “done,” at the company level. We can use the automated accessibility results to drive this policy and to ensure that teams are doing full testing, using gates, and fixing the issues the policy covers!

Automate it!

Most automated accessibility testing libraries are standard out-of-the-box libraries that test generically for accessibility issues on the page. They typically catch around 40% of issues, which is a good amount. However, there is a way to write automated accessibility tests that go above and beyond!

Accessibility regression scripts allow you to check accessibility functionality and markup to ensure that the contents of your page behave the way they should. Will this guarantee the page works with a screen reader? Nope. But it will ensure that its accessible functionality is working properly.

For example, let’s say you have an expand/collapse section that shows extra details when you click the button. Automated accessibility libraries would be able to check that the button has accessible text and maybe that it has a focus indicator. With a regression script, you could check the following (a sketch follows the list):

  • It works with a keyboard (Enter and Space);
  • aria-expanded="true"/"false" is properly set on the button;
  • The content in the collapsed section is properly hidden from screen readers.
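A sketch of those checks in Cypress might look like the following; the data-testid selectors are hypothetical hooks for the disclosure button and its content:

// Regression checks for an expand/collapse disclosure (illustrative selectors).
it('disclosure exposes its state to assistive technology', () => {
  cy.visit('/'); // placeholder: the page with the disclosure widget
  cy.get('[data-testid="details-toggle"]')
    .should('have.attr', 'aria-expanded', 'false') // collapsed by default
    .focus()
    .type('{enter}') // operable with the keyboard
    .should('have.attr', 'aria-expanded', 'true'); // state flips when expanded
  cy.get('[data-testid="details-content"]').should('be.visible');
});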

Doing this on key components can help ensure that the markup is properly set for assistive technology, and if an issue does appear, it becomes easier to determine whether it lies in your code or is potentially a bug in the assistive technology itself.

Conclusion

The “shift left” movement within the accessibility industry over the last few years has done a lot of good in terms of generating awareness and momentum. It has helped engage and activate companies and teams to actually take action to impact accessibility and inclusion within their digital properties, which in and of itself is a victory.

Even so, the actual impact on the overall accessibility of the digital world will remain somewhat limited until teams are not only empowered to execute tests in efficient ways but also enabled to effectively use the test results to govern overall quality, drive rapid remediation, and ultimately put process and structure in place to prevent regression.

In the end, the goal is really more than simply shifting left with accessibility, which often amounts to taking the testing bottleneck from the QA stage of the SDLC and simply dragging it left and upstream into the CI/CD pipeline. If sustainable digital accessibility transformation is the goal, what is really needed is to decentralize the accessibility work and democratize it across the entire development team (and hopefully into design as well!) so that everyone participates in the process.

The huge increase in automated accessibility testing adoption is a wonderful first step, but ultimately, its impact is limited if we don’t know what to do with the results. If teams better understand how to use these test results, then the increase in testing will, by default, increase the accessibility of the end product. Simple gatekeeping, effective tool use, and a mindful approach can have a major impact and lead to a more accessible digital world for all.

Related Reading on Smashing Magazine

How to Hide the Title for Selective WordPress Posts and Pages

Do you want to hide the title for selective WordPress posts and pages?

Titles can be helpful for both search engines and visitors, but not every page needs to display a title depending on its design.

In this article, we will show you how to hide the title for specific WordPress posts and pages.

How to Hide Title for Selective WordPress Posts and Pages

Why Hide the Title on Specific WordPress Posts or Pages?

When you create a WordPress page or post, the first thing you’ll see is an ‘Add title’ field where you type your title.

Creating a WordPress page or post title

Most WordPress themes show this title at the top of the page or post. A descriptive, relevant title can let visitors know they’re in the right place and what to expect from this page.

Titles may be helpful, but not every page or post needs a title. Your website’s homepage is one common example.

At other times, you may want to show the page’s title in a different area. For example, you might start your landing page with an eye-catching banner and then show the title further down the page.

In this guide, we’ll be covering three different methods to hide the post or page title in WordPress. Simply click the links below to jump to the method you prefer.

Method 1: Remove Post Title Using Full Site Editor

If you’re using WordPress 5.9 or later, and have a WordPress theme that supports full site editing, then you can use this method to remove the title from all posts or all pages.

Not sure if your theme supports full site editing?

If it does, then you’ll see the menu option Appearance » Editor available in your WordPress dashboard.

Appearance Editor - full site editing menu

After clicking on ‘Editor’, the full site editor will launch.

From here, you’ll need to select the template you want to edit by clicking on the dropdown at the top of the page, and then clicking on ‘Browse all templates’.

Browse templates

In this example, we’ll edit the Single Post template so that we can hide all our blog post titles.

To hide the title, first you’ll need to click on the blog post title. Then, simply click on the three dots options menu and select the ‘Remove Post Title’ option at the bottom.

Full site editor - remove post title

Don’t forget to click the Save button at the top of the screen after you’re done customizing the template.

That’s it, you’ve hidden the title on all your blog posts.

If you’d like a way to hide the title only on specific posts or pages, the next method should work for you.

Method 2: Hiding Selective WordPress Titles Using CSS

You can hide a page or post’s title by adding custom CSS code to the WordPress Customizer. This method simply hides the title from your visitors, but it still loads in the page’s HTML code.

This means that search engines can still use the title to help them understand your page’s contents, which is good for your WordPress website’s SEO and can help you get more traffic.

We’ll show you how to hide the title on specific posts or pages, or on all your posts and pages.

How to Hide the Title on a Specific WordPress Post or Page With CSS

To hide a page or post’s title using CSS, you just need to know its ID.

In your WordPress dashboard, either go to Posts » All Posts or Pages » All Pages. Then find the page or post where you want to hide the title.

You can now open this post or page for editing.

The WordPress page and post editor

Now simply take a look at the URL in your browser’s address bar.

You should see a ‘post=’ section followed by a number. For example, ‘post=100’.

Getting a WordPress post ID

This is your post’s ID. Make a note of this number, as you’ll be using it in your CSS code.

You can now go to Appearance » Customize.

Accessing the WordPress Customizer

This launches the WordPress Customizer.

In the sidebar, simply click on Additional CSS.

Adding CSS to your WordPress website

Now scroll to the bottom of the sidebar. 

You should now see a small text editor. This is where you’ll type your CSS code.

The WordPress CSS text editor

If you want to hide the title for a post, you’ll need to use the following code.

Just make sure you replace the ‘100’ with the post ID you got in the previous step.

.postid-100 .entry-title {
    display: none;
}

If you want to hide a page’s title, you’ll need to use some slightly different code.

Once again make sure you replace the ‘100’ with your real page ID.

.page-id-100 .entry-title {
    display: none;
}

Next, just scroll to the top of the page.

You can then click on the blue Publish button.

Publishing your custom CSS

Now if you check this page or post, the title should have disappeared. 

Is the title still there? 

If this method hasn’t worked for you, your WordPress theme may be using a different CSS class. This means your page or post ID will be different from the number shown in its URL. 

To get the correct ID, you’ll need to use your browser’s developer console. 

To start, head over to the page or post on your WordPress website. You can then open your browser’s developer console. 

This step will vary depending on which web browser you’re using. For example, if you have Chrome then you can use the Control+Shift+J keyboard shortcut on Windows, or the Command+Option+J shortcut on Mac.

Chrome users can also right-click (Control+click on Mac) anywhere on the page or post and then select Inspect.

Chrome's inspection tool

If you’re unsure how to open the developer console, you can always check your browser’s website or official documentation for more information.

In the developer console, click on the three dotted icon. You can then select ‘Search.’

The Chrome developer console

You should now see a search bar towards the bottom of the developer console.

In this bar, type <body class, then simply press the Enter key on your keyboard.

Finding the post id in the body class

If you’re looking at a WordPress page, you should see something similar to the following.

<body class="page-template-default page page-id-78 logged-in admin-bar 
no-customize-support wp-embed-responsive is-light-theme no-js singular">

In the sample code above, you can see that the ‘page-id’ value is 78.

If you’re inspecting a WordPress post, the console should show something like:

<body class="post-template-default single single-post postid-100 single-format-standard logged-in admin-bar no-customize-support wp-embed-responsive is-light-theme no-js singular">

In that example, the ‘postid’ value is 100. You can now use this value with the CSS code we provided in the previous step.

Simply add this code to your website using the WordPress Customizer, following the process described above.

You can now take a look at the page or post. The title should have vanished.

How to Hide the Title on All Posts or Pages with CSS

To hide the titles for all your pages and posts, copy/paste the following into the text editor.

.entry-title {
    display: none;
}

Do you want to hide the titles for all your pages, but not your posts? To hide all the page titles, copy/paste the following into the small text editor.

.page .entry-title {
    display: none;
}

Another option is hiding the title for all of your posts. You can do this using the following CSS.

.post .entry-title {
    display: none;
}


Method 3: Hiding Selective WordPress Titles Using a Plugin

You can easily hide the title for selective posts and pages using Hide Page And Post Title. This free plugin lets you hide the title of any page, post, or even custom post type.

First you’ll need to install and activate the Hide Page And Post Title plugin. If you need help, you can follow our tutorial on how to install a WordPress plugin.

Upon activation, open the page, post or custom post you want to edit.

The WordPress Posts page

Now simply scroll to the bottom of the right sidebar.

Here you’ll find a new ‘Hide Page and Post Title’ box.

How to hide a WordPress page or post title

To hide the title, just click to select the ‘Hide the title’ checkbox. You can then update or publish this post as normal.

That’s it! If you visit the page you’ll notice that the title has disappeared.

At some point you may need to restore this page or post’s title.

This is easy. Just open the page or post for editing. Then click to deselect the same ‘Hide the title’ checkbox. 

Don’t forget to click on the Update button at the top of the screen. Now if you visit this page, the title should have reappeared.

Method 4: Hiding Selective WordPress Titles Using SeedProd

Another option is to hide the title using a page builder plugin.

SeedProd is the best WordPress page builder plugin on the market. You can use this plugin to easily create custom pages or even create your own WordPress theme.

This means you can easily hide the title on a custom page design or your theme.

SeedProd comes with a template library of more than 150 templates you can use as a starting point for your page designs. Let’s see how easy it is to remove the title from one of these theme templates.

In your WordPress dashboard go to SeedProd » Template Builder. You can then click on the Themes button.

The SeedProd page builder plugin

This launches the SeedProd template library. You can now browse through all of the different designs.

To take a closer look at a template simply hover your mouse over it. Then click on the magnifying glass icon. 

The SeedProd template library

This will open the template in a new tab. 

When you find a template that you want to use, hover your mouse over that template. Then simply click on the checkmark icon.

Selecting a SeedProd WordPress template

This adds all of this template’s designs to your WordPress dashboard. 

There are usually different designs for different types of content. 

The SeedProd theme builder

You can use these templates to hide the title for the different content types. For example, many SeedProd templates have a separate design for the homepage.

To hide the title for your homepage, you would simply need to edit the Homepage template.

A SeedProd homepage template

To hide the title for all your posts, you’ll typically need to edit the Single Post template. 

Meanwhile, if you want to hide the title from your pages, you’ll usually edit SeedProd’s Single Page template.

A SeedProd single post template

To edit a template hover your mouse over it. 

You can then go ahead and click on the Edit Design link.

Editing a SeedProd template design

This opens this design in the SeedProd drag and drop editor. To hide the title, find either the Post or Page Title. 

Once you spot this title, give it a click. SeedProd’s sidebar will now show all of the settings for the selected area.

At the top of this panel you should see either Post Title or Page Title.

Removing the post title using SeedProd

After confirming that you’ve selected the right area, hover over the Post Title or Page Title in the main SeedProd editor.

You should now see a row of controls. 

Hiding the post or page title with SeedProd

To remove the title from this design just click on the Trash icon.

SeedProd will ask whether you really want to delete the title. To go ahead and remove it, simply click on ‘Yes, delete it!’

Editing your WordPress theme with SeedProd

The title will now disappear from your design. 

To see how this will look on your website click on the Preview button.

Previewing your custom WordPress theme

When you’re happy with your design click on the Publish button.

Depending on how your site is set up, you may need to remove the title from some additional templates. For example you might want to hide the title for all your posts and pages. In this case, you would typically need to edit both the Single Post and Single Page templates. 

If you’re unsure, then it may help to review all the designs that make up your theme. To do this, simply go to SeedProd » Theme Builder.

The SeedProd theme builder

You should now see a list of all your different designs. You can now edit any of these templates following the same process described above. 

FAQs About Hiding the Title for Selective Pages and Posts

Before hiding your page or post titles, there are some effects you should think about, such as the impact this action will have on your website’s SEO.

That being said, here are some of the most frequently asked questions about hiding the page and post title. 

Why can’t I just leave the ‘Add title’ field blank? 

When it comes to hiding the title, there seems to be an easy fix. As you’re creating your page, just leave the title field blank.

At first this does seem to fix the problem. WordPress will display this post to visitors without a title. However, there are a few problems.

Firstly, this page or post will appear as ‘(no title)’ in your WordPress dashboard. This makes it more difficult to keep track of your pages. 

If you create lots of different ‘(no title)’ posts, then how do you know which one is your Contact Us page? And which page is your homepage?

Multiple WordPress pages without a title

WordPress also uses the title to create the page’s URL.

If you don’t provide a title, then by default WordPress uses a number instead, such as ‘www.mywebsite/8.’

Visitors often use the URL to help them understand where they are on your WordPress website, so ‘www.mywebsite/8’ isn’t particularly helpful.

This vague URL is not an SEO-friendly permalink, so search engines may have a harder time understanding what your content is about and including it in the relevant search results.

Will hiding the page or post title affect my SEO?

If you prefer to hide a page or post’s title, you’ll want to spend some extra time fine-tuning the rest of your WordPress SEO, including setting an SEO title. This will help ensure that the search engines understand your page’s content, even without the title.

Here you’ll need a good SEO plugin, since WordPress doesn’t let you do this by default.

We recommend using AIOSEO, the best SEO plugin for WordPress on the market. This beginner-friendly SEO toolkit is used by over 3 million websites.

If you need help getting started, then please refer to our guide on how to properly set up All in One SEO in WordPress.

To make sure your titles are optimized, you can see our guide on how to use the headline analyzer in AIOSEO.

We hope this article helped you learn how to hide the title for selective WordPress posts and pages. You can also go through our guide on how to choose the best web design software and our list of the best WordPress landing page plugins.



How to Add Facebook Like Button in WordPress

Do you want to add a Facebook Like button in WordPress?

A Facebook Like button on your WordPress website can make it simple and easy for users to like and share your content, helping you increase engagement and get more followers.

In this article, we will show you how to add the Facebook Like button in WordPress.

How to add Facebook like button in WordPress

Why Add a Facebook Like Button in WordPress?

Facebook is one of the most popular social media platforms in the world. Many businesses use Facebook to connect with their customers and promote their products.

Adding a Facebook Like button to your WordPress website can help drive more engagement. It also encourages people to share your content on their Facebook profiles, attracting new users to your site.

You can use the Facebook Like button to increase your social followers and build a community. It helps raise awareness about your products and services and boosts conversions.

That said, let’s see how you can add a Facebook Like button in WordPress using a plugin or by adding custom code.

Method 1: Add Facebook Like Button in WordPress Using a Plugin

In this method, we will be using a WordPress plugin to add the Facebook Like button. This method is very easy and recommended for beginners.

The first thing you need to do is install and activate BestWebSoft’s Like & Share plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, you can go to Like & Share » Settings from your WordPress admin panel.

Like and share plugin settings

Next, you’ll need to add the Facebook App ID and App Secret. If you don’t have this information, then simply follow the steps below.

How to Create a Facebook App ID and App Secret

Go ahead and click the ‘create a new one’ link under the App ID or App Secret field in the Like & Share plugin.

This will take you to the Meta for Developers website. We suggest opening the website in another tab or window because you’ll need to return to the Like & Share settings page in your WordPress dashboard to enter the App ID and App Secret.

From here, you’ll need to select an app type. Go ahead and choose ‘Business’ as the app type and click the ‘Next’ button.

Choose your app type

Next, you’ll need to provide basic information about your app.

You can enter a display name for your app, and be sure that the correct email address appears in the ‘App contact email’ field. Facebook will automatically pick the email address of the account you’re currently logged in to.

There’s also an optional setting to choose a business account. You can leave this on ‘No Business Manager account selected’ and click the ‘Create app’ button.

Enter basic information for app

A popup window will now appear where Facebook will ask you to re-enter your password.

This is for security purposes to stop malicious activity on your account. Go ahead and enter your Facebook account password and click the ‘Submit’ button.

Reenter your Facebook password

After that, you’ll see your app dashboard.

From here, you can head to Settings » Basic from the menu on your left.

Go to basic settings page

On the Basic settings page, you will see the ‘App ID’ and ‘App Secret.’

You can now enter this information in the Like & Share plugin settings in your WordPress dashboard.

Copy the app ID and secret

Finish Up Customizing Your Facebook Like Button

First, copy the ‘App ID’ and head back to the tab or window where you have the Like & Share » Settings page open. Simply enter the ‘App ID’ in its respective field.

Now repeat the step by copying the ‘App Secret’ from the Meta for Developers page and pasting it into the Like & Share plugin settings.

Customize your Facebook like button

Once you’ve done that, you can choose whether to show the Facebook Like button along with the Profile URL and Share buttons.

There are also settings to edit the Facebook Like button’s size, its position before or after the content, and alignment.

More customization options

If you have enabled the Profile URL button, then you can scroll down to the ‘Profile URL Button’ section and enter your Facebook username or ID.

When you’re done, don’t forget to save your changes.

Now, the plugin will automatically add a Facebook Like button to your WordPress website and position it based on your settings.

You can also use the [fb_button] shortcode to add the Facebook Like button anywhere on your site.

That’s all! You can now visit your site and see the Like button on each post.

Facebook like button preview

Method 2: Manually Add Facebook Like Button in WordPress

Another way to add a Facebook Like button is by using custom code. However, this method requires you to add the code directly in WordPress, so we only recommend it for people who are comfortable editing code.

With that in mind, we are going to use the free WPCode plugin to do so, which makes it simple for anyone to add code to their WordPress blog.

First, you need to visit the ‘Like Button’ page on the Meta for Developers website and scroll down to the ‘Like Button Configurator’ section.

Get code from Facebook developer site

Next, you can enter the URL of your Facebook page in the ‘URL to Like’ field. This will be the page you’d like to connect with the Facebook Like button.

After that, simply use the configuration to choose the Like button layout and size. You will also see a preview of the Like button.

Once satisfied with the preview, click on the ‘Get Code’ button.

This will bring up a popup showing you two code snippets under the ‘JavaScript SDK’ tab.

Copy the SDK code
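For orientation, the first of the two snippets bootstraps Facebook’s JavaScript SDK. Your generated code will differ, but the SDK initialization generally looks something like this (the appId and version values are placeholders; use the ones from your own popup):

// Sketch of the Facebook JavaScript SDK initialization (values are placeholders).
window.fbAsyncInit = function () {
  FB.init({
    appId: 'YOUR_APP_ID', // the App ID you created earlier
    xfbml: true,          // parse social plugins, like the Like button, on load
    version: 'v17.0'      // Graph API version from your generated snippet
  });
};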

Please note that if you directly add these code snippets to your WordPress theme, it may break your website. Plus, the code snippets will be overwritten when you update the theme.

An easier way of adding code to your site is by using the WPCode plugin. It lets you paste code snippets into your website and easily manage custom code without having to edit theme files.

First, you’ll need to install and activate the WPCode free plugin. For more details, please see our guide on how to install a WordPress plugin.

Upon activation, you can head to Code Snippets » Header and Footer from your WordPress dashboard. Now, you’ll need to copy the first code snippet, the one Facebook says should load right after the opening <body> tag.

Simply copy the code and enter it in the ‘Body’ section. Don’t forget to click the ‘Save Changes’ button.

Add first code to header section

Next, you need to copy the second piece of code and paste it into your WordPress site to display the Like button.

To start, you can go to Code Snippets » + Add Snippet from your WordPress admin panel or click the ‘Add New’ button.

Click 'Add New Snippet' in WPCode

On the next screen, WPCode will allow you to select a snippet from the pre-built library or add a new custom code.

Go ahead and choose the ‘Add Your Custom Code (New Snippet)’ option and click the ‘Use snippet’ button.

Add custom code in WPCode

After that, you can give your custom code a name and enter the second code snippet under the ‘Code Preview’ section.

Make sure to click the ‘Code Type’ dropdown menu and select ‘HTML Snippet’ as the code type.

Enter second code and select code type

Next, you can scroll down to the ‘Insertion’ section and select where you’d like the Facebook Like button to appear. For example, let’s say you want it to appear before the content.

Simply click the ‘Location’ dropdown menu and choose the Insert Before Content option under Page, Post, Custom Post Types.

Select location of like button

Once you’re done, you can click the ‘Save Snippet’ button.

You’ll also have to click the toggle and switch it from Inactive to Active.

Save and activate code snippet WPCode

That’s it! A Facebook Like button will appear on your website once the code is in place.

What is Open Graph Metadata & How to Add it to WordPress?

Open Graph is metadata that helps Facebook collect information about a page or post on your WordPress site. This data includes a thumbnail image, post/page title, description, and author.

Facebook is quite smart about pulling in the title and description fields. However, if your post has more than one image, then it may sometimes show an incorrect thumbnail when shared.

If you are already using the All in One SEO (AIOSEO) plugin, then this can be easily fixed by visiting All in One SEO » Social Networks and clicking on the Facebook tab.

Next, click the ‘Upload or Select Image’ button to set a default post Facebook image if your article doesn’t have an open graph image.

Upload default Facebook image

Besides that, you can also configure an open graph image for each individual post or page.

When you’re editing a post, just scroll down to the AIOSEO Settings section in the content editor. Next, switch to the ‘Social’ tab to see a preview of your thumbnail.

Now scroll down to the ‘Image Source’ option, and you can then choose an open graph image for your post.

For example, you can select the featured image, attached image, the first image in the content, or upload a custom image to be used as an open graph thumbnail.

Image source for open graph

For more details and alternate ways to add open graph metadata, see our guide on how to add Facebook Open Graph metadata in WordPress.

We hope this article helped you learn how to add a Facebook Like button in WordPress. You may also want to see our guide on how to register a domain name and our list of the best social media plugins for WordPress.



How to Make Search Your Site’s Greatest Asset

What makes a site truly brilliant? Impressive content? Sophisticated design? User-friendly interface? An effective support system for users, old and new alike? All this and more, my friend. No matter what you choose to build your site around, it can’t exist without a great search solution that helps guide every visitor to what they’re looking for – quickly, efficiently, and with as little effort on the site owner’s part as possible.

In this article, we’ll go through the crème de la crème of the coolest features you can implement on your search with the help of Site Search 360, an easy-to-install and easier-to-maintain app fit for any site builder. Whether you have a HubSpot blog, a knowledge base maintained via Zendesk, a Shopify store, or all three at once, as long as your site’s content is searchable, this app is just what the doctor ordered!

Top 5 search features for your site

Search Result Categorization

It’s highly likely that your site has tons of content that your users might want to search through. Depending on the number of pages you have accumulated over the years, that could require herculean patience. So, the first thing your new search needs is separate tabs to neatly organize all the types of content you offer.

Say you sell a million types of products. You wouldn’t want your users to scroll through all product categories mixed together as they search for their dream pair of shoes. Non-commercial sites can use this nifty technique, too – for instance, to put articles, YouTube videos, and blog posts in their own dedicated tabs. Or, as we call them in the search biz, Result Groups.

Categorization via Result Groups is by no means limited to good old content types. Your search results might include pages from more than a single site – you could, for instance, have several interconnected domains for your primary content, FAQ knowledge base, news, etc. All of them have unique subsets of pages that you’d need your users to be able to search through, and as long as all these sites are included in your Data Sources, you can not only enable extensive cross-domain search but also separate pages from these sites into dedicated groups for easier navigation.

And the best part? You can even manually order these tabs to guide your site visitors to the categories you deem most important.

So, how do you set this up? Easy – just enter the URL patterns of the page subsets you’d like to include in the same tab (or XPaths to specific elements found across all of these pages), give your brand-new Result Group a name, and you’re done:

setting up result group for facetted search

Repeat until all categories are in place.

And here’s what your Result Groups can look like once implemented:

result groups in live search

Pro tip: If you ever feel like adding multiple search boxes to your site, you can limit each of them to specific Result Groups. You’ll then have, say, only products in the search results for the commercial part of the site, FAQ entries on the “About Us” page, etc. Configuration options are close to infinite!
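As a rough sketch, a crawler-based integration is typically driven by a JavaScript configuration object on the page. The option names below (siteId, searchBox, contentGroups) are illustrative assumptions rather than a verbatim copy of the documented API, so check the Site Search 360 docs for the exact keys:

// Hypothetical Site Search 360 configuration; key names are illustrative.
var ss360Config = {
  siteId: 'yoursite.com',         // assumed: the ID of your search project
  searchBox: {
    selector: '#product-search'   // assumed: this box serves the shop pages only
  },
  contentGroups: {
    include: ['Products']         // assumed: limit results to one Result Group
  }
};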

Filters and Ranking

Your search is now organized into tabs. But that’s not the only thing you can do to make navigating your site’s content a piece of cake.

Filters are a must-have when you want your users to be able to narrow down their search to instantly find exactly what they had in mind. Say someone’s looking for articles written by a specific author within a specific date range. With just a few clicks, you can create filters for both of these criteria (or anything in the world really – from prices to locations and beyond).

These bad boys are configured differently depending on the integration. Projects whose search results were generated with a sitemap or through website crawling (low-touch integrations where the only thing we need to index your content is your site domain) work one way; projects where a product feed was involved, turning each product into its own search result (best integrated over our API or through our extensions for various e-commerce platforms such as Shopify, Shopware, Adobe Commerce, and so on), work another.

For crawler-based integrations, filters are configured with Data Points: tidbits of information found across numerous pages that the crawler is pointed to via XPaths, URL patterns, linked and meta data, or even regular expressions. Data Points can be added to search result descriptions (across all pages as well as in specific Result Groups), used to automatically boost certain pages in your search results’ hierarchy, and, of course, they can direct the crawler to your future filter values. All of this can be configured right when a Data Point is created, with a simple tick in the box of your choice.

Here are the settings you can tinker with for each of your filters:

crawler-based filters

And here’s your data point used simultaneously in the description of the product and as a filter:

crawler-based filter example

For e-com, things get even more exciting. Instead of Data Points, we extract and then use Product Facts, aka the various product characteristics (like color, material, vendor, etc.) available in your feed. The process is fully automated – no need to experiment with XPaths and regexes. It also comes with some ecom-exclusive perks such as HEX-coded circles next to “Color” filter values.

An e-com filter configuration could look like this:

color filter with swatches

Another pro tip for you: e-com and regular filters alike (as well as their values) can be reordered, and there’s even an option to exclude specific values from any filter. But the coolest part is that you get to choose how many pages should bear the values of a specific filter before that filter is triggered to pop up in the search. There really isn’t much of a point in showing the filter if it can only be applied to a singular page, now is there?

In action, these filters are impressive to say the least.

ecommerce filters example

Filters are tightly connected to Ranking Strategies.

Crawler-based integrations come with the option to sort results in ascending or descending order by any numeric Data Point such as “Price”. Sorting Options are configured in a very straightforward manner:

price filter and sorting config

And here’s what your search can look like with the same “Product Price” Data Point used as a basis for the Sorting Order, filter, and part of the search snippet.

price filter and sorting option

You can also prioritize different subsets of pages in the search result hierarchy with boosting rules: in bulk, pointing the crawler to the most important pages by putting more value in their URL patterns, or by a specific page element. Pages with query matches in their titles can consequently be ranked higher than those that only have said query matches in other extracted content, and vice versa.

E-com integrations get more variety with strategies based on Product Fact attributes (a single one or several at once) to automatically put products with specific characteristics at the top of your search results. Let’s say you wanted to bring your users’ attention to products of a specific brand that are already available for purchase and popular among your customers. In that case, you could base your Ranking Strategy on three factors: “Brand” (for example, “ASOS”), “Availability” (“In stock”), and “Most Sold Items” (“More is better”). You could also weigh them differently so that “Brand” is the most important characteristic, with “Availability” second and “Most Sold Items” following suit.

Here’s an example of a Ranking Strategy with multiple attributes:

ranking strategy config

Narrowing down the search and automatically ordering your search results to point your users to the coolest content you offer right away? Sign me up!

Custom results and result sets

This is it, folks. The thing you never knew you needed but always did.

Boosting whole subsets of pages is cool, but what if you need more granular control over your result sets? That’s where Result Mappings come into play.

There are two primary ways in which mappings are used – to create custom result sets and to redirect queries to specific URLs.

This is quite simple too: you enter your query of choice (e.g., a popular search term), our system returns every page that matches said query, and you can reorder these search results as you see fit. You can even blend other queries into this set: if the mapping is applied to, say, “jeans”, it can also include pages that would normally be displayed for “pants”, while users who search for “pants” still see the default search results. You can also automatically apply filters (with specific values, should you so choose) and ranking strategies to your mapping.

Here’s a possible configuration for a custom result set:

customized result mappings config example

Redirecting queries to your URL of choice is even easier – you just need to type in the address of the page which will pop up automatically once a query is entered into the search box, and you’re done:

redirect to url mapping setup

Both these types of mappings can be applied to queries indefinitely or within a specific timeframe. Added some exciting content to your website and want your users to see it before the older stuff? Just place those results at the top of the page and have the search return to its default state when your new content no longer needs promotion. Simple, but super effective.

And the best part? Your mappings aren’t limited to pages you have indexed from your site. Your custom result set can include entirely new pages – you’ll just need to enter the desired URL, set up the title, description, and thumbnail of your result, and there it’ll be, totally indistinguishable from other search results as if it were in your Index all along:

custom result config
custom result in live search

Your mapping can also be complemented by a custom banner placed above other results – a fancy way of promoting whatever content is most worthy of your users’ attention. This is where things get a little code-heavy. Banners are configured with HTML templates, so be ready to put your developer hat on – it’s totally worth the effort:

custom banner example

Mappings aren’t the only way of customizing which results are shown for any given query. There are also Dictionary Entries, which allow you to make certain queries synonymous with others.

Intricate control over any page displayed for any query? Don’t mind if we do.

Search design customization

So, you’re done experimenting with your new search features, and the time has come for you to decide how they should look on your site.

Every design-related feature we offer is nested in our “Search Designer” tool, which has three distinct blocks: our default design options (fully configurable by clicking a few buttons, no coding necessary), “Additional CSS” (meant for more intricate design customization), and “Templating” (an impressive HTML- and Mustache-based tool that allows you to apply your site’s existing search design to your Site Search 360 search results for seamless integration with any website theme).

Default design options are the way to go if you don’t feel like getting into coding:

design configured with default site search 360 settings

These options are also categorized according to the functionality they offer:

  • “Main search settings”, aka search box styling (its color and loading animation), selector-based search bar/search result placement, and layout (embedded, in a pop-up window on top of your site page, or in a fullscreen overlay)
  • “Suggestions” are tailored to give you full control over the presentation of your search suggestions – from basic functionality settings, like how many characters need to be typed into the search box before suggestions pop up, to fully customizable suggestion formats: user search history and quick links for when your search box is empty…
Empty state suggestion
  • …alongside autocomplete queries (preconfigured or pulled from your analytics as the ones most typed in by your user base) as well as best-matching results from every Result Group for when a few characters have already been typed in
Autocomplete suggestions
  • “Search result behavior” is for, to no one’s surprise, results (their number per page, optional redirection to a new tab when they’re clicked), Result Groups (shown or hidden, in bulk or per specific group), Filters (shown or hidden, where they’re placed on the search results page, and whether the number of matching results is displayed next to each filter value), and Highlighting (whether matched content is put in bold and, if so, in which color)
  • “Search result layout” is used to configure what (if anything) is displayed in result thumbnails when a page is associated with several images or with no specific image at all, how search results are presented on desktop and mobile (how they’re ordered, and whether each result card should include a URL, image, and description), and whether there should be a CTA button (“Add to Cart” or quite literally anything your heart desires)
  • “Advanced settings” gives you the opportunity to add custom CSS to your search’s design as well as your result/suggestion templates, and this is where things get really exciting

Both “Additional CSS” and “Templating” are essentially pop-up windows on the right side of the screen where you can enter your custom CSS/HTML.

additional css

Keep in mind that Templating necessitates filling numerous properties (result title, its images, URL, description, and applicable Data Points) with syntax similar to the Mustache templating engine:

templating
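As a rough sketch, a Mustache-style result template fills placeholders with each result’s properties; the property names below are illustrative, not SS360’s exact template variables:

// Illustrative Mustache-style result template held in a JavaScript string.
var resultTemplate =
  '<div class="search-result">' +
  '  <img src="{{image}}" alt="">' +              // the result's thumbnail
  '  <h3><a href="{{link}}">{{title}}</a></h3>' + // title linking to the page
  '  <p>{{snippet}}</p>' +                        // description or dynamic snippet
  '</div>';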

You can also use Callbacks to personalize your templates even further:

templating callback

Just compare our default search design:

default search design

To the design templated on a user’s site:

search design with additional css and templating

Total creative freedom with just a tiny bit of coding.

Everything in between

Alright, you caught us. The list of awesome search features you could implement on your site actually goes way beyond our top 5 contenders! To finish off, a lightning round of other functionality settings that will grant you full control over the look and feel of your search.

Analyzing how successful the search is comes later; first off, you’ll need to see which exact pages constitute your search results. Your Index Log (or Product List) is there to help you out with that! Not only will you have a list of the pages our crawler has access to, but each of them will also be equipped with a status indicating whether it has been added to your search results successfully:

index log

Your search also needs variety when it comes to the sources your results are pulled from. Depending on the integration, your primary indexing method could be either the list of pages available on your site or your product feed. Some site builders, like Duda, Lightspeed, or Shopify, allow us to access this content automatically – no action needs to be taken by the user. You can also use multiple indexing methods, peppering FAQ entries, pages from non-indexable domains, and even YouTube videos in your search. Keep in mind that your search results’ formats may go way past simple HTML pages – you also have the option to index PDF, Word, Excel, PowerPoint, and even Open Office documents, so the search can be as diverse as you wish (and with PDF Indexing specifically, you’re also in control of indexing strategies for your PDFs’ titles, content, and images).

Another nifty feature to experiment with? Search Fuzziness, aka how closely search results and search suggestions need to match the entered query – a range from “extremely strict” (the match needs to be perfect) to “get more results” (the most lenient of options requiring just a few symbols of the query to be found in the search result/search suggestion titles and content):

Fuzziness settings

You can also pick and choose the content to be included in the search. Our Content Extraction rules allow you to point our crawler to the exact element (or elements) where your search results’ titles and images are located. The same goes for the content of the page – you can force the crawler to automatically skip specific parts of your site’s pages when configuring your search results.

Search snippets are a bit different, though. By default, snippets are dynamic, their text changing depending on the part of the page’s content where a query match is found (and highlighted). But you can choose to configure static snippets based on your metadata or Open Graph tags if your page descriptions are carefully curated and kept up to date. In that case, the same few paragraphs would show regardless of whether a match is there or not. We usually recommend sticking to dynamic snippets so that your users can instantly see why a certain result popped up in their search.

search snippet settings
dynamic search snippet

Next up – security settings, which allow you to block specific IPs from using the search (or keep them from being logged in your analytics), thus protecting yourself from bot searches and ensuring that internal activity doesn’t pollute your search data.

Last but definitely not least, you’ll need a way to determine how successfully your search is performing. There are three types of data that might prove useful in this regard:

  • Dashboard Analytics present information about how your search is being used, aka how many searches have been performed in a specific timeframe, what the most common queries are, etc. Here you’ll also see queries that returned zero matches (for instance, due to misspellings or lack of relevant site content) and get the opportunity to instantly correct your search result sets by setting up new Result Mappings or Dictionary Entries.
dashboard analytics
  • Search Success Tracking is a score calculated based on the number of users who engage with the search; the frequency at which they click on the search results they’re presented with, abandon their search, and use filters; and the percentage of zero result searches:
search success tracking
  • Search Map shows where in the world your search is being used as well as which queries are most common in any given region:
search map

A lot, right? Well, there can never be too much of a good thing.

Most likely, you’re already on the edge of your seat, desperate to try out all these features yourself. So what are you waiting for?

Start with a free Site Search 360 account today!

If your site’s on the smaller side when it comes to the number of pages to index and the frequency at which the search will be used, you can enjoy all these features (and more) for free. And if you exceed the Free plan limits, Site Search 360 offers a wide variety of custom plans that can accommodate your every need.

If any questions come up, head over to our How-To docs for step-by-step instructions on setting up every search feature mentioned in this article, or reach out to 360’s support.

Happy searching!

Node.js Authentication With Twilio Verify

Building authentication into an application is a tedious task. However, making sure this authentication is bulletproof is even harder. As developers, it’s beyond our control what users do with their passwords, how they protect them, who they give them to, or how they generate them, for that matter. All we can do is get close enough to ensure that the authentication request was made by our user and not someone else. OTPs certainly help with that, and services like Twilio Verify help us generate secure OTPs quickly without having to worry about the underlying logic.

What’s Wrong With Passwords?

Password-based authentication alone has several problems for developers:

  1. Users might forget passwords and write them down (making them steal-able);
  2. Users might reuse passwords across services (making all their accounts vulnerable to one data breach);
  3. Users might choose easy-to-remember passwords, making them relatively easy to guess.

Enter OTPs

A one-time password (OTP) is a password or PIN valid for only one login session or transaction. Since it can only be used once, I’m sure you can already see how the usage of OTPs makes up for the shortcomings of traditional passwords.

OTPs add an extra layer of security to applications, which the traditional password authentication system cannot provide. OTPs are randomly generated and are only valid for a short period of time, avoiding several deficiencies that are associated with traditional password-based authentication.

OTPs can be used to substitute traditional passwords or reinforce the passwords using the two-factor authentication (2FA) approach. Basically, OTPs can be used wherever you need to ensure a user’s identity by relying on a personal communication medium owned by the user, such as phone, mail, and so on.

This article is for developers who want to learn how to:

  1. Build a full-stack Express.js application;
  2. Implement authentication with Passport.js;
  3. Use Twilio Verify for phone-based user verification.

To achieve these objectives, we’ll build a full-stack application using node.js, express.js, EJS with authentication done using passport.js and protected routes that require OTPs for access.

Note: I’d like to mention that we’ll be using some 3rd-party (built by other people) packages in our application. This is a common practice, as there is no need to re-invent the wheel. Could we create our own node server? Yes, of course. However, that time could be better spent on building logic specifically for our application.

Table Of Contents
  1. Basic overview of Authentication in web applications;
  2. Building an Express server;
  3. Integrating MongoDB into our Express application;
  4. Building the views of our application using EJS templating engine;
  5. Basic authentication using Passport.js;
  6. Using Twilio Verify to protect routes.
Requirements
  • Node.js
  • MongoDB
  • A text editor (e.g. VS Code)
  • A web browser (e.g. Chrome, Firefox)
  • An understanding of HTML, CSS, JavaScript, Express.js

Although we will be building the whole application from scratch, here’s the GitHub Repository for the project.

Basic Overview Of Authentication In Web Applications

What Is Authentication?

Authentication is the whole process of identifying a user and verifying that a user has an account on our application.

Authentication is not to be confused with authorization. Although they work hand in hand, there’s no authorization without authentication.

That being said, let’s see what authorization is about.

What Is Authorization?

Authorization, at its most basic, is all about user permissions — what a user is allowed to do in the application. In other words:

  1. Authentication: Who are you?
  2. Authorization: What can you do?
Authentication comes before Authorization.
There is no Authorization without Authentication.

The most common way of authenticating a user is via username and password.

Setting Up Our Application

To set up our application, we create our project directory:

mkdir authWithTwilioVerify
Building An Express Server

We’ll be using Express.js to build our server.

Why Do We Need Express?

Building a server in Node can be tedious, but frameworks make things easier for us. Express is the most popular Node web framework. It enables us to:

  • Write handlers for requests with different HTTP verbs at different URL paths (routes);
  • Integrate with view rendering engines in order to generate responses by inserting data into templates;
  • Set common web application settings — like the port used for connecting, and the location of templates used for rendering the response;
  • Add additional request processing middleware at any point within the request handling pipeline.

In addition to all of these, developers have created compatible middleware packages to address almost any web development problem.

In our authWithTwilioVerify directory, we initialize a package.json that holds information concerning our project.

cd authWithTwilioVerify
npm init -y

In keeping with the Model-View-Controller (MVC) architecture, we have to create the following folders in our authWithTwilioVerify directory:

mkdir public controllers views routes config models

Many developers have different reasons for using the MVC architecture, but for me personally, it’s because:

  1. It encourages separation of concerns;
  2. It helps in writing clean code;
  3. It provides a structure to my codebase, and since other developers use it, understanding the codebase won’t be an issue.

Here is what each directory is for:

  • The controllers directory houses the controllers;
  • The models directory holds our database models;
  • The public directory holds our static assets, e.g. CSS files, images, etc.;
  • The views directory contains the pages that will be rendered in the browser;
  • The routes directory holds the different routes of our application;
  • The config directory holds configuration that is specific to our application.

We need to install the following packages to build our app:

  • nodemon automatically restarts our server when we make changes;
  • express gives us a nice interface to handle routes;
  • express-session allows us to handle sessions easily in our express application;
  • connect-flash allows us to display messages to our users.
npm install nodemon -D

Add the script below to the package.json file to start our server using nodemon.

"scripts": {
    "dev": "nodemon index"
    },
npm install express express-session connect-flash --save

Create an index.js file. We then require the installed packages into index.js and configure them as follows:

const path = require('path')
const express = require('express');
const session = require('express-session')
const flash = require('connect-flash')

const port = process.env.PORT || 3000
const app = express();

app.use('/static', express.static(path.join(__dirname, 'public')))
app.use(session({
    secret: "please log me in",
    resave: true,
    saveUninitialized: true
}));

app.use(express.json())
app.use(express.urlencoded({ extended: true }))

// Connect flash
app.use(flash());

// Global variables
app.use(function(req, res, next) {
    res.locals.success_msg = req.flash('success_msg');
    res.locals.error_msg = req.flash('error_msg');
    res.locals.error = req.flash('error');
    res.locals.user = req.user
    next();
});

//define error handler
app.use(function(err, req, res, next) {
    res.render('error', {
        error : err
    })
})

//listen on port
app.listen(port, () => {
    console.log(`app is running on port ${port}`)
});

Let’s break down the segment of code above.

Apart from the require statements, we make use of the app.use() function — which enables us to use application level middleware.

Middleware functions are functions that have access to the request object, response object, and the next middleware function in the application’s request and response cycle.

Most packages that have access to our application’s state (request and response objects) and can alter those states are usually used as middleware. Basically, middleware adds functionality to our express application.

It’s like handing the application state over to the middleware function, saying: “Here’s the state; do what you want with it, then call next() to pass control to the next middleware.”
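To make this concrete, here’s a minimal custom middleware of our own (purely illustrative, not part of the tutorial’s code) that logs every request before passing control along:

//a tiny application-level middleware: log the request, then move on
app.use(function(req, res, next) {
    console.log(`${req.method} ${req.url}`)
    next()
})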

Finally, we tell our application server to listen for requests on port 3000.

Then in the terminal run:

npm run dev

If you see app is running on port 3000 in the terminal, that means our application is running properly.

Integrating MongoDB Into Our Express Application

MongoDB stores data as documents. These documents are stored in a JSON-like format (internally, MongoDB uses BSON). Since we’re using Node.js, it’s pretty easy to convert data stored in MongoDB to JavaScript objects and manipulate them.

To install MongoDB on your machine, visit the MongoDB documentation.

In order to integrate MongoDB into our express application, we’ll be using Mongoose. Mongoose is an ODM (Object Data Mapper).

Basically, Mongoose makes it easier for us to use MongoDB in our application by creating a wrapper around Native MongoDB functions.

npm install mongoose --save

In index.js, we require mongoose:

const mongoose = require('mongoose')

const app = express()

//connect to mongodb
mongoose.connect('mongodb://localhost:27017/authWithTwilio', 
{ 
    useNewUrlParser: true, 
    useUnifiedTopology: true 
})
.then(() => {
    console.log(`connected to mongodb`)
})
.catch(e => console.log(e))

The mongoose.connect() function allows us to set up a connection to our MongoDB database using the connection string.

The format for the connection string is mongodb://localhost:27017/{database_name}.

mongodb://localhost:27017/ is MongoDB’s default host, and the database_name is whatever we wish to call our database.

Mongoose connects to the database called database_name. If it doesn’t exist, it creates a database with database_name and connects to it.

mongoose.connect() returns a promise, so it’s always a good practice to log a message to the console in the then() and catch() methods to let us know if the connection was successful or not.
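If you prefer async/await over then() and catch(), an equivalent connection sketch (same connection string and options as above) would look like this:

//connecting with async/await instead of then/catch
const connectDb = async () => {
    try {
        await mongoose.connect('mongodb://localhost:27017/authWithTwilio', {
            useNewUrlParser: true,
            useUnifiedTopology: true
        })
        console.log(`connected to mongodb`)
    } catch (e) {
        console.log(e)
    }
}

connectDb()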

We create our user model in our models directory:

cd models
touch user.js

In user.js, we require mongoose and create our user schema:

const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
    name : {
        type: String,
        required: true
    },
    username : {
        type: String,
        required: true
    },
    password : {
        type: String,
        required: true
    },
    phonenumber : {
        type: String,
        required: true
    },
    email : {
        type: String,
        required: true
    },
    verified: Boolean
})

module.exports = mongoose.model('user', userSchema)

A schema provides a structure for our data. It shows how data should be structured in the database. Following the code segment above, we specify that a user object in the database should always have name, username, password, phonenumber, and email. Since those fields are required, if the data pushed into the database lacks any of them, mongoose throws a validation error.

Though you could create schemaless data in MongoDB, it is not advisable to do so — trust me, your data would be a mess. Besides, schemas are great. They allow you to dictate the structure and form of objects in your database — who wouldn’t want such powers?
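To see those powers in action, here’s a quick illustrative snippet (not part of the application’s code): creating a user without the required fields makes mongoose reject the operation with a validation error.

//hypothetical example: an incomplete user fails schema validation
const User = require('./models/user')

User.create({ name: 'Ada' })
    .then(user => console.log(user))
    .catch(err => console.log(err.name)) //logs 'ValidationError'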

Encrypting Passwords

Warning: never store users’ passwords as plain text in your database.
Always encrypt the passwords before pushing them to the database.

The reason we need to encrypt user passwords is this: in case someone somehow gains access to our database, we have some assurance that the user passwords are safe — because all this person would see would be a hash. This provides some level of security assurance, but a sophisticated hacker may still be able to crack this hash if they have the right tools. Hence the need for OTPs, but let’s focus on encrypting user passwords for now.

bcryptjs provides a way to hash users’ passwords and to verify them later (bcrypt hashes are one-way, so the passwords themselves are never recoverable).

npm install bcryptjs

In models/user.js, we require bcryptjs:

//after requiring mongoose
const bcrypt = require('bcryptjs')

//before module.exports
//hash password on save
userSchema.pre('save', async function() {
    //generate a salt, then replace the plain-text password with its hash
    const salt = await bcrypt.genSalt(10)
    this.password = await bcrypt.hash(this.password, salt)
})

//compare a candidate password against the stored hash
userSchema.methods.validPassword = async function(password) {
    return bcrypt.compare(password, this.password)
}

The code above does a couple of things. Let’s see them.

The userSchema.pre('save', callback) is a mongoose hook that allows us to manipulate data before saving it to the database. In the callback, we generate a salt with bcrypt.genSalt(), hash the password with bcrypt.hash(), and replace this.password (the plain-text password on the document) with the resulting hash. If either step fails, the rejected promise aborts the save.

Next, mongoose provides a way for us to append methods to schemas using schema.methods.method_name. In our case, we’re creating a method that allows us to validate user passwords. By assigning a function to userSchema.methods.validPassword, we can easily use bcryptjs’s compare method, bcrypt.compare(), to check if a password is correct or not.

bcrypt.compare() takes two arguments: the candidate password passed in when calling the function, and this.password, the hash stored on the user document. It resolves to true or false.

I prefer this method of validating users’ passwords because it reads like a property on the user object. One could easily call user.validPassword(password) and get true or false as a response.
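For example, inside any async function (a hypothetical snippet, with the email and password made up for illustration):

//hypothetical usage: fetch a user, then check a candidate password
const user = await User.findOne({ email: 'ada@example.com' })
const isValid = await user.validPassword('secret123')
console.log(isValid) //true or false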

Hopefully, you can see the usefulness of mongoose. Besides creating a schema that gives structure to our database objects, it also provides nice methods for manipulating those objects — which would otherwise be somewhat tough using native MongoDB alone.

Express is to Node, as Mongoose is to MongoDB.
Building The Views Of Our Application Using EJS Templating Engine

Before we start building the views of our application, let’s take a look at the front-end architecture of our application.

Front-end Architecture

EJS is a templating engine that works with Express directly. There’s no need for a different front-end framework. EJS makes the passing of data very easy. It also makes it easier to keep track of what’s going on since there is no switching from back-end to front-end.

We’ll have a views directory, which will contain the files to be rendered in the browser. All we have to do is call the res.render() method from our controller. For example, if we wish to render the login page, it’s as simple as res.render('login'). We could also pass data to the views by adding an additional argument — which is an object to the render() method, like res.render('dashboard', { user }). Then, in our view, we could display the data with the evaluation syntax <%= %>. Everything with this tag is evaluated — for instance, <%= user.username %> displays the value of the username property of the user object. Aside from the evaluation syntax, EJS also provides a control syntax (<% %>), which allows us to write program control statements such as conditionals, loops, and so forth.

Basically, EJS allows us to embed JavaScript in our HTML.

npm install ejs express-ejs-layouts --save

In index.js, we require express-ejs-layouts:

//after requiring connect-flash
const expressLayouts = require('express-ejs-layouts')

//after the mongoose.connect logic
app.use(expressLayouts);
app.set('view engine', 'ejs');

Then:

cd views
touch layout.ejs

In views/layout.ejs,

<!DOCTYPE html>
<html lang="en">
    <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
        <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.6.3/css/all.css" integrity="sha384-UHRtZLI+pbxtHCWp1t77Bi1L4ZtiqrqD80Kn4Z8NTSRyMA2Fd33n5dQ8lWUE00s/" crossorigin="anonymous">
        <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.min.css">
        <link rel="stylesheet" href="/static/css/app.css">
        <link rel="stylesheet" href="/static/css/intlTelInput.css">
    <title>Node js authentication</title>
    </head>
    <body>

    <div class="ui container">
        <%- body %>
    </div>
    <script
        src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
        integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"
        crossorigin="anonymous"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.min.js"></script>
    </body>
</html>

The layout.ejs file serves as an index.html file, where we can include all our scripts and stylesheets. Then, in the div with the classes ui container, we render the body — which is the rest of our application views.

We’ll be using semantic UI as our CSS framework.

Building The Partials

Partials are where we store re-usable code, so that we don’t have to rewrite them every single time. All we do is include them wherever they are needed.

You could think of partials as an earlier version of the components found in front-end frameworks: they encourage DRY, re-usable code.

For example, we want partials for our menu, so that we do not have to write code for it every single time we need the menu on our page.

cd views
mkdir partials

We’ll create two files in the /views/partials folder:

cd partials
touch menu.ejs message.ejs

In menu.ejs,

<div class="ui secondary  menu">
    <a class="active item" href="/">
        Home
    </a>
    <% if(locals.user) { %>
        <a class="ui item" href="/users/dashboard">
        dashboard
        </a>
        <div class="right menu">
        <a class='ui item'>
            <%= user.username %>
        </a>
        <a class="ui item" href="/users/logout">
            Logout
        </a>
        </div>
    <% } else {%>
        <div class="right menu">
        <a class="ui item" href="/users/signup">
            Sign Up
        </a>
        <a class="ui item" href="/users/login">
            Login
        </a>
        </div>
    <% } %>
    </div>

In message.ejs,

<% if(typeof errors != 'undefined'){ %>
    <% errors.forEach(function(error) { %>
        <div class="ui warning message">
            <i class="close icon"></i>
            <div class="header">
                User registration unsuccessful
            </div>
            <%= error.msg %>
        </div>
    <% }); %>
<% } %>

<% if(success_msg != ''){ %>
    <div class="ui success message">
        <i class="close icon"></i>
        <div class="header">
            Your user registration was successful.
        </div>
        <%= success_msg %>
    </div>
<% } %>

<% if(error_msg != ''){ %>
    <div class="ui warning message">
        <i class="close icon"></i>
        <div class="header"></div>
        <%= error_msg %>
    </div>
<% } %>

<% if(error != ''){ %>
    <div class="ui warning message">
        <i class="close icon"></i>
        <div class="header"></div>
        <%= error %>
    </div>
<% } %>

Building The Dashboard Page

In our views folder, we create a dashboard.ejs file:

<%- include('./partials/menu') %>
<h1>
    DashBoard
</h1>

Here, we include the menu partial so that we have the menu on the page.

Building The Error Page

In our views folder, we create an error.ejs file:

<h1>Error Page</h1>
<p><%= error %></p>

Building The Home Page

In our views folder, we create a home.ejs file:

<%- include('./partials/menu') %>
<h1>
    Welcome to the Home Page
</h1>

Building The Login Page

In our views folder, we create a login.ejs file:

<div class="ui very padded text container segment">
    <%- include ('./partials/message') %>
    <h3>
        Login Form
    </h3>

    <form class="ui form" action="/users/login" method="POST">
    <div class="field">
        <label>Email</label>
        <input type="email" name="email" placeholder="Email address">
    </div>
    <div class="field">
        <label>Password</label>
        <input type="password" name="password" placeholder="Password">
    </div>
    <button class="ui button" type="submit">Login</button>
    </form>
</div>

Building The Verify Page

In our views folder, we create a verify.ejs file:

<%- include ('./partials/message') %>
<h1>Verify page</h1>
<p>please verify your account</p>
<form class="ui form" action="/users/verify" method="POST">
    <div class="field">
        <label>verification code</label>
        <input type="text" type="number" name="verifyCode" placeholder="code">
    </div>
    <button class="ui button" type="submit">Verify</button>
</form>
<br>
<a class="ui button" href="/users/resend">Resend Code</a>

Here, we provide a form for users to enter the verification code that will be sent to them.

Building The Sign Up Page

We need to get the user’s mobile number, and we all know that country codes differ from country to country. Therefore, we’ll use the intl-tel-input library (https://intl-tel-input.com/) to help us with the country codes and validation of phone numbers.

npm install intl-tel-input
  1. In our public folder, we create css, js, and img directories:

cd public
mkdir css js img

  2. We copy the intlTelInput.css file from the node_modules\intl-tel-input\build\css\ folder into our public/css directory;
  3. We copy both intlTelInput.js and utils.js from the node_modules\intl-tel-input\build\js\ folder into our public/js directory;
  4. We copy both flags.png and flags@2x.png from the node_modules\intl-tel-input\build\img folder into our public/img directory.

We create an app.css in our public/css folder:

cd public/css
touch app.css

In app.css, add the styles below:

.iti__flag {background-image: url("/static/img/flags.png");}

@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
    .iti__flag {background-image: url("/static/img/flags@2x.png");}
}
.hide {
    display: none
}
.error {
    color: red;
    outline: 1px solid red;
}
.success{
    color: green;
}

Finally, we create a signup.ejs file in our views folder:

<div class="ui very padded text container segment">
    <%- include ('./partials/message') %>
    <h3>
        Signup Form
    </h3>

    <form class="ui form" action="/users/signup" method="POST">
    <div class="field">
        <label>Name</label>
        <input type="text" name="name" placeholder="name">
    </div>
    <div class="field">
        <label>Username</label>
        <input type="text" name="username" placeholder="username">
    </div>
    <div class="field">
        <label>Password</label>
        <input type="password" name="password" placeholder="Password">
    </div>
    <div class="field">
        <label>Phone number</label>
        <input type="tel" id='phone'>
        <span id="valid-msg" class="hide success">✓ Valid</span>
        <span id="error-msg" class="hide error"></span>
    </div>
    <div class="field">
        <label>Email</label>
        <input type="email" name="email" placeholder="Email address">
    </div>

    <button class="ui button" type="submit">Sign up</button>
    </form>
</div>
<script src="/static/js/intlTelInput.js"></script>
<script>
    const input = document.querySelector("#phone")
    const errorMsg = document.querySelector("#error-msg")
    const validMsg = document.querySelector("#valid-msg")

    const errorMap = ["Invalid number", "Invalid country code", "Too short", "Too long", "Invalid number"];
    const iti = window.intlTelInput(input, {
        separateDialCode: true,
        autoPlaceholder: "aggressive",
        hiddenInput: "phonenumber",
        utilsScript: "/static/js/utils.js?1590403638580" // just for formatting/placeholders etc
    });
    var reset = function() {
        input.classList.remove("error");
        errorMsg.innerHTML = "";
        errorMsg.classList.add("hide");
        validMsg.classList.add("hide");
    };
    // on blur: validate
    input.addEventListener('blur', function() {
        reset();
        if (input.value.trim()) {
        if (iti.isValidNumber()) {
            validMsg.classList.remove("hide");
        } else {
            input.classList.add("error");

            var errorCode = iti.getValidationError();
            errorMsg.innerHTML = errorMap[errorCode];
            errorMsg.classList.remove("hide");
        }
        }
    });
    // on keyup / change flag: reset
    input.addEventListener('change', reset);
    input.addEventListener('keyup', reset);

    document.querySelector('.ui.form').addEventListener('submit', (e) => {
        if(!iti.isValidNumber()){
        e.preventDefault()
        }
    })
</script> 

Basic Authentication With Passport

Building authentication into an application can be really complex and time-consuming, so we need a package to help us with that.

Remember: do not re-invent the wheel, unless your application has a specific need.

passport is a package that helps out with authentication in our express application.

passport has many strategies we could use, but we’ll be using the local-strategy — which basically does username and password authentication.

One advantage of using passport is that, since it has many strategies, we can easily extend our application to use its other strategies.
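To illustrate that last point before we install anything: swapping in a different strategy later follows the exact same passport.use() pattern. Here’s a hypothetical sketch (not part of this tutorial’s code), assuming the passport-google-oauth20 package and made-up environment variables:

//hypothetical: the same pattern with Google's OAuth 2.0 strategy
const GoogleStrategy = require('passport-google-oauth20').Strategy

passport.use(new GoogleStrategy({
    clientID: process.env.GOOGLE_CLIENT_ID,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET,
    callbackURL: '/auth/google/callback'
}, (accessToken, refreshToken, profile, done) => {
    //find or create the user in our database, then hand it to passport
    done(null, profile)
}))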
npm install passport passport-local

In index.js we add the following code:

//after requiring express
const passport = require('passport')

//after requiring mongoose
const { localAuth } = require('./config/passportLogic')

//after const app = express()
localAuth(passport)

//after app.use(express.urlencoded({ extended: true }))
app.use(passport.initialize());
app.use(passport.session());

We’re adding some application level middleware to our index.js file — which tells the application to use the passport.initialize() and the passport.session() middleware.

passport.initialize() initializes passport, while the passport.session() middleware lets passport know that we’re using sessions for authentication.

Do not worry much about the localAuth() function. It takes the passport object as an argument, and we’ll create it just below.

Next, in our config folder (created earlier), we create the needed files:

cd config
touch passportLogic.js middleware.js

In passportLogic.js,

//file contains passport logic for local login
const LocalStrategy = require('passport-local').Strategy;
const mongoose = require('mongoose')
const User = require('../models/user')
const localAuth = (passport) => {
    passport.use(
        new LocalStrategy(
        { usernameField: 'email' }, async(email, password, done) => {
            try {
                const user = await User.findOne({ email: email }) 

                if (!user) {
                    return done(null, false, { message: 'Incorrect email' });
                }
                //validate password
                const valid = await user.validPassword(password)
                if (!valid) {
                    return done(null, false, { message: 'Incorrect password.' });
                }
                return done(null, user);
            } catch (error) {
                return done(error)
            }
        }
    ));
    passport.serializeUser(function(user, done) {
        done(null, user.id);
    });

    passport.deserializeUser(function(id, done) {
        User.findById(id, function(err, user) {
            done(err, user);
        });
    });
}
module.exports = {
    localAuth
}

Let’s understand what is going on in the code above.

Apart from the require statements, we create the localAuth() function, which will be exported from the file. In the function, we call the passport.use() function that uses the LocalStrategy() for username and password based authentication.

We specify that our usernameField should be email. Then, we find a user that has that particular email — if none exists, then we return an error in the done() function. However, if a user exists, we check if the password is valid using the validPassword method on the User object. If it’s invalid, we return an error. Finally, if everything is successful, we return the user in done(null, user).

passport.serializeUser() and passport.deserializeUser() help support login sessions. Passport will serialize and deserialize user instances to and from the session.

In middleware.js,

//check if a user is verified
const isLoggedIn = async(req, res, next) => {
    if(req.user){
        return next()
    } else {
        req.flash(
            'error_msg',
            'You must be logged in to do that'
        )
        res.redirect('/users/login')
    }
}
const notLoggedIn = async(req, res, next) => {
    if(!req.user) {
        return next()
    } else{
        res.redirect('back')
    }
}


module.exports = {
    isLoggedIn,
    notLoggedIn
}

Our middleware file contains two route-level middleware functions, which will be used later in our routes.

Route-level middleware is used by our routes, mostly for route protection and validation, such as authorization, while application level middleware is used by the whole application.

isLoggedIn and notLoggedIn are route-level middleware that check if a user is logged in. We use them to restrict routes meant for logged-in users (and, with notLoggedIn, to redirect users who are already logged in away from pages like login and signup).

Building The Sign-Up Controllers

cd controllers
touch signUpController.js loginController.js

In signUpController.js, we:

  1. Check for users’ credentials;
  2. Check if a user with those details (email or phone number) exists in our database;
  3. Create an error if the user exists;
  4. Finally, if such a user does not exist, we create a new user with the given details and redirect to the login page.
const mongoose = require('mongoose')
const User = require('../models/user')

//sign up Logic
const getSignup = async(req, res, next) => {
    res.render('signup')
}
const createUser = async (req, res, next) => {
    try {
        const { name, username, password, phonenumber, email} = await req.body
        const errors = []
        const reRenderSignup = (req, res, next) => {
            console.log(errors)
            res.render('signup', {
                errors,
                username,
                name,
                phonenumber,
                email
            })
        }
        if( !name || !username || !password || !phonenumber || !email ) {
            errors.push({ msg: 'please fill out all fields appropriately' })
            reRenderSignup(req, res, next)
        } else {
            const existingUser = await User.findOne().or([{ email: email}, { phonenumber : phonenumber }])
            if(existingUser) {
            errors.push({ msg: 'User already exists, try changing your email or phone number' })
            reRenderSignup(req, res, next)
            } else {
                const user = await User.create(
                    req.body
                )
                req.flash(
                    'success_msg',
                    'You are now registered and can log in'
                );
                res.redirect('/users/login')
            }

        }
    } catch (error) {
        next(error)
    }
}
module.exports = {
    createUser,
    getSignup
}

In loginController.js,

  1. We use the passport.authenticate() method with the local scope (email and password) to check if the user exists;
  2. If the user doesn’t exist, we give out an error message and redirect the user to the same route;
  3. If the user exists, we log the user in using the req.logIn method, send them a verification code using the sendVerification() function, then redirect them to the verify route.
const mongoose = require('mongoose')
const passport = require('passport')
const User = require('../models/user')
const { sendVerification } = require('../config/twilioLogic')
const getLogin = async(req, res) => {
    res.render('login')
}
const authUser = async(req, res, next) => {
    try {
        passport.authenticate('local', function(err, user, info) {
            if (err) { 
                return next(err) 
            }
            if (!user) { 
                req.flash(
                    'error_msg',
                    info.message
                )
                return res.redirect('/users/login')
            }
            req.logIn(user, function(err) {
                if (err) { 
                    return next(err)
                }
                sendVerification(req, res, req.user.phonenumber)
                res.redirect('/users/verify');
            });
        })(req, res, next);
    } catch (error) {
        next(error)
    }

}
module.exports = {
    getLogin,
    authUser
}

Right now, sendVerification() doesn’t work: we haven’t written that function yet, and we’ll need Twilio for it. Let’s install Twilio and get started.

Using Twilio Verify To Protect Routes

In order to use Twilio Verify, you:

  1. Head over to https://www.twilio.com/;
  2. Create an account with Twilio;
  3. Login to your dashboard;
  4. Select create a new project;
  5. Follow the steps to create a new project.

To install the Twilio SDK for node.js:

npm install twilio

Next, we need to install dotenv to help us with environment variables.

npm install dotenv

We create a file in the root of our project and name it .env. This file is where we keep our credentials, so we don’t push it to git. In order to do that, we create a .gitignore file in the root of our project, and add the following lines to the file:

node_modules
.env

This tells git to ignore both the node_modules folder and the .env file.

To get our Twilio account credentials, we log in to our Twilio console and copy our ACCOUNT SID and AUTH TOKEN. Then, we click on get trial number, Twilio generates a trial number for us, and we click accept number. Finally, we copy our trial number from the console.

In .env,

TWILIO_ACCOUNT_SID = <YOUR_ACCOUNT_SID>
TWILIO_AUTH_TOKEN = <YOUR_AUTH_TOKEN>
TWILIO_PHONE_NUMBER = <YOUR_TWILIO_NUMBER>

Don’t forget to replace <YOUR_ACCOUNT_SID>, <YOUR_AUTH_TOKEN>, and <YOUR_TWILIO_NUMBER> with your actual credentials.

We create a file named twilioLogic.js in the config directory:

cd config
touch twilioLogic.js

In twilioLogic.js,

require('dotenv').config()
const twilio = require('twilio')
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN)
//create verification service
const createService = async(req, res) => {
    client.verify.services.create({ friendlyName: 'phoneVerification' })
        .then(service => console.log(service.sid))
}

createService();

In the code snippet above, we create a new verify service.

Run:

node config/twilioLogic.js

The string that gets logged to our screen is our TWILIO_VERIFICATION_SID — we copy that string.

In .env, add the line TWILIO_VERIFICATION_SID = <YOUR_TWILIO_VERIFICATION_SID>.

In config/twilioLogic.js, we remove the createService() line, since we need to create the verify service only once. Then, we add the following lines of code:

//after createService function creation

//send verification code token
const sendVerification = async(req, res, number) => {
    const verification = await client.verify.services(process.env.TWILIO_VERIFICATION_SID)
        .verifications
        .create({ to: `${number}`, channel: 'sms' })
    console.log(verification.status)
}

//check verification token
const checkVerification = async(req, res, number, code) => {
    const verificationCheck = await client.verify.services(process.env.TWILIO_VERIFICATION_SID)
        .verificationChecks
        .create({ to: `${number}`, code: `${code}` })
    return verificationCheck.status
}
module.exports = {
    sendVerification,
    checkVerification
}

sendVerification is an asynchronous function that sends a verification OTP to the number provided using the sms channel and logs the verification’s status.

checkVerification is also an asynchronous function: it asks Twilio whether the code the user entered matches the OTP that was sent to them, and resolves to the status of that check (for example, approved).

In config/middleware.js, add the following:

//after notLoggedIn function declaration

//prevents an unverified user from accessing '/dashboard'
const isVerified = async(req, res, next) => {
    if(req.session.verified){
        return next()
    } else {
        req.flash(
            'error_msg',
            'You must be verified to do that'
        )
        res.redirect('/users/login')
    }
}

//prevent verified User from accessing '/verify'
const notVerified = async(req, res, next) => {
    if(!req.session.verified){
        return next()
    } else {
        res.redirect('back')
    }
}


//the file now exports all four middleware functions
module.exports = {
    isLoggedIn,
    notLoggedIn,
    isVerified,
    notVerified
}

We’ve created two more route level middleware, which will be used later in our routes.

isVerified and notVerified check if a user is verified. We use these middlewares to block access to routes that we want to make accessible to only verified users.

cd controllers
touch verifyController.js

In verifyController.js,

const mongoose = require('mongoose')
const passport = require('passport')
const User = require('../models/user')
const { sendVerification, checkVerification } = require('../config/twilioLogic')
const loadVerify = async(req, res) => {
    res.render('verify')
}
const resendCode = async(req, res) => {
    sendVerification(req, res, req.user.phonenumber)
    res.redirect('/users/verify')
}
const verifyUser = async(req, res) => {
    //check verification code from user input
    const verifyStatus = await checkVerification(req, res, req.user.phonenumber, req.body.verifyCode)

    if(verifyStatus === 'approved') {
        req.session.verified = true
        res.redirect('/users/dashboard')
    } else {
        req.session.verified = false
        req.flash(
            'error_msg',
            'wrong verification code'
        )
        res.redirect('/users/verify')
    }

}
module.exports = {
    loadVerify,
    verifyUser,
    resendCode
}

resendCode() re-sends the verification code to the user.

verifyUser uses the checkVerification function created in the previous section. If the status is approved, we set the verified value on req.session to true.

req.session just provides a nice way to access the current session. This is done by express-session, which adds the session object to our request object.

Hence my earlier point that most application-level middleware affects our application’s state (the request and response objects).
Building The User Routes

Basically, our application is going to have the following routes:

  1. /users/login: for user login;
  2. /users/signup: for user registration;
  3. /users/logout: for logging out;
  4. /users/resend: to resend a verification code;
  5. /users/verify: for input of the verification code;
  6. /users/dashboard: the route that is protected using Twilio Verify.
cd routes
touch user.js

In routes/user.js, we require the needed packages:

const express = require('express')
const router = express.Router()
const { createUser, getSignup } = require('../controllers/signUpController')
const { authUser, getLogin } = require('../controllers/loginController')
const { loadVerify, verifyUser, resendCode } = require('../controllers/verifyController')
const { isLoggedIn, isVerified, notVerified, notLoggedIn } = require('../config/middleware')

//login route
router.route('/login')
    .all(notLoggedIn)
    .get(getLogin)
    .post(authUser)

//signup route
router.route('/signup')
    .all(notLoggedIn)
    .get(getSignup)
    .post(createUser)
//logout
router.route('/logout')
    .get(async (req, res) => {
        req.logout();
        res.redirect('/');
    })
router.route('/resend')
    .all(isLoggedIn, notVerified)
    .get(resendCode)
//verify route
router.route('/verify')
    .all(isLoggedIn, notVerified)
    .get(loadVerify)
    .post(verifyUser)
//dashboard
router.route('/dashboard')
    .all(isLoggedIn, isVerified)
    .get(async (req, res) => {
        res.render('dashboard')
    })

//export router
module.exports = router

We’re creating our routes in the piece of code above. Let’s see what’s going on:

router.route() specifies the route: if we write router.route('/login'), we target the login route. .all([middleware]) allows us to specify that all requests to that route should use those middleware functions.

The router.route('/login').all([middleware]).get(getController).post(postController) syntax is an alternative to the one most developers are used to.

It does the same thing as router.get('/login', [middleware], getController) and router.post('/login', [middleware], postController).

The syntax used in our code is nice because it makes our code very DRY — and it’s easier to keep up with what’s going on in our file.
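One wiring step to double-check before starting the server: index.js has to mount this router (and handle the home route), or the /users/* URLs will 404. The tutorial doesn’t spell this snippet out, so here’s a minimal sketch assuming the file paths used above:

//in index.js, after the middleware setup and before the error handler
const userRoutes = require('./routes/user')

//mount all user routes under the /users prefix
app.use('/users', userRoutes)

//render the home page on the root route
app.get('/', (req, res) => {
    res.render('home')
})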

Now, if we run our application by typing the command below in our terminal:

npm run dev 

Our full-stack express application should be up and running.

Conclusion

Here’s what we did in this tutorial:

  1. Build out an express application;
  2. Add passport for authentication with sessions;
  3. Use Twilio Verify for route protection.

I hope that after this tutorial, you’re ready to rethink password-only authentication and add that extra layer of security to your application.

What you could do next:

  1. Try to explore passport, using JWT for authentication;
  2. Integrate what you’ve learned here into another application;
  3. Explore more Twilio products. They provide services that make development easier (Verify is just one of many).


Compare the Best Screen Recording Software

Want to jump straight to the answer? The best screen recording software for most people is OBS Studio or Loom.

If there’s anything better than telling employees what to do, it’s showing them.

Screen recording software makes it easier than ever to create professional tutorials with minimal effort. With the right tool, you can describe problems, deliver instructions, and share knowledge—all at the click of a button.

But not every screen recording tool is equal. Keep reading to find my top five picks that tick all the right boxes, from a user-friendly interface to advanced features and premium effects.

The Top 5 Best Screen Recording Software

  • OBS Studio — Best Free Screen Recording Software
  • Loom — Best for Online Sharing
  • Scribe — Best for Making Automated Step-by-Step Tutorials
  • Camtasia — Best for Creating Professional-Level Screen Videos
  • Screencast-O-Matic — Best for Mobile Screen Recording

Don’t worry; I’ll tell you exactly why each one made my list.

OBS Studio — Best Free Screen Recording Software

  • Free open-source platform
  • HD-quality recording and streaming
  • Complete broadcasting tool
  • Intuitive audio mixer
Get started for free

OBS Studio comes packed with several advanced features and offers HD-quality recording and streaming with no limits on video length—all without charging you a single penny. While this may sound too good to be true, it isn’t.

It’s an open-source platform that lets you record your screen, save it, and come back to it later. You can also encode your footage in FLV format and save it locally. The fact that OBS Studio is a complete tool for broadcasting means you can add as many displays and cameras as your computer can handle.

Screenshot of OBS Studio homepage
OBS Studio comes packed with several advanced features and offers HD-quality recording and streaming.

Easily create scenes with images, text, browser windows, window captures, and more, and switch seamlessly between them through custom transitions. Sizing and positioning elements within each scene requires just a click and drag, and you can control them during a recording session using hotkeys.

OBS Studio’s intuitive audio mixer and extra components like noise suppression and VST plugin support ensure your audio delivery is nothing short of excellent. Use the Streamlined Setting tools to give your recordings the final polishing touch.

My only issue is the steep learning curve. The complicated interface is loaded with way too many advanced features that feel overwhelming initially, but once you get the hang of it, it’ll be a smooth ride from there.

Pricing

OBS Studio is a free screen recording software.

Loom — Best for Online Sharing

  • Easy sharing and collaboration
  • Multiple screen recording options
  • Mobile-optimized library
  • Decent editing tools
Sign up for the 14-day trial

Loom is the easiest way to quickly record something—process, workflow, or tutorial—and share it with your team.

Choose whether you want to record your screen, screen with webcam video, or just webcam, and go about your process. Once you’re done, your video will automatically be uploaded to the Loom platform. Add the finishing touches and extra context to your video using the platform’s basic editing tools.

When you’re happy with the end product, simply copy the link to the video and share it with your team. All your Looms are then stored in a mobile-optimized library, allowing you to access your videos even when you’re not in front of your computer.

Screenshot of Loom homepage
Loom is the easiest way to quickly record something—process, workflow, or tutorial—and share it with your team.

Loom is designed for easy collaboration. Viewers can interact with you through timestamp comments and emojis. You’ll also be notified every time there is any activity on your account, so you never miss anything important.

Having said that, Loom isn’t perfect. Its editing functionality is significantly limited compared to other screen recording software and you cannot censor sensitive information. While the desktop app works smoothly, its mobile app is slightly confusing. For instance, you cannot combine video with screencasting.

Nevertheless, Loom still gets the job done without any hassle—and gets done well.

Pricing

Loom currently offers three plans:

  • Starter — Free
  • Business — $8 per creator, per month
  • Enterprise — Request a customized quote

Test-drive the Business plan by signing up for the 14-day trial.

Scribe — Best for Making Automated Step-by-Step Tutorials

  • Create visual step-by-step guides
  • Innovative Pages feature
  • Custom editing and branding
  • Effortless video sharing
Try for free

Scribe is one of the few screen recording software tools that strike the right balance between functionality and aesthetics.

It’s a browser extension or desktop app that lets you create automated, visually-appealing step-by-step guides with just one click. Simply hit the Record button to start capturing your process, and when you’re done, select Stop.

Scribe will then automatically generate a step-by-step guide, complete with relevant screenshots and text.

Screenshot of Scribe homepage
Scribe lets you create automated, visually-appealing step-by-step guides with just one click.

What’s more, you can edit and update your screen recordings to customize them into visual SOPs, training manuals, and other types of documents. Add custom branding, annotations, and blur to bring more depth and detail where needed.

Its Pages feature lets you combine multiple Scribes to create an even more comprehensive document. Once your custom Scribe is ready, share it instantly with your team as a PDF document or a URL link — or embed it in wikis or your knowledge base.

Pricing

Scribe currently offers three plans:

  • Basic — Free
  • Pro — $29 per user, per month
  • Enterprise — Request a customized quote

Nonprofits are eligible for a 50% discount, plus you can get a pilot for the Enterprise plan before making any commitment.

Camtasia — Best for Creating Professional-Level Screen Videos

  • Creates professional-looking videos
  • Click-and-drag editing
  • Good stock images
  • Interactive features
Try it today

Developed by TechSmith, Camtasia makes it simple to record and make professional-looking videos on your desktop.

Record both video and audio from your Mac or Windows device, as well as capture your webcam for voiceovers to guide viewers. Its built-in video editor comes with an exhaustive list of editing options giving you all the necessary tools at your fingertips.

Screenshot of Camtasia home page
Camtasia makes it easy for even beginners to create professional-looking videos on your desktop.

There’s also a decent selection of stock images with click-and-drag effects to customize screen captures however you like. Insert zoom out, zoom in, and pan animations into footage—or use transitions between slides and scenes to improve the flow of your videos.

Camtasia makes it surprisingly easy to create professional intro and outro segments, so even beginners can use the tool. Experiment with interactive features that let you add quizzes or clickable buttons to your videos.

While configuring these elements doesn’t take much time, previewing and exporting the final video certainly becomes more complicated than when handling non-interactive videos.

Pricing

Camtasia currently offers the following four plans:

  • Individual — $299 per user
  • Business — $299.99 per user
  • Education — $212.99 per user (This is the discounted rate)
  • Government and non-profit — $268.99 per user

A 30-day money-back guarantee is available on all plans for your greater peace of mind.

Screencast-O-Matic — Best for Mobile Screen Recording

  • Mobile-optimized tool
  • Multiple screen recording options
  • Built-in stock library
  • Editing capabilities
Get it now

Screencast-O-Matic is another excellent screen recording software that can be installed on all different devices, including PCs, tablets, laptops, and mobile phones—and it offers a great experience on each one.

The software works particularly well on mobile apps, offering more features than its competitors. The iPhone and iPad apps allow you to record your screen and overlay a separate recording of your face onto the same screen recording. You can then edit the video directly on your mobile device.

Screenshot of Screencast-O-Matic homepage
Screencast-O-Matic is an excellent screen recording software that works particularly well on mobile apps.

This doesn’t mean Screencast-O-Matic’s desktop app isn’t just as good.

Record your desktop’s entire screen or just a specific area, and optionally add a webcam feed with your face. While the free version lets you crop and add music, the paid version comes with more editing tools to customize your recordings.

Overall, Screencast-O-Matic is fairly easy to use and comes with a video editor to make your recordings more engaging. Other features include a built-in stock library full of videos, images, and music tracks, scripted recordings, drawings and annotations, and text-to-speech caption generation.

Save your screen recordings directly to your device or upload them to Screencast-O-Matic, Google Drive, or YouTube. You can also add them to your Dropbox or Vimeo account, but you’ll need the paid plan for this.

Pricing

Screencast-O-Matic offers two categories of pricing:

Individuals and Business

  • Solo Deluxe — $3 per month paid annually
  • Solo Premiere — $6 per month paid annually
  • Solo Max — $10 per month paid annually
  • Team Business — $8 per month, per user, paid annually

Education

  • Solo Deluxe — $2 per month paid annually
  • Solo Premiere — $4 per month paid annually
  • Solo Max — $6 per month paid annually
  • Team Business — $2 per month, per user, paid annually

Monthly plans are available, but only for Screencast-O-Matic’s Business plans.

How to Find the Best Screen Recording Software for You

Ideally, you want to consider the following factors when choosing a screen recording software tool:

Screen Recording Capabilities

As mentioned, not all screen recording software tools are equal. While some record your whole screen, others focus on a specific area. Naturally, it’s better to choose tools that offer both.

Alongside finding out how the software records, you should also note its audio recording options. Does the tool even record audio? If so, does it record from your microphone, or does it capture system audio?

I recommend using tools that let you superimpose a webcam feed over your screen capture. This is extremely important to record commentary over your recorded material and to add a face to the voice. Also, check the recording quality to ensure the recording is in high definition.

User-Friendly Interface

Consider your prospective software’s interface. Is it user-friendly and designed with the inexperienced user in mind? Or does it have a steep learning curve, requiring prior knowledge?

Understand your and your team members’ technical capabilities when deciding on screen recording software. This is extremely important as users are unlikely to use software they find too complicated or tedious.

Sharing Options

Sharing is another crucial factor when choosing a screen recording software.

Once a recording is done, see whether you can save it and in what video file format (e.g., MP4, GIF). Find out whether you can upload the screen recording to the cloud and platforms like YouTube, Dropbox, and Google Drive in real time and get a sharing URL for it.

Having options to embed them in wikis and knowledge bases is also desirable.

Summary

The best screen recording software tools are OBS Studio and Loom, which combine user-friendly interfaces with advanced features and convenient sharing options. However, your individual requirements may differ, so make sure to choose the tool that works best for you.

The majority of my recommendations come with free plans and trials, so you can always test-drive different software before committing and pick the most suitable option for your needs.

Google is not displaying the correct meta description tag of the URL. How to fix this?

A month ago, I changed the meta tag of the following URL, and I have been trying to increase traffic to it. I have resubmitted the URL in Webmaster Tools. However, the URL’s indexing is very unstable: sometimes it drops out of the index, and the URL shows the wrong meta tag. I don’t know where the error lies; currently, I’m trying to share it on social media. How can I fix this problem? I hope to get help from you.
This is that URL: https://ongdienchongchay.com/cac-loai-ong-thep-luon-day-dien-cua-cong-ty-hai-duong.html

How to Do Keyword Research for Your WordPress Blog

Do you want to do keyword research for your WordPress blog?

Keyword research helps you find better content ideas so you can grow your traffic and create highly engaging content that users will love.

In this article, we will show you how to do keyword research for your WordPress site.

How to do keyword research for your WordPress blog

What is Keyword Research and Why Do You Need it?

Keyword research is a technique used by content creators and search engine optimization (SEO) experts. It helps you discover the words that people search for when looking for content just like yours.

Once you know the words that people are entering into the search engines, you can use these keywords to optimize your blog posts for SEO. This can help you get more traffic from search engines such as Google.

Some website owners fall into the trap of assuming they already know what their audience are searching for. However, you don’t need to make guesses when there are powerful tools that can help you make decisions based on real data.

By doing keyword research as part of your WordPress SEO strategy, you can:

  • Find the words and phrases that your audience are actually entering into the search engines
  • Increase the traffic you get from search engines
  • Find content ideas that are easy to rank for and have decent search volume
  • Find out what your competitors are doing – and do better!

That being said, let’s take a look at how to do keyword research for your WordPress blog.

In this post we have hand-picked the best keyword research tools that we have personally used for our own projects. We will explore each tool and how it can help you perform keyword research like a pro.

1. Semrush (Recommended)

Semrush Website

Semrush is one of the best SEO Tools on the market. It is a complete SEO suite with tools that can help you do organic research, paid advertising research, keyword research, and in-depth competition analysis.

To get started, simply go to the Semrush website. Then type a keyword into the ‘Enter domain, keyword or URL’ field. 

Semrush - enter keyword into search bar

If you don’t already have some keywords in mind, then you can use any word or phrase related to your business. For example, if you run an eCommerce site that sells headphones, then the word ‘headphones’ might be a solid starting point. 

Click on the ‘Start now’ button and Semrush will display lots of information about the keyword you just entered. 

The Semrush Keyword Magic tool

This includes the cost per click (CPC) for paid advertising, the number of search results, and search volume.

Scroll down a little and you will see a list of related keywords. Related keywords are search terms that are related to the keyword you typed into the Semrush homepage.

A list of related keywords in the Semrush platform

Some websites are guilty of stuffing the same keyword into their content over and over again in an attempt to rank for that keyword. This makes the content more difficult to read.

In fact, Google may even give your website a penalty if they suspect you’re using these tactics. This can lower your position in the search engine rankings.

By adding lots of related keywords to your content, you can show the search engines that you’re writing about your chosen topic in a genuine, detailed way. For this reason, it’s smart to add related keywords to your content wherever possible. 

To see the full list of related keywords, click on the ‘View all…’ button.

How to view all keywords, in the Semrush tool

Next, scroll down to the SERP Analysis section. SERP stands for search engine results page. This is the page that the search engines display after a user searches for a word or phrase.

The SERP Analysis section displays the list of top search results for the keyword that you originally entered.

If you want to rank for this keyword, then these sites are your biggest competition.

The Semrush SERP analysis feature

To view a detailed Organic Report for each result, simply click on its URL.

By analyzing this report, you can better understand why this page ranks so highly for this particular keyword. 

Organic Research, as seen in the Semrush tool

If you want to learn more about the related keywords, then Semrush has a Keyword Magic Tool. This gives you fast access to information about a wider range of related keywords.

To see this tool in action, click on Keyword Magic Tool in the Semrush sidebar. 

Semrush's Keyword Magic tool

When you spot a promising keyword, you can click on its add (+) icon. This will add this word or phrase to Semrush’s keyword analyzer where you can learn even more about it.

Once you have figured out the best keywords with the highest search volume, the next step is analyzing competition for those words or phrases. If a keyword has a huge search volume but very high competition, then you may struggle to earn a cut of that traffic.

To see a detailed analysis, click on the links that are already ranking for your chosen keyword. You can also see the backlinks for that URL, other keywords that page ranks for, and an estimate of how much search traffic this link gets.

Overall, Semrush is the best keyword research tool on the market. It not only gives you keyword ideas, it also helps you find out how you can rank for those keywords.

Even better, Semrush integrates with All in One SEO (AIOSEO) to help you find and research related keywords directly in your WordPress dashboard. 

AIOSEO is the best SEO plugin for WordPress and has all the tools you need to get more traffic from the search engines without editing code or hiring a developer. For more details, you can see our guide on how to set up AIOSEO for WordPress correctly.

When you’re creating a page or post in WordPress, simply scroll down the page to AIOSEO’s Focus Keyphrase section. You can then enter the keyword that you want to target with this content.

Adding a focus keyphrase in the AIOSEO tool

Then, click on the Add Focus Keyphrase button. AIOSEO will now scan your content and give you an SEO score. This is an easy way to make sure it’s optimized for the phrase that you want to rank for. 

AIOSEO will also suggest ways to improve your score. 

Performing keyword research using the All in One SEO plugin

These tips can help you rank for your focus keyphrase.

The Semrush integration also makes it easy to discover related words and phrases. 

To start, simply click on the Get Additional Keyphrases button. This launches a popup where you can log into your Semrush account. 

AIOSEO's Get Additional Keyphrase button

After connecting your Semrush account to AIOSEO, you can explore the related keywords inside your WordPress dashboard. 

AIOSEO will even display the search volume and trends for each related keyphrase. This can help you pinpoint the terms that could deliver the most visitors to your website.

Exploring related keywords, using AIOSEO

If you have AIOSEO Pro, you’ll also see an Add Keyphrase button. This makes it easy to add any keyphrase to your post. 

After adding a related keyword, AIOSEO will check all of your content for this new phrase. It will then give you a score, which reflects how well you’re targeting the related keyword.

The All in One SEO (AIOSEO) plugin user interface

AIOSEO will even give you feedback and suggestions on how to improve your content. By following these recommendations, you’ll stand the best possible chance of ranking for this additional keyphrase. 

2. Ahrefs

Ahrefs

Ahrefs is one of the most powerful keyword research tools on the market. It helps you learn why your competitors are ranking so high, and what you need to do to outrank them in search results.

Ahrefs crawls more than 6 billion pages every day, with over 22 trillion links in its index from over 170 million domains. That’s a lot of data, but the real beauty is how the Ahrefs platform helps you use this data to perform keyword research.

Ahrefs has a user-friendly interface that breaks this data into different sections. Simply type a domain into the search field and Ahrefs will display a wealth of information in an easy to understand format.

Ahref's Site Explorer page

Ahrefs will start by showing you an overview of the information for this domain. This includes the total number of backlinks, referring domains, organic keywords, and a content review. You can click on any of these sections to learn more.

There is a lot to explore. You can see one of the most useful reports by clicking on ‘Organic Keywords.’ This will display a list of keywords for the domain name along with search volume, search rank, URL, and more.

Organic Keyword data, in the Ahrefs dashboard

If you want to generate some keyword ideas, then start entering phrases or words into the search box. This can be anything from the name of your top-selling product, to a new buzzword in your industry. 

Ahrefs’ keyword explorer tool will then generate a list of keyword suggestions, along with their search volume, difficulty score, and clicks. We recommend looking for keywords that have a high search volume, and a lower difficulty score. 

The Ahrefs keyword explorer tool

Ahrefs also comes with powerful tools for content analysis, rank tracking, web monitoring, and more. You can export all reports in CSV or PDF format and then work on them in your favorite spreadsheet software.

3. AnswerThePublic

The AnswerThePublic keyword research tool

AnswerThePublic is a free visual keyword research and content ideas tool. It uses Google and Bing’s auto-suggest features, and presents this data in a format that’s easier to understand.

To get started, simply visit the AnswerThePublic website and enter a keyword or phrase. The tool will then load related keywords and present them as a visual map. 

Exploring related keywords, in the AnswerThePublic tool

You can click on any keyword and AnswerThePublic will show this word’s Google search results in a new browser tab. This is an easy way to explore the questions that people are searching for, so you can create content that answers those questions. 

AnswerThePublic presents all its keyword research on a single page. You can also download this data as images. 

Another option is exporting AnswerThePublic’s data to a CSV file. You can then explore this data using your preferred spreadsheet software.

How to download the AnswerThePublic keyword research data

More Keyword Research Tips

All the above tools will provide you with a wealth of data.

If you’re unsure where to start, then here are some tips on how to find the keywords that will deliver lots of traffic to your website.

1. Start With Broader Search Terms

Keyword research isn’t about finding the perfect search terms on your first try. 

After all, there’s a reason why it’s called research! 

It often makes sense to start with more general, vague search terms. This could be the name of your products, company, or important topics within your industry. 

You can then explore the results, and refine your potential keywords. If you’re using AIOSEO, then you can use the Additional Keyphrases feature to easily find related keywords.

You can then view the search volume and trends for these related words and phrases. This can help you find new keywords that could potentially bring lots of high-converting, highly-focused traffic to your website.   

2. Do Competitor Research 

Competitor research is where you use a tool to analyze your own keyword performance and then compare it with your competitors. For more details, please see our guide on how to do an SEO competitor analysis in WordPress.  

Some tools can even help you spot opportunities to outperform your competitors. For example, the Semrush sidebar has an entire section dedicated to competitor research.

Semrush's competitor research section

3. Focus on Search Intent 

Search intent is what the user hoped to achieve by searching for this particular keyword.

Some keywords have a very clear search intent. Others are a bit more vague. 

Imagine someone types ‘burrito’ into a search engine. This person may want to order takeout, or they might be searching for a burrito recipe. They may even be looking for the definition of the word ‘burrito.’ 

The search intent is unclear. Even if you manage to rank for this keyword, you may struggle to create content that matches the visitor’s search intent. This is simply because there are lots of different possible search intents for the word ‘burrito.’ 

Now, think about the search term ‘best vegetarian burrito recipe.’ Here, the search intent is much clearer.

There may be fewer people searching for this keyphrase, but it’s easier to create content that perfectly matches the search intent.

When it comes to keyword research, there’s always the temptation to use generic search terms that have a huge search volume.

However, it’s important to keep the search intent in mind. 

Here, a tool such as AIOSEO can help you find related keywords that have a lower search volume, but a very clear search intent. 

How to Apply Keyword Research in Your Business or Blog? 

The main goal of keyword research is to find out what your customers are searching for, and then rank for those words in the search results. There are different ways to do that depending on your content strategy.

To start, plan a proper content marketing strategy around the keywords that you want to target. This might involve creating useful articles, blog posts, infographics, and videos about this keyword.

Business websites can create landing pages, documentation, FAQs and other content targeting their new keywords. 

When writing your content, you can enter your target keyword into a tool such as AIOSEO. AIOSEO will then help you optimize your content for this focus keyword.

If you run an online store, then you can also use those keywords in your product titles, descriptions, and product categories. If you’re using WooCommerce for your online store, then please see our step-by-step guide to ranking #1 in Google.

Need help creating engaging content? Check out our expert pick of the best content marketing tools and plugins.

How Do I Track the Performance of My Keyword Research?

Was your keyword research a success? Are you getting more traffic from the search engines, or do you still have work to do?

If you’re going to answer these questions, you’ll need to track your performance.

First, you need to sign up for Google Search Console. This tool provides insights into how Google views your website. You can also use Google Search Console to track the keywords that you rank for and your average position in the search results. 

To properly track the performance of your content, you’ll need to add Google Analytics to your WordPress website.

The best way to do that is by using MonsterInsights, the best Google Analytics plugin for WordPress. It’s used by over 3 million businesses, including Microsoft, Bloomberg, PlayStation, and Subway. For more details, please see our guide on how to install Google Analytics to a WordPress website.

We hope this article helped you learn how to do keyword research for your WordPress blog. You may also want to see our guide on how to create an email newsletter the right way, or check out our expert comparison of the best business phone services for small business.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Do Keyword Research for Your WordPress Blog first appeared on WPBeginner.

How to Change the Category Base Prefix in WordPress

Do you want to change the category base prefix in WordPress?

By default, WordPress automatically adds /category/ as a prefix to URLs for all category pages. However, you can easily change the category base prefix or completely remove it.

In this article, we will show you how to change the category base prefix in WordPress.

How to change the category base prefix in WordPress

What is Category Base Prefix? Should You Change It?

Each category on your WordPress site gets its own page and RSS feed. You can view all posts filed under a category by visiting that category archive page.

By default, WordPress adds ‘category’ as a base prefix to URLs for category pages. This helps differentiate pages and posts from category and tag archives.

For example, if you have a category called ‘News’ then its URL will look like this:

http://example.com/category/news/

Similarly, WordPress also adds tag prefixes to URLs for tag archives.

http://example.com/tag/iphone/

This SEO-friendly URL structure helps users and search engines understand what kind of page they are visiting.

Most websites don’t need to change the base prefix at all. However, if you are creating a niche site where you would like to use a different word or phrase for your categories, then you can change the category base prefix to reflect that.

Changing Category Base Prefix in WordPress

Changing the category base prefix is quite simple in WordPress.

You need to visit the Settings » Permalinks page from your WordPress dashboard and scroll down to the ‘Optional’ section.

Add a new category base prefix

In the ‘Category base’ field, enter the prefix you would like to use. You can also change the tag base prefix if you want.

For example, you can add ‘topics’ as the new prefix. In this case, your category URLs will look like this:

http://example.com/topics/news/

Don’t forget to click on the ‘Save Changes’ button to store your settings.

Removing Category Base Prefix from URLs

Many of our users have asked us about removing the category base prefix from WordPress URLs altogether. This will change your category URLs to look like this:

http://example.com/news/

This is not a good idea, and we recommend that you do not remove the category base prefix.

The category base prefix helps both users and search engines distinguish between posts/pages and categories. Removing the prefix makes your URLs ambiguous, which is not good for user experience or WordPress SEO.

You may also run into technical issues with various WordPress plugins. For example, if you have a category and a page with the same name, or you are using %postname% as your URL structure for single posts, then your site can get stuck in an infinite redirect loop, causing pages to never load.

However, if you still want to do this, then you can use the All in One SEO (AIOSEO) plugin.

All In One SEO - AIOSEO

It is the best SEO plugin for WordPress and makes it super easy to optimize your website for search engines. Plus, it gives you an option to strip the category base prefix with a click of a button.

For this tutorial, we’ll use the AIOSEO Pro license because it includes the feature to remove category base and other powerful options like the redirection manager and link assistant. There’s also a free version of AIOSEO that you can use.

First, you’ll need to install and activate the AIOSEO plugin. For more details, please see our guide on how to install a WordPress plugin.

Upon activation, you’ll see the AIOSEO setup wizard. Simply click the ‘Let’s Get Started’ button. You can see our guide on how to setup All in One SEO for WordPress for more information.

All in One SEO setup

Next, you can head over to All in One SEO » Search Appearance from your WordPress admin panel and click on the ‘Taxonomies’ tab.

After that, go to the Categories section and switch to the ‘Advanced’ tab. From here, simply set the toggle for the ‘Remove Category Base Prefix’ option to Yes.

Enable the remove category base prefix button

Don’t forget to click the ‘Save Changes’ button when you’re done.

Setting Up Redirects After Changing Category Base Prefix

If you are changing or removing the category base prefix on a new WordPress website, then you don’t need to do anything. However, if you are doing this on an existing website, then users visiting the old category page will see a 404 error.

To fix this, you will need to set up a redirect to make sure both search engines and regular visitors are properly redirected to the correct category page on your site.

The easiest way of setting up redirection in WordPress is by using the All in One SEO (AIOSEO) plugin.

To start, you can go to All in One SEO » Redirects from the WordPress admin panel and then click the ‘Activate Redirects’ button.

Activate redirects

Once it’s active, you can go to the ‘Redirects’ tab to set up redirection.

Simply enter the URL you want to redirect under the ‘Source URL’ field and the new location for the link under the ‘Target URL’ field.

As for the Redirect Type, you can select ‘301 Moved Permanently’ from the dropdown menu. This will permanently move your old category pages to the new destination.

Set up redirection in AIOSEO

Don’t forget to click the ‘Add Redirect’ button when you’re done.

For more details, please see our beginner’s guide to creating 301 redirects in WordPress.

Now all your users and search engines will be redirected to the correct URLs using your new category prefix.
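
Under the hood, a 301 redirect is simply an HTTP response with a Location header pointing at the new URL. The following Node sketch is purely illustrative (AIOSEO handles all of this for you inside WordPress); it shows what a permanent redirect from the old ‘category’ prefix to the hypothetical ‘topics’ prefix from earlier looks like at the HTTP level:

import { createServer } from "http";

// Illustrative only: send a permanent (301) redirect from the old
// /category/... URLs to the new /topics/... URLs.
createServer((req, res) => {
  if (req.url?.startsWith("/category/")) {
    const target = req.url.replace("/category/", "/topics/");
    res.writeHead(301, { Location: target });
    res.end();
    return;
  }
  res.writeHead(404);
  res.end("Not found");
}).listen(8080);

Because the status code is 301 rather than 302, browsers and search engines treat the move as permanent and update their records accordingly.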

We hope this article helped you learn how to change the category base prefix in WordPress. You may also want to see our list of most wanted WordPress tips, tricks, and hacks and how to start an online store.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Change the Category Base Prefix in WordPress first appeared on WPBeginner.

How to Import External Images in WordPress

Do you want to import external images in WordPress?

If you have recently moved your website from one platform or host to another, then there is a good chance that you may have external images embedded on your pages.

In this article, we will explain how to properly import those external images in WordPress.

How to Import External Images in WordPress

Why Import External Images in WordPress?

External images are images embedded in your content that load from another website or URL different from your main WordPress website.

Most commonly, WordPress users come across the external images issue after migrating their website from other platforms like Blogger, Weebly, Joomla, or WordPress.com.

By default, if you use one of the WordPress importers, then it will try to import images. You can see the imported images by visiting the Media » Library page in your WordPress admin area.

If you see that all your images are already in the Media Library, but the image URLs in your posts still point to your old website, then you don’t need this article. Instead, you should follow our guide on how to easily update URLs when moving your WordPress site.

However, if you don’t see images imported to your WordPress media library, then continue reading and we will show you how to import those external images.

How to Import External Images in WordPress

The first thing you need to do is to install and activate the Auto Upload Images plugin. For more details, see our step by step guide on how to install a WordPress plugin.

Upon activation, you need to visit the Settings » Auto Upload Images page to review the plugin settings.

The Auto Upload Images Settings Page

The default settings would work for most users, but you can change them as needed.

For example, the plugin will import images to your default WordPress media uploads folder. You can change that by providing a different base URL. Other than that, it also allows you to set the filename, image alt tag, and image size, and to exclude post types.

If you do make some changes, then don’t forget to click on the ‘Save Changes’ button at the bottom of the page to store the new settings.

Next, you will need to update the posts or pages containing the external images. Since this is a manual process, it can be tedious if you have a lot of content.

Luckily, there’s a quick way to update all posts with external images. Simply go to Posts » All Posts page and then click on the Screen Options button at the top.

Use Screen Options to Display 999 Posts at a Time

You need to increase the number in the ‘Number of items per page’ field to ‘999’ and click the ‘Apply’ button.

WordPress will reload the page, and this time it will show up to 999 posts at a time.

Note: If you have slow web hosting, your server may not be able to handle updating so many posts at once. In that case, you would want to do smaller batches of posts at a time, or consider switching to better WordPress hosting.

Next, you can select all of your posts on this page by clicking the checkbox next to ‘Title’. After that, you should select ‘Edit’ under the bulk actions menu and click the ‘Apply’ button.

Editing All Posts in Bulk

WordPress will now show you a ‘Bulk Edit’ box with all selected posts.

You just need to click on the ‘Update’ button, and WordPress will update all your posts.

Updating All Posts in Bulk

Remember, don’t change any of the settings in the ‘Bulk Edit’ box. You just need to click the ‘Update’ button.

This will trigger the plugin to check all selected posts and import external images as it finds them.

If you have more than 999 posts, then you will need to visit the next page to select the remaining posts.

We hope this tutorial helped you learn how to import external images in WordPress. You may also want to learn how to create a custom Instagram photo feed, or check out our list of must have plugins to grow your site.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Import External Images in WordPress first appeared on WPBeginner.

How to Embed Medium Blog Posts in WordPress

Do you want to embed Medium articles in WordPress?

Medium is a popular blogging platform that allows you to easily publish articles on the internet. However, you may want to display those posts on your WordPress website as well.

In this article, we’ll show you how to easily embed Medium articles in WordPress.

Easily add Medium articles in WordPress

Why Embed Medium Articles in WordPress?

Medium is a popular blogging platform that allows you to easily publish articles on the internet.

However, one downside of using Medium is that it doesn’t give you the same flexibility as a WordPress website.

For this reason, you may want to embed your Medium articles on WordPress.

WordPress is more flexible; you can use it to make any type of website and monetize your content any way you see fit.

How to Embed Medium Articles in WordPress

Normally, WordPress uses the oEmbed format to embed third party content from supported websites like YouTube, Twitter, and more.

Unfortunately, Medium doesn’t support the oEmbed format, which makes it difficult to embed Medium articles in WordPress. There used to be plugins that allowed users to display their Medium articles on a WordPress blog, but they either don’t work or are no longer maintained due to low demand.

So now, the only way to embed your Medium articles in WordPress is by using the RSS block or widget.

First, you need to find your Medium publication’s RSS feed. Usually, it is located at a URL like this:

https://medium.com/feed/your-publication-name

If you are using a custom domain for your Medium publication, then your RSS feed would be located at:

https://your-domain.com/feed
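
If you’d like to sanity-check the feed before adding it to WordPress, a small script like this can fetch it and print the post titles. This is a rough sketch: the feed URL is a placeholder, and a real application should use a proper XML parser rather than a regex.

// Placeholder feed URL; swap in your own publication's feed.
const FEED_URL = "https://medium.com/feed/your-publication-name";

async function listMediumPosts(): Promise<string[]> {
  const res = await fetch(FEED_URL);
  const xml = await res.text();
  // Naive title extraction via regex; fine for a quick check only.
  return [...xml.matchAll(/<title>(?:<!\[CDATA\[)?(.*?)(?:\]\]>)?<\/title>/g)].map(
    m => m[1]
  );
}

listMediumPosts().then(titles => console.log(titles));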

Next, you need to edit the WordPress post or page where you want to embed Medium posts and add the RSS block to the content area.

RSS block

After that, add your Medium RSS feed URL in the block settings.

WordPress will then fetch your recent Medium articles and display them. Under the block settings, you can choose to show excerpt, featured image, author, and date options.

Medium feed display

The problem with this method is that you can’t embed a specific Medium article by itself. The block will automatically show you the latest Medium posts only.

If you would like more flexibility and freedom, then perhaps you should consider migrating your Medium articles to WordPress.

How to Migrate Medium Articles to WordPress

Migrating your Medium articles to WordPress would allow you to take advantage of all the flexibility and features of WordPress.

WordPress is the most popular website builder on the market. It powers more than 43% of all websites on the internet.

For more details, see our article on why you should use WordPress to make your website.

Step 1. Set Up Your WordPress Website

If you haven’t already done so, then you’ll need to set up a WordPress website first.

There are two types of WordPress websites: WordPress.com, which is a hosted blogging platform, and WordPress.org, which is also called self-hosted WordPress. For more details, see our article on the difference between WordPress.com vs WordPress.org.

We recommend using self-hosted WordPress as it gives you complete freedom to build your website however you choose.

To get started, you’ll need a domain name and a WordPress hosting account.

Fortunately, Bluehost has agreed to offer WPBeginner users a free domain name and a generous discount on hosting. Basically, you can get started for $2.75 per month.

After signing up, Bluehost will send the login details to your email address, which will allow you to log in to your Bluehost dashboard.

Bluehost Dashboard - log in to WordPress

You’ll notice that Bluehost has already installed WordPress for you.

You can now go ahead and simply log in to your new WordPress website.

The WordPress dashboard

Step 2. Import Your Medium Articles to WordPress

Before you can import your Medium articles to WordPress, you’ll need them in a format supported by WordPress.

Medium doesn’t provide a tool to do that by default. However, it does let you export your content in a format that WordPress can’t import directly.

Simply log in to your Medium account and click on your profile photo. From here, click on the Settings link.

Medium settings

This will take you to the settings page where you need to scroll down to the ‘Download Your Information’ section.

Click on the ‘Download zip’ button to export your Medium data.

Download export file

On the next page, you need to click on the export button. Medium will then prepare your download and send a link to you via email.

After you have downloaded the export file, you need to visit the Medium to WordPress Importer tool. It is a free online tool that converts your Medium export file into a WordPress-compatible format.

First, you need to provide your Medium profile URL, your name, and email address.

Enter your Medium profile URL

If your blog is using a custom domain on Medium, then you need to enter your custom domain URL.

Now, if you are using your Medium profile URL, then you’ll be asked to upload the Medium export file you downloaded in the earlier step.

Next, click on the ‘Export My Medium Website’ button to continue.

The Medium to WordPress Importer will now prepare your export file. Once finished, it will show you a success message with a button to download your WordPress-ready Medium export file.

Download your WordPress compatible import file

You can now download the file to your computer.

After that, switch to your WordPress website and go to the Tools » Import page.

You will see a list of importers available for different platforms. You need to scroll down to WordPress and then click on the ‘Install Now’ link.

Install WordPress importer

WordPress will now fetch and install the importer plugin.

Once finished, you need to click on ‘Run Importer’ to launch it.

Run importer

On the next screen, click on the ‘Upload file and import’ button to continue.

Choose import file to upload

The WordPress importer will now upload your Medium export file and analyze it.

On the next screen, it will ask you to assign authors.

Assign user to articles

You can import the author from your Medium website, create a new author, or assign all content to your existing WordPress user.

Don’t forget to check the box next to the ‘Download and import file attachments’ option. It will attempt to fetch the images from your Medium website into your WordPress media library.

You can now click on the Submit button to run the importer. Upon completion, you will see a success message.

Success message

Congratulations, you have successfully imported content from Medium to WordPress!

You can now go to the posts page in your WordPress admin area to double check if all your content is there.

Step 3. Import Images from Medium to WordPress

The WordPress importer tries to import images from your Medium articles to the WordPress media library. However, it may fail due to the way Medium displays images in your articles.

To see all the images that have been imported successfully, simply go to the Media » Library page.

Media library

If some or all of your images failed to import, then you will need to import them again.

To do that, you first need to install and activate the Auto Upload Images plugin. For more details, see our step by step guide on how to install a WordPress plugin.

Upon activation, you need to update the posts containing the external images. This update will trigger the plugin to fetch and store the external images in the article.

You can also bulk update all articles at once to quickly import all images. For detailed instructions, see our step by step tutorial on how to import external images in WordPress.

Step 4. Setting up Redirects for Medium Articles

If your Medium publication uses a medium.com URL, then you cannot set up redirects.

However, if you were using a custom domain for your Medium publication, then you can set up custom redirects in WordPress.

First, you will need to get all URLs of your Medium articles and save them in a text file. After that, you need to start setting up redirects for all your articles.

There are multiple ways to set up redirects in WordPress. You can follow our beginner’s guide to creating redirects in WordPress for detailed instructions.

Step 5. Deciding What to Do With Your Medium Articles

Now, having the same articles on two different websites will affect their search engine optimization (SEO) since Google will consider them duplicate content. That means that your new WordPress site may not get any search engine traffic.

To avoid this, you can simply deactivate your Medium account. Deactivating an account keeps all your data on Medium, but it becomes publicly unavailable.

Simply click on your Profile icon under your Medium account and then select Settings.

Account settings

From the settings page, scroll down to the Security section.

Then, click on the Deactivate Account link at the bottom of the page.

Deactivate medium account

Bonus Step: Promoting Your Medium Articles on WordPress

Now that you have migrated your articles from Medium to WordPress, here are a few tools to promote your articles.

1. All in One SEO – The best WordPress SEO plugin to easily optimize your blog posts for search engines.

2. SeedProd – Enjoy the endless design options with the best WordPress page builder. It allows you to easily create beautiful landing pages for your website.

3. WPForms – Make your website interactive by adding beautiful contact forms. WPForms is the best WordPress contact form plugin with a drag and drop interface to create any kind of form you need.

4. OptinMonster – The best conversion optimization software on the market. OptinMonster allows you to easily convert website visitors into subscribers and customers.

5. MonsterInsights – Start tracking your website visitors from day one. MonsterInsights is the best Google Analytics plugin for WordPress. It allows you to see your most popular content and where your users are coming from.

For more, see our expert pick of the must have WordPress plugins for all websites.

We hope this article helped you learn how to embed Medium articles in WordPress. You may also want to see our guide on how to get a free email domain, or our expert pick of the best business phone services for small business.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Embed Medium Blog Posts in WordPress first appeared on WPBeginner.

Useful Tools for Visualizing Databases on a Budget

A diagram is a graphical representation of information that depicts the structure, relationship, or operation of anything. Diagrams enable your audience to visually grasp hidden information and engage with it in ways that words alone cannot. Depending on the type of project, there are numerous ways to use diagrams. For example, to depict the relationships between distinct entities, you would usually use an Entity Relationship Diagram (ERD). There are many great tools that can help you sketch out your database designs beautifully.

In this article, I will be sharing some of my favorite tools that I use to curate my data structures and bring my ideas to life.

Google Docs Drawing

The drawing function in Google Docs allows you to add illustrations to your pages. You can add custom shapes, charts, graphs, infographics, and text boxes to your document with the built-in drawing tool.

Screenshot of database entity relationships using Google Docs.

Sketching with Google Docs

Although it is simple to add a graphic to your Google Doc, the procedure is not entirely obvious. Here’s how:

1. Open a new document in Google Docs.

Screenshot of a new document in Google Docs.

2. Click on the Insert button and select Drawing. Then, from the drop-down menu, choose New to open the drawing screen.

Screenshot of adding a new Drawing in Google Docs.

3. You can use the toolbox on this screen to add text boxes, select lines and shapes, and modify the colors of your drawing.

Screenshot of selecting an Arrow in Google Docs.

4. You may also use the cursor and the toolbox at the top of your screen to adjust the size of your drawings and the colors of your designs.

Screenshot of customizing a drawing in Google Docs.

5. When finished, click the Save and close button. You can then click on the ‘File’ menu at the top of your screen to download your document.

Features

  • Cost: Free.
  • CLI? GUI? Online?: Online.
  • Requires an Account?: Yes, a Google account is required.
  • Collaborative Editing?: Yes, with Google Drive sharing.
  • Import SQL: Not applicable.
  • Export SQL: Not applicable.
  • Export Formats: .doc, .pdf, .rtf, .odt, .txt, .html, .epub
  • Generate Shareable URL: Yes.

Google Docs offers amazing convenience. However, diagramming databases is not something it was intended for. You may find yourself frustrated with redrawing arrows and relationships if you are making frequent edits to your model.

Graphviz

Graphviz is a free graph visualization software that allows us to express information diagrammatically.

Screenshot of database entity relationships using Graphviz.

Graphviz implements the DOT language. The DOT language is an abstract grammar that makes use of terminals, nonterminals, parentheses, square brackets, and vertical bars. More information about the DOT language can be found in its documentation.
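
To give a sense of what working with DOT looks like, here is a small Node sketch that writes a made-up two-table schema as a DOT file and renders it with the Graphviz CLI. The schema and file names are invented for illustration, and the script assumes Graphviz is installed locally so the `dot` command is available:

import { writeFileSync } from "fs";
import { execSync } from "child_process";

// A made-up two-table schema expressed in DOT, using record-shaped nodes.
const dot = `
digraph erd {
  node [shape=record];
  users  [label="{ users | id | name }"];
  orders [label="{ orders | id | user_id }"];
  orders -> users [label="user_id -> id"];
}
`;

writeFileSync("erd.dot", dot);
// Requires Graphviz to be installed locally; renders the diagram to a PNG.
execSync("dot -Tpng erd.dot -o erd.png");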

Features

  • Cost: Free.
  • CLI? GUI? Online?: CLI. Graphical interfaces are available through Visual Studio Code, Eclipse, and Notepad++.
  • Requires an Account?: No.
  • Collaborative Editing?: Not applicable.
  • Import SQL: Yes, using SQL Graphviz.
  • Export SQL: Yes, using SQL Graphviz.
  • Export Formats: .gif, .png, .jpeg, .json, .pdf, and more.
  • Generate Shareable URL: Not applicable.

Graphviz has an impressive and supportive community. However, a high level of SQL support is only available when you install additional third-party software. This overhead may make it less approachable to users who are not comfortable setting up their computer to support these tools.

ERDPlus

ERDPlus is a database modeling tool that allows you to create Entity Relationship Diagrams, Relational Schemas, Star Schemas, and SQL DDL statements.

Screenshot of database entity relationships using ERDPlus.

It includes a brief guide on how to create your ER diagrams, which is especially useful for beginners. You can also easily convert your ER diagrams to relational schemas.

Features

  • Cost: Free.
  • CLI? GUI? Online?: Online.
  • Requires an Account?: Not required, but recommended for saving.
  • Collaborative Editing?: Not applicable.
  • Import SQL: No.
  • Export SQL: Yes, with support for SQL DDL statements.
  • Export Formats: .png
  • Generate Shareable URL: Not applicable.

ERDPlus is well suited for SQL work. It does lack additional export formats and the ability to share with teams, but its SQL export support makes up for much of that.

Diagrams.net

Diagrams.net (previously Draw.io) is a free online diagramming tool that can be used to create flowcharts, UML diagrams, database models, and other types of diagrams.

Screenshot of database entity relationships using Diagrams.net.

Features

  • Cost: Free.
  • CLI? GUI? Online?: Desktop and Online.
  • Requires an Account?: Not required, but recommended for saving.
  • Collaborative Editing?: Sharing requires Google Drive or OneDrive.
  • Import SQL: Yes.
  • Export SQL: No.
  • Export Formats: .png, .jpeg, .svg, .pdf, .html, and more.
  • Generate Shareable URL: Yes, exporting as a URL is an option.

Diagrams.net is designed to support many different workflows. Its ability to easily integrate with third-party tools such as Trello, Quip, Notion, and others distinguishes it from the other options. The ability to share and collaborate may make it work well for collaborative teams.

Conclusion

This article looked at free database tools that can help you visualize your ideas, covering their capabilities and limitations along with details on how to use them.

In my research, I also came across other excellent tools with free trials available for creating database diagrams, like Lucidchart, EDrawMax, and DrawSQL. However, these free trials have limitations which may make them less suited for developers working on multiple projects.

I strongly recommend that you read the documentation for each of these tools to determine what works best for you and, most importantly, to avoid any difficulties in using these tools.

Thank you for taking the time to read this far, and I hope you found what you were looking for. Have a wonderful day!


Useful Tools for Visualizing Databases on a Budget originally published on CSS-Tricks. You should get the newsletter.

How to Fix Missing Theme Customizer in WordPress Admin

Do you want to fix the missing theme customizer in the WordPress dashboard?

WordPress themes that support full site editing (FSE) don’t include a theme customizer option in the WordPress admin panel. Instead, you’ll see a new ‘Editor (Beta)’ option under the Appearance menu.

In this article, we’ll show you how to fix the missing theme customizer in WordPress admin.

How to fix missing theme customizer in WordPress admin

What Happened to the Theme Customizer in WordPress Admin?

With WordPress gradually releasing the new full site editor, many themes no longer show the theme customizer option in your WordPress dashboard.

Full site editing allows you to customize your website design using blocks, just like editing a blog post or page in the WordPress content editor. You can add and edit different sections of your theme template using blocks, widgets, and menus.

However, your Appearance menu will look different if you’re using a block-based theme like the default Twenty Twenty-Two theme.

Missing theme customizer from admin panel

You’ll notice that the ‘Customize’ option to open the theme customizer is missing from the Appearance menu. Instead, there’s an ‘Editor (Beta)’ option to launch the full site editor.

By using the full site editor, you should be able to make any changes you would have made with the Customizer tool.

However, you might prefer to use the customizer instead of learning a whole new way of customizing your theme. In that case, we’ve put together a guide on how you can still use the customizer on your WordPress site.

Let’s look at some of the ways you can fix the missing theme customizer.

Fixing Missing Theme Customizer from WordPress Admin

There are 3 simple ways you can use to fix the missing theme customizer from your WordPress admin panel.

We’ll go through each method, so you can choose the one that best suits you.

1. Manually Enter the Theme Customizer URL in Your Browser

If you’re using a WordPress theme that uses the full site editor and want to access the theme customizer, then you can add ‘customize.php’ at the end of your WordPress admin URL.

Your website URL will look like this:

https://example.com/wp-admin/customize.php

Simply replace ‘example.com’ with your own website domain name and enter the link in your web browser. This will launch the theme customizer for your website.

Access theme customizer with URL

However, it’s important to note that the editing options will be limited in the theme customizer for themes using full site editing. For instance, you may only see a few simple settings like site identity, homepage settings, and additional CSS.

If you want to use all the options offered by the theme customizer to edit your site’s theme, then you can use the next method.

2. Switch Your WordPress Theme to Fix Missing Theme Customizer

Another way to solve the missing theme customizer issue is by changing your WordPress theme.

The full site editor is steadily rolling out, and it’s still in the early phases, even in the latest WordPress 6.0 release. This means that not many themes fully support site editing at the moment, and those that do can be clunky and tricky to use.

Plus, the block-based themes have limited customization options if you access the theme customizer.

A simple way of restoring the theme customizer menu is by switching your WordPress theme to one that doesn’t include full site editing.

Themes that don't support full site editing

You can see our list of the most popular WordPress themes for plenty of options.

For more details, you can see our guide on how to change your WordPress theme.

3. Use a WordPress Theme Builder to Fix Missing Theme Customizer

You can also use a WordPress theme builder like SeedProd to customize your WordPress website and fix the missing theme customizer from WordPress admin.

WordPress theme builders allow you to customize your site’s theme the way you want without writing any code or hiring a developer. Their drag and drop interface lets you build different theme templates and removes the need to use the WordPress theme customizer.

SeedProd is the best WordPress theme builder and page builder plugin that’s used by over 1 million professionals. It offers pre-built theme templates that you can use to create a custom WordPress theme in no time.

SeedProd website theme templates

After selecting a template, you can use the drag and drop builder to customize your design.

Simply drag any element from the menu on your left and drop it onto the template. Plus, there are options to further customize each element on the template and change its color, size, font, and alignment.

SeedProd even includes WooCommerce blocks, so you can use it to create an online store.

SeedProd drag and drop theme builder

For step-by-step instructions, you can see our beginner’s guide on how to create a custom WordPress theme (no code).

We hope this article helped you learn how to fix missing theme customizer in WordPress admin. You can also see our guide on how to check website traffic for any site, or see our expert pick of the best WordPress SEO plugins to improve your rankings.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Fix Missing Theme Customizer in WordPress Admin first appeared on WPBeginner.

How to Fix Missing Appearance Menu in WordPress Admin

Do you want to fix the missing Appearance Menu option in the WordPress admin area?

Some WordPress themes come with support for the full site editing experience, which changes the options under the Appearance menu in the WordPress admin area.

In this article, we’ll show you how to easily fix the missing appearance menu in WordPress admin area.

Fixing the missing appearance menu in WordPress

What Happened to ‘Appearance Menu’ in WordPress?

WordPress is gradually releasing the full site editing experience which uses blocks to edit all aspects of a WordPress website.

Full site editing allows you to use blocks for theme editing and customization. You can add and edit any part of a theme using blocks, including widgets and menus.

This makes certain items under the Appearance menu redundant, so they’re hidden by default.

Missing items from Appearance menu

This only happens with themes that support full site editing. These themes are also called block-based themes.

If you are using one such theme, or a default WordPress theme like Twenty Twenty-Two, then your Appearance menu would look different.

How to Fix Missing Menus in WordPress Admin

There are two ways to fix the missing appearance menu in WordPress, so you can create and edit your menus again.

We’ll go through them one by one and you can choose the one that suits you.

1. Use the Navigation Block in Full Site Editor

If you are using a block based WordPress theme with full site editing support, then you cannot access the classic navigation menus screen.

Even if you manually enter the URL for the navigation menu page (e.g., https://example.com/wp-admin/nav-menus.php), you’ll see the following error message.

No support for menus

When using a full site editing theme, you can add, create, and edit navigation menus using the Navigation block under the site editor.

Simply launch the full site editor by visiting the Appearance » Editor page.

This will bring you to the site editor interface. You can insert a new Navigation block by clicking on the (+) add block button.

Navigation block

If you already have a navigation block added by your theme, then you can click to select it.

Then simply choose a menu or create a new one.

Create and manage menus in navigation block

You can even select previous menus that you have created for your website under the Classic Menus section.

If you are starting with a new empty menu, then you can add items to your navigation menu. You can add links like you normally do in the block editor when writing posts and pages.

Adding menu items

Once you are finished, don’t forget to click on the Update button to save your menu and apply it across your WordPress blog.

For more details, you can see our step-by-step guide on how to add a navigation menu in WordPress.

2. Fix Appearance Menu by Switching Theme

The full site editing feature is still in its early phases, even in WordPress 6.0.

This means that it may behave unexpectedly with different WordPress themes. It may also feel a bit clunky and unfamiliar to many users.

If you want to keep using the classic navigation menus, then you’ll need to switch your WordPress theme to one that doesn’t include the full site editing feature.

Themes that don't support full site editing

Currently, many popular WordPress themes don’t support full site editing. However, there is always a chance that they will adopt it as it improves over time.

Alternatively, you can create a custom WordPress theme of your own without writing any code.

Fix Other Missing Appearance Menus in WordPress

Navigation menus are not the only items disappearing from the Appearance menu. Here is how you can fix the other missing items under the Appearance menu.

1. Customize

The Customize menu under Appearance used to launch the Theme Customizer. You can still access a limited version of it by visiting the customize.php URL directly:

https://example.com/wp-admin/customize.php

Simply enter that URL into your browser and change “example.com” to your own site’s domain name.

You’ll see a notification that your theme supports full site editing. Below that, you’ll find a few basic customization options.

Minimum options in customize page

2. Widgets

If your WordPress theme doesn’t have any sidebars or widget areas defined, then you will not see the Widgets menu under Appearance.

Manually accessing the widgets page (e.g. https://example.com/wp-admin/widgets.php) will show you an error message that your theme is not widget-aware.

Your theme is not widget-aware

On the other hand, if your theme does have widget areas, then you will see a widgets menu, but it will still use the block based widget editor.

Block based widgets

You can switch to the legacy widgets screen by using the Classic Widgets plugin.

3. Theme File Editor

WordPress came with a basic file editor that allowed you to edit theme files directly from the WordPress admin area.

We don’t recommend using that editor, but it often came in handy for many beginners when they needed to quickly add a code snippet to their theme’s functions.php file.

The good news is that it is still available, but it has been moved under the Tools menu if you are using a full site editing theme.

Theme file editor

We hope this article helped you fix the missing Appearance Menus in the WordPress admin area. You may also want to take a look at how to add a search bar to your menu, or our expert pick of the best WordPress plugins for small business.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Fix Missing Appearance Menu in WordPress Admin first appeared on WPBeginner.

Inline Image Previews with Sharp, BlurHash, and Lambda Functions

Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.

I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.

Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.

So you mean progressive JPEGs?

You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.

This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.

You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.

Let’s look at two alternatives for this.

The libraries we’ll be using

Before we start, I’d like to call out the libraries I’ll be leaning on for this post: Jimp for making our own previews, and Sharp and blurhash for the BlurHash approach.

Making our own previews

Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
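
For instance, an inline image looks like this (the Base64 data here is truncated for illustration):

<img src="data:image/jpeg;base64,/9j/4AAQSkZJRg..." alt="..." />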

But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.

Let’s see how.

Generating our preview

For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.

Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then creates a blurry preview of the image:

import Jimp from "jimp";

// ResizeImageResult describes the { STATUS, image, preview } shape returned below
function resizeImage(src, maxWidth, quality) {
  return new Promise<ResizeImageResult>(res => {
    Jimp.read(src, async function (err, image) {
      // constrain the width while preserving the aspect ratio
      if (image.bitmap.width > maxWidth) {
        image.resize(maxWidth, Jimp.AUTO);
      }
      image.quality(quality);

      // clone the processed image and degrade it into a blurry preview
      const previewImage = image.clone();
      previewImage.quality(25).blur(8);
      const preview = await previewImage.getBase64Async(previewImage.getMIME());

      res({ STATUS: "success", image, preview });
    });
  });
}
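
As a quick usage sketch (the file path and values here are made up):

const { image, preview } = await resizeImage("./path/to/image.jpg", 600, 80)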

For this post, I’ll be using this image provided by Flickr Commons:

Photo of the Big Boy statue holding a burger.

And here’s what the preview looks like:

Blurry version of the Big Boy statue.

If you’d like to take a closer look, here’s the same preview in a CodeSandbox.

Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.

Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:

import { useState, useRef, useEffect } from "react";
import type { FunctionComponent } from "react";

const Landmark = ({ url, preview = "" }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);

  useEffect(() => {
    // make sure the image src is added after the onload handler is attached
    if (imgRef.current) {
      imgRef.current.src = url;
    }
  }, [url, imgRef, preview]);

  return (
    <>
      <Preview loaded={loaded} preview={preview} />
      {/* the setTimeout delays the swap so the preview stays visible long
          enough to see in a demo; call setLoaded(true) directly to remove
          the artificial delay */}
      <img
        ref={imgRef}
        onLoad={() => setTimeout(() => setLoaded(true), 3000)}
        style={{ display: loaded ? "block" : "none" }}
      />
    </>
  );
};

const Preview: FunctionComponent<LandmarkPreviewProps> = ({ preview, loaded }) => {
  if (loaded) {
    return null;
  } else if (typeof preview === "string") {
    return <img key="landmark-preview" alt="Landmark preview" src={preview} style={{ display: "block" }} />;
  } else {
    return <PreviewCanvas preview={preview} loaded={loaded} />;
  }
};

Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.

Note that we assign the image’s src only after the component mounts and the onLoad handler is attached, so we’re sure the load event fires. We show the preview, and when the real image loads, we swap it in.

Improving things with BlurHash

The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.

BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:

First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.

Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.

The README also states how Signal and Mastodon use BlurHash.

Let’s see it in action.

Generating blurhash previews

For this, we’ll need to use the Sharp library.


Note

To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.

Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.

To run this code in an AWS Lambda, you’ll need to install the library like this:

"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"

And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.


This code will look at the size of the image and create a blurhash preview:

import { encode, isBlurhashValid } from "blurhash";
import sharp from "sharp";

export async function getBlurhashPreview(src) {
  const image = sharp(src);
  const dimensions = await image.metadata();

  return new Promise(res => {
    const { width, height } = dimensions;

    image
      .raw()
      .ensureAlpha() // guarantees 4 bytes (RGBA) per pixel for encode()
      .toBuffer((err, buffer) => {
        // encode the raw pixel data into a blurhash string
        const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);
        if (isBlurhashValid(blurhash)) {
          return res({ blurhash, w: width, h: height });
        } else {
          return res(null);
        }
      });
  });
}

Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for RGB and Alpha.

Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.

Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.

The real work happens here:

const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);

We’re calling blurhash’s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY which, from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.

Let’s see what this produces for that same image:

{
  "blurhash" : "UAA]{ox^0eRiO_bJjdn~9#M_=|oLIUnzxtNG",
  "w" : 276,
  "h" : 400
}

That’s incredibly small! The tradeoff is that using this preview is a bit more involved.

Basically, we need to call blurhash’s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before, and why we rendered it when the type of our preview was not a string: our blurhash previews use an entire object containing not only the preview string but also the image dimensions.

Let’s look at our PreviewCanvas component:

import { useRef, useLayoutEffect } from "react";
import type { FunctionComponent } from "react";
import { decode } from "blurhash";

const PreviewCanvas: FunctionComponent<CanvasPreviewProps> = ({ preview }) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useLayoutEffect(() => {
    // decode the blurhash string back into raw RGBA pixel data
    const pixels = decode(preview.blurhash, preview.w, preview.h);
    // paint those pixels onto the canvas at the image's original size
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};

Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.

Let’s see what the image previews look like. In a sense, they’re less detailed than our previous previews, but I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.

Test and use what works best for you.

Wrapping up

There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.

A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.

Happy coding!



Remix Routes Demystified

Around six months ago, Remix became open source. It offers a lovely developer experience and brings web development closer to the web platform in a refreshing way. It’s a known tale that naming is the hardest thing in programming, but the team nailed this one. Remix draws from the community’s experience, puts the platform and browser behavior in the front seat, and sprinkles in the lessons its authors learned from React-Router, Unpkg, and from teaching React. Like a remixed record, it mixes old needs with novel solutions in order to deliver a flawless experience.

Writing a Remix app is fun; it gets developers scratching their heads about “How did forms actually work before?”, “Can Cache really do that?”, and (my personal favorite) “The docs just pointed me to the Mozilla Developer Network!”

In this article, though, we will dig deeper and look beyond the hype. Let’s peek inside (one of) Remix’s secret sauces and examine the feature that powers much of the framework and fuels many of its conventions: routes. Buckle up!

Anatomy Of A Remix Repository

After pasting npx create-remix@latest into a terminal, following the prompts, and scanning the bare-bones project file tree, a developer will be faced with a structure similar to the one below:

├───/.cache
├───/public
├───/app
│   ├───/routes
│   ├───entry.client.jsx
│   ├───entry.server.jsx
│   └───root.tsx
├───remix.config.js
└───package.json
  • .cache will show up once there’s a build output;
  • public is for static assets;
  • app is where the fun will happen; think of it as /src for now;
  • the files at the root are configuration files for Remix and for npm.

Remix can be deployed to any JavaScript environment (even without Node.js). Depending on which platform you choose, the starter may output more files to make sure it all works as intended.

We have all seen similar repositories on other apps. But things already don’t feel the same with those entry.server.jsx and entry.client.jsx: every route has a client and a server runtime. Remix embraces the server-side from the very beginning, making it a truly isomorphic app.

While entry.client.jsx is pretty much a regular DOM renderer with Remix’s built-in sauce, entry.server.jsx already shows a powerful part of Remix routing: it’s possible to determine an app-wide configuration for headers, responses, and metadata straight from there. The foundation for a multi-page app is set from the very beginning.

Routes Directory

Out of the 3 folders inside /app on the snippet above, routes is definitely the most important. Remix brings a few conventions (one can opt out with some configuration tweaks) that power the developer experience within the framework. The first convention, which has risen to something of a standard among such frameworks, is file-system-based routing. Within the /routes directory, a regular file creates a new route from the root of your app. If one wants myapp.com and myapp.com/about, for example, the following structure can achieve it:

├───/apps
│   ├───/routes
│   │   ├───index.jsx
│   │   └───about.jsx

Inside those files, there are regular React components as the default export, while special Remix methods can be named exports to provide additional functionalities like data-fetching with the loader and action methods or route configuration like the headers and meta methods.

And here is where things start to get interesting: Remix doesn’t separate your data by runtime. There’s no “server data” or “build-time data”. It has a loader for loading data, an action for mutations, and headers and meta for extending or overriding response headers and metatags on a per-route basis.
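
As a minimal sketch (the route name and data here are made up), such a route module could look like this:

import { useLoaderData } from 'remix'

// the loader runs on the server; the default export renders the route
export const loader = async () => {
  return { message: 'Hello from the loader' }
}

// per-route metatags
export const meta = () => ({ title: 'About' })

export default function About() {
  const { message } = useLoaderData()
  return <h1>{message}</h1>
}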

Route Matching And Layouts

Composability is the order of business within the React ecosystem. A componentized program excels when we allow it to wrap one component in another, decorating them and empowering them with each other. With that in mind, the Layout Pattern has surfaced: it consists of creating a component that wraps a route (a.k.a. another component) and decorates it in order to enforce UI consistency or make important data available.

Remix puts the Layout Pattern front and center: it’s possible to define a layout that renders all routes matching its name.

├───/apps
│   ├───/routes
│   │   ├───/posts    // actual posts inside
│   │   └───posts.jsx // this is the layout

The posts.jsx component uses a built-in Remix component (<Outlet />), which works much like the {children} prop React developers are used to. This component takes the content of a file like /posts/my-post.jsx and renders it within the layout. For example:

import { Outlet } from 'remix'

// Navigation and Footer are illustrative components
export default function PostsLayout() {
  return (
    <main>
      <Navigation />
      <article>
        <Outlet />
      </article>
      <Footer />
    </main>
  )
}

But the UI won’t always walk in sync with the URL structure. There is a chance that developers will need to create layouts without nesting routes. Take, for example, the /about page and /: they are often completely different, yet this convention ties the URL structure to the UI’s look and feel. Unless there is an escape hatch.

Skipping Inheritance

When nesting route components like above, they become child components of another component with the same name as their directory, like posts.jsx is the parent component to everything inside /posts through <Outlet />. But eventually, it may be necessary to skip such inheritance while still having the URL segment. For example:

├───/apps
│   ├───/routes
│   │   ├───/posts                       // post
│   │   ├───posts.different-layout.jsx   // post
│   │   └───posts.jsx                    // posts layout

In the example above, posts.different-layout.jsx will be served at /posts/different-layout, but it won’t be a child component of the posts.jsx layout: the dot in the file name acts as a delimiter, creating the URL segment without the nesting.

Dynamic Routes

Creating routes for a complex multi-page app is almost impossible without some dynamic routing shenanigans. Of course, Remix has it covered. It is possible to declare parameters by prefixing them with a $ in the file name, for example:

├───/apps
│   ├───/routes
│   |   └───/users
│   │         └───$userId.jsx

Now, your page component for $userId.jsx can look something like:

import { useParams } from 'remix'

export default function UserRoute() {
  const { userId } = useParams()

  return (
    <ul>
      <li>user: {userId}</li>
    </ul>
  )
}

Also, there’s an additional twist: we can combine this with the dot delimiters mentioned a few sections prior, and we can easily have:

├───/apps
│   ├───/routes
│   |   └───/users
│   |         ├───$userId.edit.jsx
│   │         └───$userId.jsx

Now the following path segment will not only be matched but will also carry the parameter: /users/{{user-id}}/edit. Needless to say, the same structure can be combined to carry additional parameters; for example, $appRegion.$userId.jsx will pass both parameters to your functions and page component: const { appRegion, userId } = useParams().

Catch-all With Splats

Eventually, developers may find themselves in situations where the number of parameters a route receives, or the keys for each, is unclear. For these edge cases, Remix offers a way of catching everything. Splats will match everything that was not matched before by any of their siblings. For example, take this route structure:

├───/apps
│   ├───/routes
│   │   ├───about.jsx
│   │   ├───index.jsx
│   │   └───$.jsx      // Splat Route
  • mydomain.com/about will render about.jsx;
  • mydomain.com will render index.jsx;
  • anything that’s not the root nor /about will render $.jsx.

And Remix will pass a params object to both of its data handling methods (loader and action), and it has a useParams hook (exactly the same one from React-Router) to use such parameters straight on the client side. So, our $.jsx could look something like:

import { useParams } from 'remix'
import type { LoaderFunction, ActionFunction } from 'remix'

export const loader: LoaderFunction = async ({
  params
}) => {
  return (params['*'] || '').split('/')
};

export const action: ActionFunction = async ({
  params
}) => {
  return (params['*'] || '').split('/')
};

export default function SplatRoute() {
  const params = useParams()
  console.log((params['*'] || '').split('/'))

  return (<div>Wow. Much dynamic!</div>)
}

Check the Load data and the Mutating data sections for an in-depth explanation of loader and action methods respectively.

The params['*'] will be a string containing all the matched segments. For example, mydomain.com/this/is/my/route will yield “this/is/my/route”. So, in this case, we can just use .split('/') to turn it into an array like ['this', 'is', 'my', 'route'].

Load Data

Each route is able to specify a method that provides and handles its data on the server right before rendering. This method is the loader function; it must return a serializable piece of data, which can then be accessed in the main component via Remix’s special useLoaderData hook.

import type { LoaderFunction } from 'remix'
import type { ProjectProps } from '~/types'
import { useLoaderData } from 'remix'

export const loader: LoaderFunction = async () => {
  const repositoriesResp = await fetch(
    'https://api.github.com/users/atilafassina/repos'
  )
  return repositoriesResp.json()
}

export default function Projects() {
  const repositoryList: ProjectProps[] = useLoaderData()

  return <div>{repositoryList.length}</div>
}

It’s important to point out that the loader will always run on the server. None of the logic there arrives in the client-side bundle, which means that any dependency used only there will not be sent to the user either. The loader function can run in 2 different scenarios:

  1. Hard navigation:
    When the user navigates via the browser window (arrives directly to that page).
  2. Client-side navigation:
    When the user was in another page in your Remix app and navigates via a <Link /> component to this route.

When hard navigation happens, the loader method runs, provides the renderer with data, and the route is Server-Side Rendered to finally be sent to the user. On the client-side navigation, Remix fires a fetch request by itself and uses the loader function as an API endpoint to fuel fresh data to this route.

Mutating Data

Remix carries multiple ways of firing a data mutation from the client side: the HTML form tag, an extremely configurable <Form /> component, and the useFetcher and useFetchers hooks. Each of them has its own intended use cases, and they are all there to power the concept of an Optimistic UI that made Remix famous. We will park those concepts for now and address them in a future article because all of these methods unfailingly communicate with a single server-side method: the action function.

Action and loader are fundamentally the same kind of method; the only thing that differentiates them is the trigger. Actions are triggered by any non-GET request and run before the loader is called by the re-rendering of the route. So, after a user interaction, the following cascade happens on Remix’s side (a minimal action sketch follows the list):

  1. Client-side triggers Action function,
  2. Action function connects to the data source (database, API, …),
  3. Re-render is triggered, calls Loader function,
  4. Loader function fetches data and feeds Remix rendering,
  5. Response is sent back to the client.
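
To make that cascade concrete, here is a minimal, hypothetical action sketch (saveComment is a stand-in for your own data-source call):

import { redirect } from 'remix'
import type { ActionFunction } from 'remix'

export const action: ActionFunction = async ({ request }) => {
  const formData = await request.formData()
  // hypothetical helper standing in for your database or API write
  await saveComment(formData.get('comment'))
  // returning a redirect (or data) ends the action; the loader then
  // runs again as the route re-renders
  return redirect('/posts')
}
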
Headers And Meta

As previously mentioned, there are other specific methods for each route that aren’t necessarily involved with fetching and handling data. They are responsible for your document headers and metatags.

Exporting a meta function allows the developer to override the metatag values defined in root.jsx and tailor them to that specific route. If a value isn’t changed, it will seamlessly inherit. The same logic applies to the headers function, but with a twist.
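
For illustration, a hypothetical meta override could change a single field and let the rest inherit:

export const meta = () => ({
  // only description is overridden; title and the other values
  // defined in root.jsx are inherited untouched
  description: 'Everything about our routes',
})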

Data is usually what determines how long a page can be cached, so, naturally, the document inherits the headers from its data. If the headers function doesn’t explicitly declare otherwise, the loader’s headers will dictate the headers of your whole document, not only the data. And once declared, the headers function receives both the parent headers and the loader headers as parameters.

import type { HeadersFunction } from 'remix'

export const headers: HeadersFunction = ({ loaderHeaders, parentHeaders }) => ({
  ...parentHeaders,
  ...loaderHeaders,
  "x-magazine": "smashing",
  "Cache-Control": "max-age: 60, stale-while-revalidate=3600",
})

Resource Routes

Resource routes are essentially routes that don’t exist naturally in the website’s navigation pattern. Usually, a resource route does not return a React component. Besides that, they behave exactly like other routes: for GET requests, the loader function runs; for any other request method, the action function returns the response.

Resource routes can be used in a number of cases where you need to return a different file type: a PDF or CSV document, a sitemap, or something else. For example, here we are creating a PDF file and returning it as a resource to the user:

import type { LoaderFunction } from 'remix'

export const loader: LoaderFunction = async () => {
  // somethingToPdf is a stand-in for however you generate the PDF
  const pdf = somethingToPdf()

  return new Response(pdf, {
    headers: {
      'Content-Disposition': 'attachment;',
      'Content-Type': 'application/pdf',
    },
  })
}

Remix makes it straightforward to adjust the response headers, so we can even use Content-Disposition to instruct the browser that this specific resource should be saved to the file system instead of being displayed inline in the browser.

Remix Secret Sauce: Nested Routes

Here is where a multi-page app meets single-page apps. Since Remix’s routing is powered by React-Router, it brings its partial routing capabilities to the architecture. Each route is responsible for its own piece of logic and presentation, and all of this can be declared using the file-system heuristics again. Check this:

├───/apps
│   ├───/routes
│   │   ├───/dashboard
│   │   |    ├───profile.jsx
│   │   |    ├───settings.jsx
│   │   |    └───posts.jsx
│   │   └───dashboard.jsx      // Parent route

And just like we did implicitly with our layout paradigm before, and just like Remix handles the root and /routes relationship, we determine a parent route that renders all of its child routes inside the <Outlet /> component. So, our dashboard.jsx looks something like this:

import { Outlet } from 'remix'

export default function Dashboard () {
  return (
   <div>
     some content that will show at every route
     <Outlet />
   </div>
  )
}

This way, Remix can infer which resources to pre-fetch before the user asks for the page, because nested routes allow the framework to identify the relationships between each route and more intelligently infer what will be needed. Fetching all of your page’s data dependencies in parallel drastically boosts the performance of your app by eliminating those render-and-fetch waterfalls we dread seeing in (too) many web apps today.

So, thanks to Nested Routes, Remix is able to preload data for each URL segment, it knows what the app needs before it renders. On top of that, the only things that actually need re-rendering are the components inside the specific URL segment.

For example, take our app above: once users navigate from /dashboard/profile to /dashboard/settings, the only components re-rendered and data fetched are the ones inside the /settings route. The components and resources matching the /dashboard segment are already there.

So now Remix prevents the browser from re-rendering the entire UI, doing it only for the sections that actually changed. It can also prefetch resources for the next page, so once actual navigation occurs, the transition is instant because the data will be waiting in the browser cache. Remix is able to optimize it all out of the box with fine-grained precision, thanks to nested routes powered by partial routing.

Wrapping Up

Routing is arguably the most important structure of a web app because it dictates the foundation on which every component relates to the others and determines how the whole app will be able to scale going forward. Looking closely at Remix’s decisions for handling routes was a fun and refreshing ride, and this only scratches the surface of what the framework has under its hood. If you want to dig deeper into more resources, be sure to check this amazing interactive guide for Remix routes by Dilum Sanjaya.

Though routes are an extremely powerful feature and a backbone of Remix, these examples are just the beginning. Remix shows its true potential on highly interactive apps, and it brings a very powerful set of features to match: data mutation with forms and special hooks, authentication and cookie management, and more.

How To Make A Drag-and-Drop File Uploader With Vue.js 3

What’s different about the file uploader we’re building in this article versus the previous one? The previous drag-and-drop file uploader was built with Vanilla JS and really focused on how to make file uploading and drag-and-drop file selection work, so its feature set was limited. It uploaded the files immediately after you chose them with a simple progress bar and an image thumbnail preview. You can see all of this at this demo.

In addition to using Vue, we’ll be changing the features up: after an image is added, it will not upload immediately. Instead, a thumbnail preview will show up. There will be a button on the top right of the thumbnail that will remove the file from the list in case you didn’t mean to select an image or change your mind about uploading it.

You’ll then click on the “Upload” button to send the image data to the server and each image will display its upload status. To top it all off, I crafted some snazzy styles (I’m no designer, though, so don’t judge too harshly). We won’t be digging into those styles in this tutorial, but they’ll be available for you to copy or sift through yourself in the GitHub Repository — though, if you’re going to copy them, make sure you set up your project to be able to use Stylus styles (or you can set it up to use Sass and change lang to scss for the style blocks and it will work that way). You can also see what we’re building today on the demo page.

Note: I will assume that readers have strong JavaScript knowledge and a good grasp of the Vue features and APIs, especially Vue 3’s composition API, but not necessarily the best ways to use them. This article is to learn how to create a drag-and-drop uploader in the context of a Vue app while discussing good patterns and practices and will not go deep into how to use Vue itself.

Setup

There are a lot of ways to set up a Vue project: Vue CLI, Vite, Nuxt, and Quasar all have their own project scaffolding tools, and I’m sure there are more. I’m not all that familiar with most of them, and I’m not going to prescribe any one tool as the right one for this project, so I recommend reading the documentation for whichever you choose to figure out how to set it up the way we need for this little project.

We need to be set up with Vue 3 with the script setup syntax, and if you’re snatching my styles from the GitHub repo, you’ll need to make sure you’re set up to have your Vue styles compiled from Stylus (or you can set it up to use Sass and change lang to “scss” for the style blocks and it will work that way).

Drop Zone

Now that we have the project set up, let’s dive into the code. We’ll start with a component that handles the drag-and-drop functionality. This will be a simple wrapper div element with a bunch of event listeners and emitters for the most part. This sort of element is a great candidate for a reusable component (despite it only being used once in this particular project): it has a very specific job to do and that job is generic enough to be used in a lot of different ways/places without the need of a ton of customization options or complexity.

This is one of those things good developers are always keeping an eye out for. Cramming a ton of functionality into a single component would be a bad idea for this project or any other because then 1) it can’t be reused if you find a similar situation later and 2) it’s more difficult to sort through the code and figure out how each piece relates to the others. So, we’re going to do what we can to follow this principle, and it starts here with the DropZone component. We’ll start with a simple version of the component and then spruce it up a bit to help you grok what’s going on more easily, so let’s create a DropZone.vue file in the src/components folder:

<template>
    <div @drop.prevent="onDrop">
        <slot></slot>
    </div>
</template>

<script setup>
import { onMounted, onUnmounted } from 'vue'
const emit = defineEmits(['files-dropped'])

function onDrop(e) {
    emit('files-dropped', [...e.dataTransfer.files])
}

function preventDefaults(e) {
    e.preventDefault()
}

const events = ['dragenter', 'dragover', 'dragleave', 'drop']

onMounted(() => {
    events.forEach((eventName) => {
        document.body.addEventListener(eventName, preventDefaults)
    })
})

onUnmounted(() => {
    events.forEach((eventName) => {
        document.body.removeEventListener(eventName, preventDefaults)
    })
})
</script>

First, looking at the template, you’ll see a div with a drop event handler (with a prevent modifier to prevent default actions) calling a function that we’ll get to in a moment. Inside that div is a slot, so we can reuse this component with custom content inside it. Then we get to the JavaScript code, which is inside a script tag with the setup attribute.

Note: If you’re unfamiliar with what benefits we get from this attribute and you didn’t read the link we added above, head over to the <script setup> documentation for single file components.

Inside the script, we define an event that we’ll emit called ‘files-dropped’, which other components can use to do something with the files that get dropped here. Then we define the function onDrop to handle the drop event. Right now, all it does is emit the event we just defined with an array of the dropped files as the payload. Note that we’re using the spread operator to convert the FileList that e.dataTransfer.files gives us into an array of File objects, so the part of the system that takes the files can call all of the array methods on it.

Finally, we come to the place where we handle the other drag/drop events that happen on the body, preventing the default behavior during the drag and drop (namely, opening one of the files in the browser). We create a function that simply calls preventDefault on the event object. Then, in the onMounted lifecycle hook, we iterate over the list of events and prevent the default behavior for each event on the document body. In the onUnmounted hook, we remove those listeners.

Active State

So, what extra functionality can we add? The one thing I decided to add was some state indicating whether the drop zone was “active”, meaning that a file is currently hovering over the drop zone. That’s simple enough; create a ref called active, set it to true on the events when the files are dragged over the drop zone and false when they leave the zone or are dropped.

We’ll also want to expose this state to the components using DropZone, so we’ll turn our slot into a scoped slot and expose that state there. Instead of the scoped slot (or in addition to it for added flexibility), we could emit an event to inform the outside of the value of active as it changes. The advantage of this is that the entire component that is using DropZone can have access to the state, rather than it being limited to the components/elements within the slot in the template. We’re going to stick with the scoped slot for this article though.

Finally, for good measure, we’ll add a data-active attribute that reflects active's value so we can key off it for styling. You could also use a class if you prefer, but I tend to like data attributes for state modifiers.

Let’s write it out:

<template>
    <!-- add data-active and the event listeners -->
    <div :data-active="active" @dragenter.prevent="setActive" @dragover.prevent="setActive" @dragleave.prevent="setInactive" @drop.prevent="onDrop">
        <!-- share state with the scoped slot -->
        <slot :dropZoneActive="active"></slot>
    </div>
</template>

<script setup>
// make sure to import ref from Vue
import { ref, onMounted, onUnmounted } from 'vue'
const emit = defineEmits(['files-dropped'])

// Create active state and manage it with functions
let active = ref(false)

function setActive() {
    active.value = true
}
function setInactive() {
    active.value = false
}

function onDrop(e) {
    setInactive() // add this line too
    emit('files-dropped', [...e.dataTransfer.files])
}

// ... nothing changed below this
</script>

I threw some comments in the code to note where the changes were, so I won’t dive too deep into it, but I have some notes. We’re using the prevent modifiers on all the event listeners again to make sure that default behavior doesn’t activate. Also, you’ll notice that the setActive and setInactive functions seem like a bit of overkill since you could just set active directly, and you could make that argument for sure, but just wait a bit; there will be another change that truly justifies the creation of functions.

You see, there’s an issue with what we’ve done: with this code, the drop zone can flicker between active and inactive states while you drag something around inside it, because dragenter and dragleave also fire as the cursor passes over the drop zone’s child elements.
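
One way to smooth this out (and the change that truly justifies making setActive and setInactive functions) is to delay the inactive transition. Here is a sketch; the 50ms delay is an arbitrary choice:

let inActiveTimeout = null

function setActive() {
    active.value = true
    clearTimeout(inActiveTimeout)
}
function setInactive() {
    // wait a tick before going inactive so the dragleave fired by a
    // child element doesn't flicker the state
    inActiveTimeout = setTimeout(() => {
        active.value = false
    }, 50)
}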

Nice! Now Let’s make this a bit more accessible for users who can’t (or don’t want to) drag and drop, by adding a hidden file input (that becomes visible when focused via keyboard for those that need it, assuming you’re using my styles) and wrapping a big label around everything to allow us to use it despite its invisibility. Finally, we’ll need to add an event listener to the file input so that when a user selects a file, we can add it to our file list.

Let’s start with the changes to the script section. We’re just going to add a function to the end of it:

function onInputChange(e) {
    // addFiles comes from the file-list manager composition
    addFiles(e.target.files)
    e.target.value = null
}

This function handles the “change” event fired from the input and adds the files from the input to the file list. Note the last line in the function resetting the value of the input. If a user adds a file via the input, decides to remove it from our file list, then changes their mind and decides to use the input to add that file again, then the file input will not fire the “change” event because the file input has not changed. By resetting the value like this, we ensure the event will always be fired.

Now, let’s make our changes to the template. Change all of the code inside the DropZone slot to the following:

<label for="file-input">
    <span v-if="dropZoneActive">
        <span>Drop Them Here</span>
        <span class="smaller">to add them</span>
    </span>
    <span v-else>
        <span>Drag Your Files Here</span>
        <span class="smaller">
            or <strong><em>click here</em></strong> to select files
        </span>
    </span>

    <input type="file" id="file-input" multiple @change="onInputChange" />
</label>

We wrap the entire thing in a label that is linked to the file input, then we add our dynamic messages back in, though I’ve added a bit more messages to inform users they can click to select files. I also added a bit for the “drop them” message so that they have the same number of lines of text so the drop zone won’t change size when active. Finally, we add the file input, set the multiple attribute to allow users to select multiple files at a time, then wire up the “change” event listener to the function we just wrote.

Run the app again (if you stopped it), and we should see the same result in the Vue DevTools whether we drag and drop files or click the box to use the file selector.

Previewing Selected Images

Great, but users aren’t going to be using Vue DevTools to see if the files they dropped are actually added, so let’s start showing the users those files. We’ll start just by editing App.vue (or whatever component file you added the DropZone to) and showing a simple text list with the file names.

Let’s add the following bit of code to the template immediately following the label we just added in the previous step:

<ul v-show="files.length">
    <li v-for="file of files" :key="file.id">{{ file.file.name }}</li>
</ul>

Now, with the app running, if you add some files to the list, you should see a bulleted list of the file names. If you copied my styles, it might look a bit odd, but that’s alright because we’re changing it soon. Make note that thanks to adding the file’s ID in the file list manager, we now have a key in the loop. The only thing that annoys me personally is that since we wrapped the files, we need to write file.file to access the original file object to get its name. In the end, though, it’s a small sacrifice to make.

Now, let’s start showing the images instead of just listing their names, but it’s time to move this functionality out of this main component. We certainly could keep putting the file preview functionality here, but there are two good reasons to pull it out:

  1. The functionality is potentially reusable in other cases.
  2. As this functionality expands, separating it out prevents the main component from getting too bloated.

So, let’s create /src/components/FilePreview.vue to put this functionality in, and we’ll start with just showing the image in a wrapper.

<template>
    <component :is="tag" class="file-preview">
        <img :src="file.url" :alt="file.file.name" :title="file.file.name" />
    </component>
</template>

<script setup>
defineProps({
    file: { type: Object, required: true },
    tag: { type: String, default: 'li' },
})
</script>

Once again, the styles aren’t included here, but you can find them on GitHub. First thing to note about the code we have, though, is that we’re wrapping this in a component tag and setting what type of tag it is with a tag prop. This can be a good way to make a component more generic and reusable. We’re currently using this inside an unordered list, so li is the obvious choice, but if we want to use this component somewhere else at some point, it might not be in a list, so we would want a different tag.

For the image, we’re using the URL created by the file list manager, and we’re using the file name as the alt text and as the title attribute so we get that free functionality of users being able to hover over the image and see the file name as a tooltip. Of course, you can always create your own file preview where the file name is written out where it’s always visible for the user. There’s certainly a lot of freedom in how this can be handled.

Moving on to the JavaScript, we see props defined so we can pass in the file that we’re previewing and a tag name to customize the wrapper in order to make this usable in more situations.

Of course, if you try to run this, it doesn’t seem to do anything because we currently aren’t using the FilePreview components. Let’s remedy that now. In the template, replace the current list with this:

<ul class="image-list" v-show="files.length">
    <FilePreview v-for="file of files" :key="file.id" :file="file" tag="li" />
</ul>

Also, we need to import our new component in the script section:

import FilePreview from './components/FilePreview.vue'

Now if you run this, you’ll see some nice thumbnails of each image you drop or select.

Remove Files From the List

Let’s augment this with the ability to remove a file from the list. We’ll add a button with an “X” in the corner of the image that people can click/tap on to remove the image. To do this, we’ll need to add 2 lines of code to FilePreview.vue. In the template, just above the img tag add the following:

<button @click="$emit('remove', file)" class="close-icon" aria-label="Remove">×</button>

Then add this line somewhere in the script section:

defineEmits(['remove'])

Now, clicking that button will fire a remove event, passing the file along as the payload. Now we need to head back to the main app component to handle that event. All we need to do is to add the event listener to the FilePreview tag:

<FilePreview v-for="file of files" :key="file.id" :file="file" tag="li" @remove="removeFile" />

Thanks to removeFile already being defined by the file list manager and taking the same arguments that we’re passing from the event, we’re done in seconds. Now if you run the app and select some images, you can click on the little “X” and the corresponding image will disappear from the list.

Possible Improvements

As usual, there are improvements that could be made to this if you’re so inclined, mostly ones that make the component more generic and customizable so your application can reuse it elsewhere.

First of all, you could manage the styles better. I know that I didn’t post the styles here, but if you copied them from GitHub and you’re a person who cares a lot about which components control which styles, then you may be thinking it’d be wiser to move some of the specific styles out of this component. As with most of these possible improvements, this is mostly to do with making the component more useful in more situations. Some of the styles are very specific to how I wanted to display the previews for this one little app, so to make it more reusable, we either need to make the styles customizable via props or pull them out and let an outer component define them.

Another possible change would be to add props that allow you to hide certain elements such as the button that fires the “remove” event. There are more elements coming later in the article that might be good to hide via props as well.

And finally, it might be wise to separate the file prop into multiple props such as url, name, and (as we’ll see later) status. This would allow the component to be used in situations where you just have an image URL and name rather than an UploadableFile instance, making it useful in even more places.

Uploading Files

Alright, we have the drag and drop and a preview of the files selected, so now we need to upload those files and keep the user informed of the status of those uploads. We’ll start by creating a new file, src/compositions/file-uploader.js. In this file, we’ll export some functions that allow our component to upload the files.

export async function uploadFile(file, url) {
    // set up the request data
    let formData = new FormData()
    formData.append('file', file.file)

    // track status and upload file
    file.status = 'loading'
    let response = await fetch(url, { method: 'POST', body: formData })

    // change status to indicate the success of the upload request
    file.status = response.ok

    return response
}

export function uploadFiles(files, url) {
    return Promise.all(files.map((file) => uploadFile(file, url)))
}

export default function createUploader(url) {
    return {
        uploadFile: function (file) {
            return uploadFile(file, url)
        },
        uploadFiles: function (files) {
            return uploadFiles(files, url)
        },
    }
}

Before looking into specific functions, note that every function in this file is exported separately so it can be used on its own, but you’ll see that we’ll only be using one of them in our application. This gives some flexibility in how this module is used without actually making the code any more complicated since all we do is add an export statement to enable it.

Now, starting at the top, we have an asynchronous function for uploading a single file. This is constructed in a very similar manner to how it was done in the previous article, but we are using an async function instead (for that wonderful await keyword) and we’re updating the status property on the provided file to keep track of the upload’s progress. This status can have 4 possible values:

  • null: initial value; indicates that it has not started uploading;
  • "loading": indicates that the upload is in progress;
  • true: indicates the upload was successful;
  • false: indicates the upload failed.

So, when we start the upload, we mark the status as "loading". Once it’s finished, we mark it as true or false depending on the response’s ok property. Soon we’ll be using these values to show different messages in the FilePreview component. Finally, we return the response so the caller can use that information if needed.

Note: Depending on which service you upload your files to, you may need some additional headers for authorization or something, but you can get those from the documentation for those services since I can’t write an example for every service out there.

The next function, uploadFiles, is there to allow you to easily upload an array of files. The final function, createUploader, is a function that grants you the ability to use the other functions without having to specify the URL that you’re uploading to every time you call it. It “caches” the URL via a closure and returns versions of each of the two previous functions that don’t require the URL parameter to be passed in.

Using the Uploader

Now that we have these functions defined, we need to use them, so go back to our main app component. Somewhere in the script section, we’ll need to add the following two lines:

import createUploader from './compositions/file-uploader'
const { uploadFiles } = createUploader('YOUR URL HERE')

Of course, you’ll need to change the URL to match whatever your upload server uses. Now we just need to call uploadFiles from somewhere, so let’s add a button that calls it in its click handler. Add the following at the end of the template:

<button @click.prevent="uploadFiles(files)" class="upload-button">Upload</button>

There you go. Now if you run the app, add some images, and smash that button, they should be headed for the server. But… we can’t tell if it worked or not — at least not without checking the server or the network panel in the dev tools. Let’s fix that.

Showing The Status

Open up FilePreview.vue. In the template, after the img tag but still within the component tag, let’s add the following:

<span class="status-indicator loading-indicator" v-show="file.status == 'loading'">In Progress</span>
<span class="status-indicator success-indicator" v-show="file.status == true">Uploaded</span>
<span class="status-indicator failure-indicator" v-show="file.status == false">Error</span>

All the styles are already included to control how these look if you copied the styles from GitHub earlier. These all sit in the bottom right corner of the images displaying the current status. Only one of them is shown at a time based on file.status.

I used v-show here, but it also makes a lot of sense to use v-if, so you can use either one. By using v-show, it always has the elements in the DOM but hides them. This means we can inspect the elements and cause them to show up even if they aren’t in the correct state, so we can test if they look right without trying to do it by putting the app into a certain state. Alternatively, you could go into the Vue DevTools, make sure you’re in the “Inspector” screen, click the three dots menu button in the top right and toggle “Editable props” to true, then edit the props or state in the component(s) to bring about the states needed to test each indicator.

Note: Just be aware that once you edit the file state/prop, it is no longer the same object as the one that was passed in, so clicking the button to remove the image will not work (can’t remove a file that isn’t in the array) and clicking “Upload” won’t show any state changes for that image (because the one in the array that is being uploaded isn’t the same file object as the one being displayed by the preview).

Possible Improvements

As with other parts of this app, there are a few things we could do to make this better, but that we won’t actually be changing. First of all, the status values are pretty ambiguous. It would be a good idea to implement the values as constants or an enum (TypeScript supports enums). This would ensure that you don’t misspell a value such as “loading” or try to set the status to “error” instead of false and run into a bug. The status could also be implemented as a state machine since there is a very defined set of rules for how the state changes.
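
As a sketch, named constants (keeping the same underlying values the uploader already uses; the names here are made up) could look like:

export const UploadStatus = Object.freeze({
    IDLE: null,
    LOADING: 'loading',
    SUCCESS: true,
    FAILURE: false,
})

// e.g. file.status = UploadStatus.LOADING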

In addition to better statuses, there should be better error handling. We inform the users that there was an issue with the upload, but they have no idea what the error is. Is it a problem with their internet? Was the file too big? Is the server down? Who knows? Users need to know what the problem is so they know what they can do about it — if anything.

We could also keep the users better apprised of the upload. By using XHR instead of fetch (which I discussed in the previous drag-and-drop uploader article), we can track “progress” events to know the percentage of the upload that was completed, which is very useful for large files and slow internet connections because it can prove to the user that progress is actually being made and that it didn’t get stuck.
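
A minimal sketch of that XHR approach (url, formData, and onProgress are stand-ins for your own values) might look like:

function uploadWithProgress(url, formData, onProgress) {
    return new Promise((resolve, reject) => {
        const xhr = new XMLHttpRequest()
        // fires periodically during the upload with loaded/total bytes
        xhr.upload.addEventListener('progress', (e) => {
            if (e.lengthComputable) onProgress(e.loaded / e.total)
        })
        xhr.addEventListener('load', () => resolve(xhr))
        xhr.addEventListener('error', reject)
        xhr.open('POST', url)
        xhr.send(formData)
    })
}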

The one change that can increase the reusability of the code is opening up the file uploader to additional options (such as request headers) to be able to be passed in. In addition, we could check the status of a file to prevent us from uploading a file that’s already in progress or is already uploaded. To further help with this, we could disable the “Upload” button during the upload, and it should probably also be disabled when there are no files selected.
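
That last idea could look something like this sketch, where uploading and onUpload are hypothetical additions around the existing uploadFiles call:

<button :disabled="!files.length || uploading" @click.prevent="onUpload" class="upload-button">Upload</button>

// hypothetical wrapper that tracks an `uploading` flag
const uploading = ref(false)

async function onUpload() {
    uploading.value = true
    try {
        await uploadFiles(files)
    } finally {
        uploading.value = false
    }
}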

And last, but most certainly not least, we should add some accessibility improvements. In particular, when adding files, removing them, and uploading them (with all those status changes), we should audibly inform screen reader users that things have changed using Live Regions. I’m no expert on this, and they fall a bit outside the scope of this article, so I will not be going into any kind of detail, but it’s definitely something everyone should look into.

Job’s Done

Well, that’s it. The Vue Drag-and-Drop Image Uploader is done! As mentioned at the beginning, you can see the finished product here and look at the final code in the GitHub Repository.

I hope you spend some time trying to implement the possible improvements that I’ve laid out in the previous sections to help you deepen your understanding of this app and keep sharpening your skills by thinking things through on your own. Do you have any other improvements that could be made to this uploader? Leave some suggestions in the comments and if you implemented any of the suggestions from above, you can share your work in the comments, too.

God bless and happy coding!