#275: Measuring Things

Show Description

Chris and Marie talk about how looking at the numbers behind changes can help guide and improve decisions we make at CodePen, including tools like Redash and Appcues, and how we use data to help sell PRO subscriptions.

Time Jumps

  • 01:33 Being a numbers person
  • 02:55 Using Redash
  • 06:07 Finding out if changes work
  • 11:09 Sponsor: Netlify
  • 13:18 Nudging users in the right direction
  • 26:09 Other types of analytics

Sponsor: Netlify

Netlify is the fastest way to build the fastest sites, Jamstack style. Netlify is a web host that hosts your static files, but for not-so-static sites. The static file hosting is just so that the site can be served over a global CDN as quickly and securely as is possible to do. Any kind of functionality is possible through things like built-in form processing, auth, and cloud functions, and the developer experience is second to none.


WordCamp Attendance Badges Could Be a Good Thing, but That’s the Wrong Discussion

WordPress profile badges.

On July 3, Timi Wahalahti opened a discussion on the Community WordPress blog on whether WordCamp volunteers, WordCamp attendees, or Meetup attendees should be awarded a WordPress.org profile badge. The discussion stemmed from a nearly two-year-old Meta ticket that was recently resurfaced.

The general consensus from the comments on the post seems to be that volunteers should receive badges because they are making direct contributions to the community. Most argue that merely attending an event is not badge-worthy. There are also some technical concerns, but those should not be a real issue considering we are a community of programmers and problem solvers.

I see the rationale behind not giving badges to attendees. In one way, it feels like it diminishes the badges that others have earned, quite often, through hours of valuable time freely given back to the project.

I am taking a wild guess here and will say that most people would agree that direct, measurable contributions should be rewarded. Whether it is contributing a patch to core, reviewing code as part of the Themes Team, or handing out sandwiches at your local WordCamp lunch line, you should be recognized for giving back to the community.

WordCamp attendance badges would become the participation trophies of the WordPress world.

I get the argument. I do. When I first read the community post, my gut reaction was to make that same argument.

In some parts of American culture, at least, participation trophies are often looked upon as something to be ashamed of — if you don’t earn MVP, it’s not a real trophy. I have seen the culture change, seemingly overnight, in my local community. Fathers will not allow their sons to accept a trophy for merely being on the football team (anyone deserves a trophy for making it through training camp in Alabama’s sweltering August heat). I watch as community members — grown adults — tear down others’ kids on Facebook over the same idea.

The discussion on WordCamp attendance badges feels much the same. However, the argument is valid only because that is how the system is set up. It was created to award based on merit. The awards go to those who put in the time and effort, typically over the long haul.

On the surface, that feels like a good system. However, other systems have benefits that perhaps our community has been overlooking, particularly those that gamify participation. Currently, WordPress profile badges are not being utilized to their full potential. The missing piece is that we are not encouraging more participation. We are not helping the first-time user level up and earn more badges/awards.

NaNoWriMo writing and personal achievement badges.

In 2018, I successfully completed National Novel Writing Month (NaNoWriMo). It is an event where thousands of people go through the insane process of writing a 50,000-word novel in 30 days. One of the things that pushed me through the month, aside from sheer willpower and encouragement from family and friends, was the encouragement from the NaNoWriMo website itself.

The website has two categories of badges. The first category is its writing badges. These badges are awarded based on actually doing work. They are also awarded in stages. Write for a two-day streak. Earn a badge. Surpass 5,000 words. Earn a badge. Finish the month-long challenge. Earn a badge. Throughout the process of NaNoWriMo, earning these writing badges was a big motivator toward keeping the dream of writing a novel alive. If I wasn’t motivated to write on a particular day, I could look at the next badge I would earn by just putting pen to paper for another half hour or so.

The thing about these writing badges that was so important was not that they gave me any bragging rights. The badges were not for showing other people how awesome I was. They were deeply personal. They were things that helped motivate me to continue on. OK, I did brag about them a little bit.

At the end of the day, these achievement-based badges were not about other people. They made me feel good about myself, and that is what mattered.

NaNoWriMo’s second category was for personal badges. They were not awarded for any achievement. Every user on the site could pick and choose the badges they wanted. They were reflections of the person and told others a little something about you.

One of my favorite badges was the “pantser” badge. It let people in the NaNoWriMo community know that I was writing without a novel outline or any real plan — literally by the seat of my pants. Others would choose the “planner” or even the combo “plantser” badge. And, the site had several other badges that simply added to the fun.

We do not have to think about badges as something that must be awarded based on hard work. Sure, we should have those “gold level” badges that are earned through direct contributions and being on a particular team. Joining the Documentation Team or submitting a plugin to the official plugin directory is a big deal. Those achievements should be shown on your profile. However, they are not the only achievements that matter.

Remember that badges are sometimes personal. Being awarded for even the smallest of things can help build the confidence that some people need to do that second small thing.

Simple badges for asking or answering your first support forum question could be a great motivator to become more involved. Attending a WordCamp for the first time? Get a badge. That might help motivate you to earn the five-time WordCamp attendee badge next.

I would even love to see badges for individual WordCamps. How cool would it be for someone to earn a badge for attending a WordCamp in every corner of the world? Or just on one continent?

There is so much lost potential with the current badge system. We are having the wrong discussion. Whether someone should earn a badge for attending a WordCamp is too narrow of a focus. Let’s start looking at how we can gamify participation in the WordPress community and use that system to get more people involved.

If we maintain the current system of giving badges only for contributions and teams, yeah, WordCamp volunteers should get those. Attendees have done nothing to earn a badge in that system. That seems like an easy call to make and not worth much discussion. But, since we are here, let’s rethink this whole thing.

Compare The Best Field Service Management Software

Want to jump straight to the answer? The best field service management software for most people is Jobber or Housecall Pro.

When your employees are out in the field, you need a way to track their activities, keep them organized, and ensure their safety. Field service management software allows you to track your employees and technicians when they’re out working—while also processing service orders and keeping your customers happy.

The Top 6 Field Service Management Software Options

  • Jobber – Best field service management software for mid-sized companies
  • Housecall Pro – Best field service management software for large-sized companies
  • Field Service Lightning by Salesforce – Best customizable field service management software
  • Service Fusion – Most easy-to-use field service management software
  • Verizon Connect – Best field service management software for small-sized companies
  • ServiceTitan – Best field service management software for HVAC, electrical, and plumbing businesses

How to choose the best field service management software: Quicksprout.com's methodology for reviewing field service management software.

To see our full reviews and why each of these FSM software options made the list, keep reading.

Jobber – Best for mid-sized teams

  • Serves 50+ industries
  • Book thru website or Facebook
  • Intuitive mobile app
  • One-click route optimization
Starts at $69/month

Serving 50+ industries including landscaping, cleaning, tree care, HVAC, and plumbing, Jobber is a great FSM solution for teams running a residential service business.

With Jobber, your customers will be able to book your services online via your website or even your Facebook page. Every time you get a booking request, you’ll receive a notification on your phone via the Jobber app.

Its automated features, such as batch invoicing and fantastic one-click route optimization, mean that you’ll waste less time on the day-to-day minutiae of running your business and spend more time attending to your customers’ needs.

Jobber employee calendar and schedule example.

Pricing for Jobber is as follows:

Core: $69 monthly or $49 per month billed annually

  • 1 user
  • Product support
  • CRM
  • Quoting and invoicing
  • Consumer financing
  • Tip collection
  • Instant payouts
  • Jobber app marketplace
  • Scheduling with notes/attachments
  • Mobile app (Android, iOS)
  • Credit card processing
  • Reporting
  • Client hub

Connect: $169 monthly or $149 per month billed annually

  • Up to 5 users
  • Everything in the Core plan
  • Client notification
  • Automated invoice follow-ups
  • Routing
  • Live GPS tracking
  • Time and expense tracking
  • Job forms
  • Quickbooks sync
  • Online booking
  • Custom fields

Grow: $349 monthly or $249 per month billed annually

  • Up to 15 users
  • Everything in the Connect plan
  • Two-way text messaging
  • Lead management
  • Referrals
  • Markups
  • Line-item images
  • Email and postcard marketing
  • Facebook and Instagram ads
  • Task automation
  • Automated quote and invoice follow-ups
  • Optional add-ons

Housecall Pro – Best field service management for large-sized businesses

  • Used by 15,000+ businesses
  • Automatic form filling
  • GPS tracking
  • Drag-and-drop scheduling
Start a 14-day free trial

Housecall Pro is a very powerful FSM software leveraged by 15,000+ home service businesses.

Their software is jam-packed with any and all features you might need to keep your customers happy and your business streamlined. Their drag-and-drop scheduling tool allows you to assign jobs to your employees and edit them at your discretion.

It also performs a lot of tasks automatically for you. So dispatchers and managers don’t have to worry about filling out a ton of digital forms before and after service.

Housecall Pro messaging example.

As far as drawbacks go, Housecall Pro can get a little bit expensive, with prices starting at $49/month for just one user. There’s also the little matter of needing to pay for certain features, which can add up in the long run.

Luckily, they do offer a 14-day free trial so you can get your toes wet before jumping into the pool.

Pricing for Housecall Pro is as follows:

Start:

  • $49 / month
  • 1 user (additional users are $30/month each)
  • Review generation
  • Online booking
  • Drag & drop scheduling
  • Invoicing
  • Mobile app (Android, iOS)
  • Text messages to customers
  • Mobile payment processing
  • Google Calendar integration

Grow:

  • $109 / month
  • 1 – 5 users (additional users are $30/month each)
  • Everything in the Start plan
  • Postcard and email marketing
  • Zapier integration
  • Website chat bubble
  • In-app employee chat
  • Employee GPS tracking
  • Time tracking
  • Customized SMS number
  • Company expense card
  • Quickbooks integration

Manage:

  • $199 / month
  • 1 – 9 users (additional users are $30/month each)
  • Everything in the Grow plan

XL:

  • Pricing based on your business needs
  • 1 – 100+ users (additional users are $30/month each)
  • Everything in the Manage plan
  • Sales proposal tool
  • Recurring service plans
  • Website builder
  • API
  • Advanced reporting
  • Dedicated account manager
  • Escalated phone support

Field Service Lightning by Salesforce – Best customizable field service management software

  • Customizable mobile app
  • GPS tracking
  • Mobile messaging
  • Push notifications
Try it today

Billed as the “world’s #1 platform for service,” Field Service Lightning certainly lives up to its reputation.

Though Salesforce is typically known for their CRM platform, their FSM software allows businesses to handle any issues that might happen in the field easily and seamlessly.

Employees and technicians will be able to receive push notifications from a customizable mobile app and also view all their jobs for a given day. No more wondering what needs to get done and when. The app takes care of it for you.

Employees will also be able to communicate with each other, as well as with dispatch, through the app for easy collaboration.

Field Service Lightning by Salesforce status of job example.

Pricing details are as follows:

Contractor:

  • $50 per user / month
  • Work orders
  • Appointment booking
  • Asset, inventory, and product tracking
  • Mobile app (Android, iOS)

Contractor Plus:

  • $75 per user / month
  • Work orders
  • Appointment booking
  • Asset, inventory, and product tracking
  • Mobile app (Android, iOS)
  • Opportunities and quotes
  • Dispatcher console

Technician:

  • $150 per user / month
  • Work orders
  • Appointment booking
  • Asset, inventory, and product tracking
  • Opportunities and quotes
  • Mobile app (Android, iOS)

Dispatcher:

  • $150 per user / month
  • Work orders
  • Appointment booking
  • Asset, inventory, and product tracking
  • Opportunities and quotes
  • Optimization
  • Dispatcher console

Service Fusion – Most easy-to-use field service management software

  • All-in-one platform
  • Quickbooks integration
  • Mobile messaging
  • Estimated invoices
Starts at $99/month

Service Fusion is an intuitive field service management software that has enough features to satisfy the needs of any business.

With its all-in-one platform, their software helps you do everything from managing your customers and sending invoices to creating estimates with pre-populated service line items and dispatching technicians out into the field quickly and easily. You’ll also be able to communicate with clients and employees via text message, so you can let them know when a technician is on their way or when they have a job scheduled.

If your company was using QuickBooks before, you’re in luck. Service Fusion integrates seamlessly with the invoicing platform and allows you to sync with it in just a single click.

Service Fusion integration with QuickBooks multiple devices example.

Pricing details are as follows:

Starter:

  • $99 / month
  • Customer management
  • Estimates for jobs
  • Scheduling and dispatching
  • Invoicing
  • Payment processing
  • Reporting
  • Text message alerts

Plus:

  • $199 / month
  • Everything in the Starter plan
  • Job photo uploads
  • Inventory management
  • Job costing

Pro:

  • $349 / month
  • Everything in the Plus plan
  • Open API integration
  • Custom documents
  • eSign documents
  • Customer web portal
  • Customer-facing mobile app

Verizon Connect – Best for small teams

  • Great for small businesses
  • Track technicians in real time
  • Free trial on dashcams
  • Mobile messaging
Try it today!

Verizon Connect brings the reliability you’d expect from the US’s most popular mobile phone service to field service management software.

With Verizon Connect, you’ll be able to track technicians out in the field in real time and even see how fast they are traveling.

Verizon Connect track technicians in the field real time example.

In typical Verizon fashion, they also offer three-month trial deals on tools such as dash cams and asset tracking.

Getting your technicians to your customers is a breeze too. With just one click, you’ll be able to message your employee directly via the Verizon Connect app on their mobile device, complete with the job assignment and the client’s location.

Pricing is obscured for the service though, so you’ll have to fill out their online survey to get a quote.

ServiceTitan – Best for HVAC, electrical, and plumbing businesses

  • Intuitive mobile app
  • All-in-one platform
  • End-to-end customer management
  • Great for HVAC, electrical, and plumbing
Get a quote today

ServiceTitan is a great field service management software that equips technicians with all they need to get to customers quickly and do what they need to do.

Their all-in-one platform allows you to take care of your client’s needs from start to finish. Whether you’re booking a one-time service call, creating a recurring one, or dispatching your technicians, ServiceTitan’s platform helps you get it done.

Their mobile app is very intuitive and easy to use for employees out in the field, while their drag-and-drop dashboard makes dispatchers’ lives easier. You’ll be able to arm anyone with all of the customer’s information to help them do the best job they can.

ServiceTitan is built with HVAC, electrical, and plumbing businesses in mind, but they also help other industries, such as chimney sweeping and water treatment.

ServiceTitan dashboard and company metrics example.

One downside to ServiceTitan is that its pricing is obscured. It’s not listed on the website, and if you want a quote, you’re going to have to contact them to request a demo. For that, you’ll need to provide your email, name, phone number, and company name.

What is a field service management software?

A field service management software (aka FSM software) is a piece of software that allows you to oversee your employees as they work out in the field.

With it, you’ll be able to remotely keep track of different operations and services as they happen. It provides a clear line of communication between you and whoever is out working. This can help ensure your employees’ safety as well as the ability to address any issues that come up.

A good FSM software can help keep your customers happy too as you can also respond to different types of service requests and dispatch technicians as needed.

Most FSM software also includes a method of scheduling and tracking your employees’ time as they work. The very best ones can furnish reporting and analytics of all the work done out on the field.

How to find a field service management software for you

When looking for a good field service management software, we considered a number of different criteria. Ultimately, it came down to five very important factors that determined our choices. They might differ for you, but we think that any good FSM software will have these features:

Mobile app

Technicians need to be able to communicate with dispatch, managers, and fellow techs while out in the field. As such, a good field service management software will come with a solid mobile app available for Android or iOS.

It’s not enough to simply include an app, though. The app needs to have a user-friendly interface while still including all the features you need to keep your employees safe and your customers happy. These features might include GPS tracking, messaging, and schedule viewing.

Quick dispatching

The ability to coordinate, communicate with, and dispatch technicians to the field is an indispensable feature of a good field service management software. This ensures that your employees are able to address your customers’ needs in a timely and efficient manner.

A good dispatching tool will have an easy-to-use interface that coordinates well with your employees’ schedules — which brings us to …

Intuitive scheduling

A big role of a field service management software is its scheduling capabilities. Gone are the days of clocking in with a time card or writing down on a calendar when you’re going to be in and out of the office. Everything is digitized now. That means your scheduling tools need to be digital too.

To that end, you need to be able to take a look at a calendar view of your employees’ schedule. At a glance, you should know who will be available to work and when they will be in. Only then can things like dispatching work smoothly.

Customer orders

FSM software is there to facilitate the relationship between the customer and the technician. That’s why you need good software that allows you to go from customer request, to servicing, to invoicing, to payment processing easily and seamlessly.

The order should be available to the technician throughout the service time too. That way they can take notes, photos, videos, and whatever else to record how their work was completed. This creates a great system that allows you to address any blockers as a manager — and also address any issues with the customer if there’s a dispute.

The Top Field Service Management Software in Summary

Field service management software is the modern way to track and manage your workers in the field. From HVAC to electrical, plumbing, and contracting, the best field service management tools have you covered.

In addition to monitoring your employees, field service management software doubles as a tool for quotes, invoicing, scheduling, and dispatching.

WordProof Wins €1 Million Grant to Advance Blockchain Timestamping Concept

WordProof, the company behind the WordProof Timestamp plugin for WordPress, has received a €1 million grant from the European Commission as the reward for winning a competition called “Blockchains for Social Good.” The Dutch startup beat 175 other participants from around Europe.

The competition was designed to reward developers’ efforts in exploring decentralized applications of blockchains for social innovation. WordProof was one of five finalists selected to receive €1 million, after submitting its Timestamp Ecosystem concept, which seeks to increase transparency and accountability by proving authenticity of content on the web. In addition to its WordPress plugin, the timestamping ecosystem aims to provide solutions for other content management platforms, e-commerce, and social media.

WordProof founder Sebastiaan van der Lans said the grant is evidence of the company gaining traction with governments and universities.

“With the recognition and financial support from Europe, we can roll out the Timestamp Ecosystem at a higher pace and make WordProof grow even faster as a company,” Van der Lans said. “This will enable Europe to define the standard for a reliable Internet for consumers and organisations.”

Van der Lans said WordProof is still very much “a WordPress-focused company” and plans to use the funds to extend its timestamping plugin to work with WooCommerce. They also plan to begin working with major publishers and WooCommerce shops to integrate timestamping solutions. The company began working with Yoast two months ago on deeply integrating with Schema.org to provide structured data for SEO.

In the coming weeks, Van der Lans said, the company plans to announce “a significant investment from the WordPress space.” WordProof is currently focused on advocacy with the European Commission to make timestamping an open source standard that would be independent from the control of any single company.

Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
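To make that concrete, here's a sketch of what calling such a REST API from JavaScript might look like (the api.example.com URLs are the placeholders from the example above, not a real service):

// Fetch the whole list of books, then a single book (hypothetical endpoints)
fetch("https://api.example.com/rest/books")
  .then((res) => res.json())
  .then((books) => console.log(books));

fetch("https://api.example.com/rest/books/123")
  .then((res) => res.json())
  .then((book) => console.log(book.title));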

With that REST baseline in mind, let’s look at the two concepts we’ll be using here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that, for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph, and that graph can help a server do a lot of clever work formatting and sending (or updating) data, saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a query that looks a lot like a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123
      "title" : "The Greatest CSS Tricks Vol. I"
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: NodeJS and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our app in a fully featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.
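For contrast, here’s a minimal sketch of the single-purpose style Paul Johnston recommends: one Netlify function, no Express. The file name functions/hello.js is hypothetical; the handler signature is the standard one Netlify functions use.

// functions/hello.js — served at /.netlify/functions/hello
exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello World" }),
  };
};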

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, and then running npm init in it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI, which provides the Netlify Dev tooling, as a global dependency:

npm i -g netlify-cli

File structure

There are a few files required for our API to work correctly. The first is netlify.toml, which should be created in the project’s root directory. This is a configuration file that tells Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command, and where our serverless functions are located:

[build]
  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express takes only a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming into our API. It’s called a “root” query because it’s accessed at the root of our API — say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.

Putting it together, here’s the schema with the message field filled in:
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object, and our schema has only a single message field so far. The message we want to respond with is a string, so we specify its type as GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World”, but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);
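As an aside, that resolve function is where the books example from earlier would come to life in a bigger application. Here’s a rough sketch — not part of our hello-world build — assuming a hypothetical in-memory books array standing in for a database (GraphQLSchema, GraphQLObjectType, and GraphQLString come from the require we added earlier):

const { GraphQLList, GraphQLInt } = require("graphql");

// Hypothetical data source; a real app would query a database in resolve
const books = [
  { id: 123, title: "The Greatest CSS Tricks Vol. I", author: "Chris Coyier" },
];

const BookType = new GraphQLObjectType({
  name: "Book",
  fields: () => ({
    id: { type: GraphQLInt },
    title: { type: GraphQLString },
    author: { type: GraphQLString },
  }),
});

const booksSchema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "Query",
    fields: () => ({
      books: {
        type: new GraphQLList(BookType),
        resolve: () => books, // swap in a database call here
      },
    }),
  }),
});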

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{
  "data": {
    "message": "Hello World"
  }
}

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember, and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
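With the redirect in place, querying the deployed API is as short as this sketch (run from a page on the same site, so the relative /api path resolves):

fetch("/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ message }" }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.message)); // "Hello World"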

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource for going one step deeper into GraphQL. However, it’s also focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.


Understanding Plugin Development In Gatsby


Aleem Isiaka

Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation.

In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms of Gatsby plugins, and, finally, create a comment plugin that can be used on any Gatsby website, which we will install by the end of the tutorial.

What Is A Gatsby Plugin?

Gatsby, as a static-site generator, has limits on what it can do. Plugins are means to extend Gatsby with any feature not provided out of the box. We can achieve tasks like creating a manifest.json file for a progressive web app (PWA), embedding tweets on a page, logging page views, and much more on a Gatsby website using plugins.

Types Of Gatsby Plugins

There are two types of Gatsby plugins: local and external. Local plugins are developed in a Gatsby project directory, under the /plugins directory. External plugins are those available through npm or Yarn; they may also live on the same computer and be linked to a Gatsby website project using the yarn link or npm link command.
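Either way, a plugin is enabled by adding it to the plugins array of the site’s gatsby-config.js. Here’s a minimal sketch for the plugin we’re about to build, assuming the limit and website options it will support (shown later) and a placeholder site URL:

// gatsby-config.js of the consuming site
module.exports = {
  plugins: [
    {
      resolve: "gatsby-source-comment-server",
      options: {
        website: "https://example.com", // placeholder: use your site's URL
        limit: 1000,
      },
    },
  ],
};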

Forms Of Gatsby Plugins

Plugins also exist in three primary forms, defined by their use cases:

  • Source plugins (gatsby-source-*), which pull data into Gatsby’s data layer; the comment plugin we will build is one of these.
  • Transformer plugins (gatsby-transformer-*), which convert data from one format (such as Markdown or CSV) into a format Gatsby can query.
  • General plugins (gatsby-plugin-*), which cover everything else, such as adding analytics or generating a manifest.

Components Of A Gatsby Plugin

To create a Gatsby plugin, we have to define some files:

  • gatsby-node.js
    Makes it possible to listen to the build processes of Gatsby.
  • gatsby-config.js
    Mainly used for configuration and setup.
  • gatsby-browser.js
    Allows plugins to run code during one of Gatsby’s processes in the browser.
  • gatsby-ssr.js
    Customizes and adds functionality to the server-side rendering (SSR) process.

These files are referred to as API files in Gatsby’s documentation and should live in the root of a plugin’s directory, either local or external.

Not all of these files are required to create a Gatsby plugin. In our case, we will be implementing only the gatsby-node.js and gatsby-browser.js API files.
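Putting that together, the plugin we’ll build might be laid out like this sketch:

gatsby-source-comment-server/
├── gatsby-node.js     (node sourcing and build hooks, implemented below)
├── gatsby-browser.js  (browser runtime hooks, implemented later)
└── package.json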

Building A Comment Plugin For Gatsby

To learn how to develop a Gatsby plugin, we will create a comment plugin that is installable on any blog that runs on Gatsby. The full code for the plugin is on GitHub.

Serving and Loading Comments

To serve comments on a website, we have to provide a server that allows for the saving and loading of comments. We will use an already available comment server at gatsbyjs-comment-server.herokuapp.com for this purpose.

The server supports a GET /comments request for loading comments and a POST /comments request for saving them (see the sketch after this list). The body of the POST /comments request accepts the following fields:

  • content: [string]
    The comment itself,
  • author: [string]
    The name of the comment’s author,
  • website
    The website that the comment is being posted from,
  • slug
    The slug for the page that the comment is meant for.
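Here’s a sketch of what saving a comment could look like, assuming the server accepts a JSON body with those fields (all values below are placeholders):

fetch("https://gatsbyjs-comment-server.herokuapp.com/comments", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    content: "Great post!",
    author: "Jane Doe",
    website: "https://example.com",
    slug: "/my-first-post/",
  }),
});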

Integrating the Server With Gatsby Using API Files

Much like we do when creating a Gatsby blog, to create an external plugin, we should start with plugin boilerplate.

Initializing the folder

In the command-line interface (CLI), from any directory that is convenient for you, run the following command:

gatsby new gatsby-source-comment-server https://github.com/Gatsbyjs/gatsby-starter-plugin

Then, change into the plugin directory, and open it in a code editor.

Installing axios for Network Requests

To begin, we will install the axios package to make web requests to the comments server:

npm install axios --save
// or
yarn add axios

Adding a New Node Type

Before pulling comments from the comments server, we need to define a new node type that the comments would extend. For this, in the plugin folder, our gatsby-node.js file should contain the code below:

exports.sourceNodes = async ({ actions }) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      content: String
      website: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);
};

First, we pulled actions from the APIs provided by Gatsby and then pulled out the createTypes action. After that, we defined a CommentServer type that implements the Node interface, and we called createTypes with the new node type we set.

Fetching Comments From the Comments Server

Now, we can use axios to pull comments and then store them in the data-access layer as the CommentServer type. This action is called “node sourcing” in Gatsby.

To source for new nodes, we have to implement the sourceNodes API in gatsby-node.js. In our case, we would use axios to make network requests, then parse the data from the API to match a GraphQL type that we would define, and then create a node in the GraphQL layer of Gatsby using the createNode action.

We can add the code below to the plugin’s gatsby-node.js API file, creating the functionality we’ve described:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

Here, we have imported the axios package, then set defaults in case our plugin’s options are not provided, and then made a request to the endpoint that serves our comments.

We then defined a function to convert the comments into Gatsby nodes, using the action helpers provided by Gatsby. After this, we iterated over the fetched comments and called convertCommentToNode to convert the comments into Gatsby nodes.

Transforming Data (Comments)

Next, we need to resolve the comments to posts. Gatsby has an API for that called createResolvers. We can make this possible by appending the code below in the gatsby-node.js file of the plugin:

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};

Here, we are extending MarkdownRemark to include a comments field. The newly added comments field resolves to the CommentServer type, matching the slug that the comment was saved with against the slug of the post.
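With that resolver in place, a query against a post should be able to pull its comments alongside the post’s other fields. A sketch (the slug is a placeholder):

{
  markdownRemark(fields: { slug: { eq: "/my-first-post/" } }) {
    comments {
      author
      content
      createdAt
    }
  }
}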

Final Code for Comment Sourcing and Transforming

The final code for the gatsby-node.js file of our comments plugin should look like this:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};

Saving Comments as JSON Files

We need to save the comments for page slugs in their respective JSON files. This makes it possible to fetch the comments on demand over HTTP without having to use a GraphQL query.

To do this, we will implement the createPagesStatefully API in the gatsby-node.js API file of the plugin. We will use the fs module to check whether the path exists before creating a file in it. The code below shows how we can implement this:

const fs = require("fs")
const { resolve: pathResolve } = require("path")
exports.createPagesStatefully = async ({ graphql }) => {
  const comments = await graphql(
    `
      {
        allCommentServer(limit: 1000) {
          edges {
            node {
              author
              slug
              _id
              createdAt
              content
            }
          }
        }
      }
    `
  )

  if (comments.errors) {
    throw comments.errors
  }

  const markdownPosts = await graphql(
    `
      {
        allMarkdownRemark(
          sort: { fields: [frontmatter___date], order: DESC }
          limit: 1000
        ) {
          edges {
            node {
              fields {
                slug
              }
            }
          }
        }
      }
    `
  )

  const posts = markdownPosts.data.allMarkdownRemark.edges
  const _comments = comments.data.allCommentServer.edges

  const commentsPublicPath = pathResolve(process.cwd(), "public/comments")

  var exists = fs.existsSync(commentsPublicPath) //create destination directory if it doesn't exist

  if (!exists) {
    fs.mkdirSync(commentsPublicPath)
  }

  posts.forEach((post, index) => {
    const path = post.node.fields.slug
    const commentsForPost = _comments
      .filter(comment => {
        return comment.node.slug === path
      })
      .map(comment => comment.node)

    const strippedPath = path
      .split("/")
      .filter(s => s)
      .join("/")
    const _commentPath = pathResolve(
      process.cwd(),
      "public/comments",
      `${strippedPath}.json`
    )
    fs.writeFileSync(_commentPath, JSON.stringify(commentsForPost))
  })
}

First, we require the fs module and the resolve function of the path module (renamed to pathResolve). We then use the GraphQL helper to pull the comments that we stored earlier, avoiding extra HTTP requests, and fetch the Markdown posts with a second query. Finally, we check whether the comments directory exists in the public path, creating it before proceeding if it doesn’t.

Finally, we loop through all of the nodes in the Markdown type. We pull out the comments for the current posts and store them in the public/comments directory, with the post’s slug as the name of the file.

The .gitignore in the root of a Gatsby website excludes the public path from being committed, so saving files in this directory is safe.

During each rebuild, Gatsby would call this API in our plugin to fetch the comments and save them locally in JSON files.

Rendering Comments

To render comments in the browser, we have to use the gatsby-browser.js API file.

Define the Root Container for HTML

In order for the plugin to identify an insertion point on a page, we have to set an HTML element as the container for rendering and listing the plugin’s components. We can expect every page that requires comments to have an HTML element with its ID set to commentContainer.
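For example, a blog-post template on the consuming site could expose that insertion point like this sketch (the file name and surrounding markup will vary by site):

// e.g., src/templates/blog-post.js on the consuming site
import React from "react"

export default function BlogPost() {
  return (
    <article>
      {/* post content rendered here */}
      <div id="commentContainer"></div>
    </article>
  )
}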

Implement the Route Update API in the gatsby-browser.js File

The best time to do the file fetching and component insertion is when a page has just been visited. The onRouteUpdate API provides this functionality, passing apiHelpers and pluginOptions as arguments to the callback function.

exports.onRouteUpdate = async (apiHelpers, pluginOptions) => {
  const { location, prevLocation } = apiHelpers
}

Create Helper That Creates HTML Elements

To make our code cleaner, we have to define a function that can create an HTML element, set its className, and add content. At the top of the gatsby-browser.js file, we can add the code below:

// Creates an element, sets its class and innerHTML, then returns it.
function createEl(name, className = "", html = "") {
  const el = document.createElement(name)
  el.className = className
  el.innerHTML = html
  return el
}

Create Header of Comments Section

At this point, we can add a header to the insertion point for the comments components, in the onRouteUpdate browser API. First, we ensure that the element exists on the page, then create an element using the createEl helper, and then append it to the insertion point.

// ...

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
  }
}

Listing Comments

To list comments, we would append a ul element to the component insertion point. We will use the createEl helper to achieve this, and set its className to comment-list:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
  }
}

Next, we need to render the comments that we have saved in the public directory to a ul element, inside of li elements. For this, we define a helper that fetches the comments for a page using the path name.

// Other helpers
const getCommentsForPage = async slug => {
  const path = slug
    .split("/")
    .filter(s => s)
    .join("/")
  const data = await fetch(`/comments/${path}.json`)
  return data.json()
}
// ... implements routeupdate below

We have defined a helper, named getCommentsForPage, that accepts a path, uses fetch to load the matching JSON file from the public/comments directory, parses the response as JSON, and returns the result to the caller. For example, a pathname of /new-beginnings/ becomes new-beginnings, so the helper requests /comments/new-beginnings.json.

Now, in our onRouteUpdate callback, we will load the comments:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
  }
}

Next, let’s define a helper to create the list items:

// .... other helpers

const getCommentListItem = comment => {
  const li = createEl("li")
  li.className = "comment-list-item"

  const nameCont = createEl("div")
  const name = createEl("strong", "comment-author", comment.name)
  const date = createEl(
    "span",
    "comment-date",
    new Date(comment.createdAt).toLocaleDateString()
  )
  nameCont.append(name)
  nameCont.append(date)

  const commentCont = createEl("div", "comment-cont", comment.content)

  li.append(nameCont)
  li.append(commentCont)
  return li
}

// ... onRouteUpdateImplementation

In the snippet above, we created an li element with a className of comment-list-item, and a div for the comment’s author and time. We then created another div for the comment’s text, with a className of comment-cont.

To render the list items of comments, we iterate through the comments fetched with the getCommentsForPage helper, call the getCommentListItem helper to create a list item for each one, and append it to the <ul class="comment-list"></ul> element:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
    if (comments && comments.length) {
      // forEach is enough here; we only need the side effect of appending
      comments.forEach(comment => {
        commentListUl.append(getCommentListItem(comment))
      })
    }
  }
}

Posting a Comment

Post Comment Form Helper

To enable users to post a comment, we have to make a POST request to the /comments endpoint of the API. We need a form to collect the user’s input, so let’s create a helper that returns an HTML form element.

// ... other helpers
const createCommentForm = () => {
  const form = createEl("form")
  form.className = "comment-form"
  const nameInput = createEl("input", "name-input", null)
  nameInput.type = "text"
  nameInput.placeholder = "Your Name"
  form.appendChild(nameInput)
  const commentInput = createEl("textarea", "comment-input", null)
  commentInput.placeholder = "Comment"
  form.appendChild(commentInput)
  const feedback = createEl("span", "feedback")
  form.appendChild(feedback)
  const button = createEl("button", "comment-btn", "Submit")
  button.type = "submit"
  form.appendChild(button)
  return form
}

The helper creates an input element with a className of name-input, a textarea with a className of comment-input, a span with a className of feedback, and a button with a className of comment-btn.

Append the Post Comment Form

We can now append the form into the insertion point, using the createCommentForm helper:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    // insert header
    // insert comment list
    commentContainer.appendChild(createCommentForm())
  }
}

Post Comments to Server

When posting a comment to the server, we have to tell the user what is happening: for example, that an input is required or that the API returned an error. The <span class="feedback" /> element is meant for this. To make it easier to update this element, we create a helper that sets its text and applies a class based on the type of the feedback (error, info, or success).

// ... other helpers
// Sets the class and text of the form feedback
const updateFeedback = (str = "", className) => {
  const feedback = document.querySelector(".feedback")
  feedback.className = `feedback ${className ? className : ""}`.trim()
  feedback.innerHTML = str
  return feedback
}
// onRouteUpdate callback

We are using the querySelector API to get the element. Then we set the class by updating the className attribute of the element. Finally, we use innerHTML to update the contents of the element before returning it.

Submitting a Comment With the Comment Form

We will listen to the submit event of the comment form to determine when a user has submitted it. We don’t want empty data to be submitted, so we validate the inputs, show a feedback message, and disable the submit button while the comment is being saved:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  // Appends header
  // Appends comment list
  // Appends comment form
  document
    .querySelector("body .comment-form")
    .addEventListener("submit", async function (event) {
      event.preventDefault()
      updateFeedback()
      const name = document.querySelector(".name-input").value
      const comment = document.querySelector(".comment-input").value
      if (!name) {
        return updateFeedback("Name is required")
      }
      if (!comment) {
        return updateFeedback("Comment is required")
      }
      updateFeedback("Saving comment", "info")
      const btn = document.querySelector(".comment-btn")
      btn.disabled = true
      const data = {
        name,
        content: comment,
        slug: location.pathname,
        website: pluginOptions.website,
      }

      fetch(
        "https://cors-anywhere.herokuapp.com/gatsbyjs-comment-server.herokuapp.com/comments",
        {
          body: JSON.stringify(data),
          method: "POST",
          headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
          },
        }
      ).then(async function (result) {
        const json = await result.json()
        btn.disabled = false

        if (!result.ok) {
          updateFeedback(json.error.msg, "error")
        } else {
          document.querySelector(".name-input").value = ""
          document.querySelector(".comment-input").value = ""
          updateFeedback("Comment has been saved!", "success")
        }
      }).catch(err => {
        // fetch rejects with a TypeError on network failure; it has no .text() method
        btn.disabled = false
        updateFeedback(err.message, "error")
      })
    })
}

We use document.querySelector to get the form from the page, and we listen for its submit event. Then, we reset the feedback to an empty string, clearing whatever it might have shown before the user attempted to submit the form.

We also check whether the name or comment field is empty, setting an error message accordingly.

Next, we make a POST request to the comments server at the /comments endpoint, listening for the response. We use the feedback to tell the user whether there was an error when they created the comment, and we also use it to tell them whether the comment’s submission was successful.

Adding a Style Sheet

To add styles to the component, we have to create a new file, style.css, at the root of our plugin folder, with the following content:

/* The insertion point wrapper; left empty for site-specific styling. */
#commentContainer {
}

/* Stack the form fields in a single column. */
.comment-form {
  display: grid;
}

At the top of gatsby-browser.js, import it like this:

import "./style.css"

The display: grid rule stacks the form’s fields in a single column, with each one occupying 100% of the width of the form.

Finally, all of the components for our comments plugin are complete. Time to install and test this fantastic plugin we have built.

Test the Plugin

Create a Gatsby Website

Run the following command from a directory one level above the plugin’s directory:

// PARENT
// ├── PLUGIN
// ├── Gatsby Website

gatsby new private-blog https://github.com/gatsbyjs/gatsby-starter-blog

Install the Plugin Locally and Add Options

Next, change to the blog directory, because we need to create a link for the new plugin:

cd /path/to/blog
npm link ../path/to/plugin/folder
Add to gatsby-config.js

In the gatsby-config.js file of the blog folder, we add a new object whose resolve key is set to the name of the plugin’s folder. In this case, the name is gatsby-comment-server-plugin:

module.exports = {
  // ...
  plugins: [
    // ...
    "gatsby-plugin-dom-injector",
    {
      resolve: "gatsby-comment-server-plugin",
      options: {website: "https://url-of-website.com"},
    },
  ],
}

Notice that the plugin accepts a website option to distinguish the source of the comments when fetching and saving comments.

Update the blog-post Component

For the insertion point, we will add <section class="comments" id="commentContainer"> to the post template component at src/templates/blog-post.js of the blog project. This can be inserted at any suitable position; I have inserted mine after the last hr element and before the footer.
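
As a rough sketch (the surrounding markup is illustrative; the starter’s actual template is more involved), the relevant part of the component could look like this:

// src/templates/blog-post.js: a trimmed sketch showing only the insertion point.
import React from "react"

const BlogPostTemplate = ({ children }) => (
  <article>
    {children /* the post body rendered by the starter */}
    <hr />
    {/* The plugin finds this element by its ID on every route update. */}
    <section className="comments" id="commentContainer"></section>
    <footer>{/* site footer */}</footer>
  </article>
)

export default BlogPostTemplate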

Start the Development Server

Finally, we can start the development server with gatsby develop, which will make our website available locally at http://localhost:8000. Navigating to any post page, like http://localhost:8000/new-beginnings, will reveal the comments section at the insertion point that we specified above.

Create a Comment

We can create a comment using the comment form, and it will provide helpful feedback as we interact with it.

List Comments

To list newly posted comments, we have to restart the development server, because the comments are compiled into static JSON files at build time.

Conclusion

In this tutorial, we have introduced Gatsby plugins and demonstrated how to create one.

Our plugin uses different APIs of Gatsby and its own API files to provide comments for our website, illustrating how we can use plugins to add significant functionality to a Gatsby website.

Although we are pulling from a live server, the plugin is saving the comments in JSON files. We could make the plugin load comments on demand from the API server, but that would defeat the notion that our blog is a static website that does not require dynamic content.

The plugin built in this post exists as an npm module, while the full code is on GitHub.

Resources:

  • Gatsby’s blog starter, GitHub
    The starter repository used to create a Gatsby website that consumes the plugin.
  • Gatsby Starter Blog, Netlify
    The blog website for this tutorial, deployed on Netlify for testing.

Posters! (for CSS Flexbox and CSS Grid)

Any time I chat with a fellow web person and CSS-Tricks comes up in conversation, there is a good chance they’ll say: oh yeah, that guide on CSS flexbox, I use that all the time!

Indeed, that page and its cousin, the CSS grid guide, are among our most trafficked pages. I try to take extra care with them, making sure the information on them is current and useful and that the pages load speedily and properly. A while back, in a round of updates I was doing on the guides, I reached out to Lynn Fisher, who always does incredible work on everything, to see if she’d be up for redoing the illustrations on the guides. Miraculously, she agreed, and we have the much more charismatic illustrations that live on the guides today.

In a second miracle, I asked Lynn again whether she’d be up for making physical paper poster designs of the guides, and she agreed again! And so they live!

Here they are:

You better believe I have it right next to me in my office:

They are $25 each which includes shipping anywhere in the world.

The post Posters! (for CSS Flexbox and CSS Grid) appeared first on CSS-Tricks.

How to Put WordPress Into Maintenance Mode

Among the biggest advantages of a content management system (CMS) such as WordPress is the ability to easily make changes on a live site. While that’s great, there are times when you don’t want visitors to access your website during the process. In those cases, using maintenance mode makes sense.

Today, we’ll introduce you to the concept of WordPress maintenance mode as well as example scenarios for its use. Then, we’ll show you how easy it is to implement on your own website. Let’s get started!

What Is Maintenance Mode?

Changing a page or post within WordPress is a simple process. You can add, edit or delete content as needed. But there are cases when you need to do more – and it could have a negative impact on your site’s visitors.

Maintenance mode is built for these instances. Through the use of a plugin, it limits the general public’s access to the front end of your website. At the same time, it allows logged-in site administrators to view the website as usual.

This makes it extremely handy for implementing large or complex changes. Users who come to your website will see a customized screen, thus avoiding the possibility of experiencing a broken page or feature.

For developers, it provides peace of mind. You can complete your work without the fear of causing issues for users. When you’ve finished and tested the results, it’s easy to return to normal.

Potential Usage Scenarios

Routine changes, such as adding a new blog post or editing the contents of a page, won’t necessitate putting your website into maintenance mode. However, there are a number of scenarios where it may be appropriate:

eCommerce Websites

If you’re making significant changes to your WooCommerce shop, you probably won’t want customers browsing, adding items to their cart or completing orders. A customer caught in the middle of these updates could be charged incorrectly or encounter usability issues.

Online Courses

WordPress makes for a great learning management system, as you can use it to build interactive courses. But, just as with eCommerce, you won’t want to implement changes while students are in the middle of a course. This could hamper their progress – not to mention the potential for causing confusion.

Bug Fixes

This is also ideal for times when you’re performing actual maintenance. If, for example, a feature is broken and is rendering all or part of your site unusable, temporarily shutting it off from public view may be the best way to go.

Of course, there are a number of other scenarios where this tool could come in handy. Hopefully these examples provide some context as to when to implement it on your WordPress website.

Putting Your Website in Maintenance Mode

Now that we’ve covered what maintenance mode is and why you’d want to use it, let’s put it into action. Thankfully, it’s a fairly straightforward process.

1. Install the WP Maintenance Mode Plugin

There are a number of different WordPress plugins that enable the use of maintenance mode. They all have their own strengths, and it’s worth taking the time to find a good fit for your needs. But for our purposes, we’re going to use WP Maintenance Mode, a popular choice that offers all the basics.

To install it from the WordPress Dashboard, go to Plugins > Add New and search for “WP Maintenance Mode”. From there, install and activate the plugin.

The WordPress Add Plugins screen.

2. Customize Your Maintenance Mode Screen

Next, it’s time to make sure that our maintenance mode screen looks good and provides the right information to users. To do so, let’s head over to Settings > WP Maintenance Mode.

Then, click on the Design tab to start customizing. WP Maintenance Mode lets us choose a heading, text, colors and a background for the screen. You can even upload a background image, if you like.

The WP Maintenance Mode Design screen.

Once you’ve set things up the way you want, save the settings and then click on the Modules tab.

The WP Maintenance Mode Modules screen.

Among the available modules are a countdown timer, social media links and general contact info. Google Analytics may also be added here. There’s even a built-in method for users to input their email address and find out when the site is back online.

To keep things simple, we’ll just enable the countdown timer, a couple of social media links, and contact info.

3. Testing It Out

Since we’re experimenting with a local install of WordPress, there’s no harm in checking out what our custom screen will look like.

We can turn on maintenance mode by visiting Settings > WP Maintenance Mode and making sure we’re within the General tab.

The WP Maintenance Mode General screen.

Here, we can choose to activate maintenance mode. In addition, there are options for defining which user roles can access the front and back ends (administrators always have access), how to handle search engines and more.

Remember, since administrators still have regular access to the website, you’ll need to log out in order to see things as a typical user would. Once we’ve done that, we can take a look at the results.

A website in maintenance mode.

4. Return to Normal

Putting your website back to normal is quick and easy. Visit Settings > WP Maintenance Mode and make sure you’re within the General tab. Click the “Deactivated” radio button and save the settings. That’s all there is to it!

A Helpful Tool for WordPress Maintenance

The great thing about utilizing a WordPress maintenance mode plugin is that it allows you to work on your website while keeping users out. This gives you room to make necessary changes while preventing any publicly viewable mishaps.

In addition, it’s incredibly easy to use. Set up a few options, activate maintenance mode and you’re good to go. Turning it off is just as quick.

Now that you have this handy addition to your toolbox, maintaining your WordPress website will be a little less stressful.

How to Add a Free Shipping Bar in WooCommerce

Do you offer free shipping on your WooCommerce store?

Offering free shipping is a proven way to reduce cart abandonment and increase overall sales conversion. The challenge is that most store owners fail to clearly highlight the free shipping offer.

In this article, we’ll teach you how to add a free shipping bar in WooCommerce to boost sales.

Adding a free shipping bar in WooCommerce

Adding a Free Shipping Bar in WooCommerce

Shipping costs are one of the top reasons behind abandoned carts. Many online stores deal with this by offering free shipping.

Letting customers know about your free shipping offer is a great way to boost sales from your online store.

Here’s an example of a free shipping bar. It’s a prominent website element that stays at the top of the screen as the user scrolls down:

Free shipping bar example

As you can see, the example store above is encouraging users to spend a certain amount to get free shipping. This is a great way to boost average order value and overall sales volume.

You can enable incentivized free shipping by using the Advanced Coupons plugin for WooCommerce.

Regardless of how you enable free shipping, properly highlighting it on your site is key, and that’s where a free shipping bar comes in.

To create the shipping bar, we’ll be using OptinMonster. It’s a powerful tool for creating all types of WordPress popups.

Creating an OptinMonster Account and Connecting WordPress

First, you’ll need to visit the OptinMonster website and sign up for an account.

Note: OptinMonster was co-created by Syed Balkhi, WPBeginner’s founder. It’s an extremely popular optin tool that we use here on WPBeginner and we highly recommend it.

After you’ve signed up for OptinMonster, the next step is to install and activate the free OptinMonster plugin for WordPress. If you’re not sure how, check out our step-by-step guide on how to install a WordPress plugin.

This plugin connects the OptinMonster app to your WordPress site. After activating it, click on the OptinMonster menu in your WordPress dashboard. Then, click the ‘Connect Your Account’ button and connect WordPress to your OptinMonster account.

Connect OptinMonster to your WordPress site

Creating The Free Shipping Bar for WooCommerce

Now, you can create your free shipping bar. First, go to the OptinMonster page in your WordPress dashboard and then click the ‘Create New Campaign’ button on the top right.

Creating a new campaign in OptinMonster

This will take you to the OptinMonster campaign builder where you need to select ‘Floating Bar’ as your campaign type.

Select the Floating Bar campaign

Next, scroll down and choose your campaign template. OptinMonster has a great range of professional templates to choose from. We’re going to use the Alert template for our WooCommerce free shipping bar.

Hover your mouse cursor over the template and click the ‘Use Template’ button:

Select the Alert template to start building your free shipping bar

OptinMonster will now prompt you to name your campaign. You can use any name that you like here.

Name your OptinMonster campaign

You can also choose which website(s) you want to run the campaign on. OptinMonster should have added your website here for you.

Once you’re ready, click the Start Building button.

You’ll then see the OptinMonster builder interface. It’ll look like this:

The default alert bar template in OptinMonster

By default, the floating bar will stick to the bottom of the screen. If you want it at the top of the screen instead, that’s easy to change.

Simply click on Floating Settings in the left-hand menu. Then, turn on the ‘Load Floating Bar at Top of Page?’ option.

Load the floating bar at the top of the page

To change the text in the bar, simply click on it. The text editor will open up on the left-hand side of your screen.

Editing the text of your floating bar in OptinMonster

You can customize the text as needed. You can also change the font style, color, font size, and more. The preview of your campaign will automatically update to show how it’ll look live.

Here, we’ve changed the text and the font. We’ve also slightly increased the font size:

Change the text of your floating bar

You can change the button on your free shipping bar, too. Simply click on the button, and the settings will open up in the left-hand panel. You can change the button’s text there.

Editing the button on your free shipping bar

To direct customers to a page on your website about free shipping, click on the Action tab. Then, enter the correct Redirect URL:

Edit where the button redirects to on your website

If you want to use a different color for your free shipping bar, that’s easy too. First, click on the Home button to return to the main design settings:

Click the Home button to return to the main display settings for your campaign

Then, you can go to Optin Settings » Optin View Styles to change the color of your optin.

Go to Optin Settings - Optin View Styles to change the colors of your optin

After you click in the Background Color box, you can select a new color. You can either enter the hex code or use the color picker. We’ve chosen green for ours:

Changing the background color for your campaign

Once you’re happy with your free shipping bar, click the Save button on the top right.

Save your campaign using the green Save button at the top of the screen

Next, you’ll need to set the display rules, which control who sees the free shipping bar on your site. Simply click on the Display Rules tab at the top of your screen.

Setting your display rules in OptinMonster

By default, OptinMonster displays your floating bar after the user has been on any page of your site for 5 seconds.

You may want your free shipping bar to load instantly. To do this, simply remove this rule by clicking the minus icon on the right-hand side:

Have your free shipping bar appear instantly

Now, you should be left with the rule ‘Current URL path is any page.’

The final step is to put your campaign live on your site. Click the Publish tab at the top of the screen. While you’re building your campaign, it’s paused by default. You can switch it on here.

Switch on your campaign under the Publish tab in OptinMonster

To put your free shipping bar on your WordPress site, go to OptinMonster » Campaigns in your WordPress dashboard. Click the Refresh Campaigns button to see your free shipping bar campaign listed here.

Refresh your campaigns list in the WordPress dashboard

Now, you can visit your site and see your free shipping bar live:

Free shipping bar example

We hope this article helped you learn how to add a free shipping bar in WooCommerce. You might also want to check out our list of the best WooCommerce plugins for your online store, and our comparison of the best business phone services with smart call routing features.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Add a Free Shipping Bar in WooCommerce appeared first on WPBeginner.