Converting Speech to PDF with NextJS and ExpressJS

With speech interfaces becoming more of a thing, it’s worth exploring some of the things we can do with speech interactions. Like, what if we could say something and have that transcribed and pumped out as a downloadable PDF?

Well, spoiler alert: we absolutely can do that! There are libraries and frameworks we can cobble together to make it happen, and that’s what we’re going to do together in this article.

These are the tools we're using

First off, these are the two big players: Next.js and Express.js.

Next.js tacks additional functionality onto React, including key features for building static sites. It's a go-to for many developers because of what it offers right out of the box, like dynamic routing, image optimization, built-in domain and subdomain routing, fast refreshes, file system routing, and API routes… among many, many other things.

In our case, we definitely need Next.js for its API routes on our client server. We want a route that takes a text file, converts it to PDF, writes it to our filesystem, then sends a response to the client.

Express.js allows us to get a little Node.js app going with routing, HTTP helpers, and templating. It’s a server for our own API, which is what we’ll need as we pass and parse data between things.

We have some other dependencies we’ll be putting to use:

  1. react-speech-recognition: A library for converting speech to text, making it available to React components.
  2. regenerator-runtime: A library for troubleshooting the “regeneratorRuntime is not defined” error that shows up in Next.js when using react-speech-recognition
  3. html-pdf-node: A library for converting an HTML page or public URL into a PDF
  4. axios: A library for making HTTP requests in both the browser and Node.js
  5. cors: A library that allows cross-origin resource sharing

Setting up

The first thing we want to do is create two project folders, one for the client and one for the server. Name them whatever you’d like. I’m naming mine audio-to-pdf-client and audio-to-pdf-server, respectively.

The fastest way to get started with Next.js on the client side is to bootstrap it with create-next-app. So, open your terminal and run the following command from your client project folder:

npx create-next-app client

Now we need our Express server. We can get it by cd-ing into the server project folder and running the npm init command. A package.json file will be created in the server project folder once it’s done.

We still need to actually install Express, so let’s do that now with npm install express. Now we can create a new index.js file in the server project folder and drop this code in there:

const express = require("express")
const app = express()

app.listen(4000, () => console.log("Server is running on port 4000"))

Ready to run the server?

node index.js

We're going to need another folder and another file to move forward:

  • Create a components folder in the client project folder.
  • Create a SpeechToText.jsx file in the components subfolder.

Before we go any further, we have a little cleanup to do. Specifically, we need to replace the default code in the pages/index.js file with this:

import Head from "next/head";
import SpeechToText from "../components/SpeechToText";

export default function Home() {
  return (
    <div className="home">
      <Head>
        <title>Audio To PDF</title>
        <meta
          name="description"
          content="An app that converts audio to pdf in the browser"
        />
        <link rel="icon" href="/favicon.ico" />
      </Head>

      <h1>Convert your speech to pdf</h1>

      <main>
        <SpeechToText />
      </main>
    </div>
  );
}

The imported SpeechToText component will eventually be exported from components/SpeechToText.jsx.

Let’s install the other dependencies

Alright, we have the initial setup for our app out of the way. Now we can install the libraries that handle the data that’s passed around.

We can install our client dependencies with:

npm install react-speech-recognition regenerator-runtime axios

Our Express server dependencies are up next, so let’s cd into the server project folder and install those:

npm install html-pdf-node cors

Probably a good time to pause and make sure the files in our project folders are intact. Here's what you should have in the client project folder at this point:

/audio-to-pdf-client
├── /components
|   └── SpeechToText.jsx
├── /pages
|   ├── _app.js
|   └── index.js
└── /styles
    ├── globals.css
    └── Home.module.css

And here’s what you should have in the server project folder:

/audio-to-pdf-server
└── index.js

Building the UI

Well, our speech-to-PDF wouldn’t be all that great if there’s no way to interact with it, so let’s make a React component for it that we can call <SpeechToText>.

You can totally use your own markup. Here’s what I’ve got to give you an idea of the pieces we’re putting together:

import React from "react";

const SpeechToText = () => {
  return (
    <>
      <section>
        <div className="button-container">
          <button type="button" style={{ "--bgColor": "blue" }}>
            Start
          </button>
          <button type="button" style={{ "--bgColor": "orange" }}>
            Stop
          </button>
        </div>
        <div
          className="words"
          contentEditable
          suppressContentEditableWarning={true}
        ></div>
        <div className="button-container">
          <button type="button" style={{ "--bgColor": "red" }}>
            Reset
          </button>
          <button type="button" style={{ "--bgColor": "green" }}>
            Convert to pdf
          </button>
        </div>
      </section>
    </>
  );
};

export default SpeechToText;

This component returns a React fragment containing an HTML <section> element that holds three divs:

  • .button-container contains two buttons that will be used to start and stop speech recognition.
  • .words has contentEditable and suppressContentEditableWarning attributes to make this element editable and suppress any warnings from React.
  • Another .button-container holds two more buttons that will be used to reset and convert speech to PDF, respectively.

Styling is another thing altogether. I won't go into it here, but you're welcome to use the styles I wrote as a starting point for your own styles/globals.css file.

html,
body {
  padding: 0;
  margin: 0;
  font-family: -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Oxygen,
    Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif;
}

a {
  color: inherit;
  text-decoration: none;
}

* {
  box-sizing: border-box;
}

.home {
  background-color: #333;
  min-height: 100%;
  padding: 0 1rem;
  padding-bottom: 3rem;
}

h1 {
  width: 100%;
  max-width: 400px;
  margin: auto;
  padding: 2rem 0;
  text-align: center;
  text-transform: capitalize;
  color: white;
  font-size: 1rem;
}

.button-container {
  text-align: center;
  display: flex;
  justify-content: center;
  gap: 3rem;
}

button {
  color: white;
  background-color: var(--bgColor);
  font-size: 1.2rem;
  padding: 0.5rem 1.5rem;
  border: none;
  border-radius: 20px;
  cursor: pointer;
}

button:hover {
  opacity: 0.9;
}

button:active {
  transform: scale(0.99);
}

.words {
  max-width: 700px;
  margin: 50px auto;
  height: 50vh;
  border-radius: 5px;
  padding: 1rem 2rem 1rem 5rem;
  background-image: -webkit-gradient(
    linear,
    0 0,
    0 100%,
    from(#d9eaf3),
    color-stop(4%, #fff)
  ) 0 4px;
  background-size: 100% 3rem;
  background-attachment: scroll;
  position: relative;
  line-height: 3rem;
  overflow-y: auto;
}

.success,
.error {
  background-color: #fff;
  margin: 1rem auto;
  padding: 0.5rem 1rem;
  border-radius: 5px;
  width: max-content;
  text-align: center;
  display: block;
}

.success {
  color: green;
}

.error {
  color: red;
}

The CSS variables in there are being used to control the background color of the buttons.

Let’s see the latest changes! Run npm run dev in the terminal and check them out.

You should see the two rows of buttons and the lined, editable text area in the browser when you visit http://localhost:3000.

Our first speech to text conversion!

The first action to take is to import the necessary dependencies into our <SpeechToText> component:

import React, { useRef, useState } from "react";
import SpeechRecognition, {
  useSpeechRecognition,
} from "react-speech-recognition";
import axios from "axios";

Then we check whether speech recognition is supported by the browser and render a notice if it isn't:

const speechRecognitionSupported =
  SpeechRecognition.browserSupportsSpeechRecognition();

if (!speechRecognitionSupported) {
  return <div>Your browser does not support speech recognition.</div>;
}

Next up, let’s extract transcript and resetTranscript from the useSpeechRecognition() hook:

const { transcript, resetTranscript } = useSpeechRecognition();

This is what we need for the state that handles listening:

const [listening, setListening] = useState(false);

We also need a ref for the div with the contentEditable attribute, then we need to add the ref attribute to it and pass transcript as children:

const textBodyRef = useRef(null);

…and:

<div
  className="words"
  contentEditable
  ref={textBodyRef}
  suppressContentEditableWarning={true}
  >
  {transcript}
</div>

The last thing we need here is a function that triggers speech recognition, tied to the onClick event listener of our button. The button sets listening to true and starts the recognizer in continuous mode. We'll disable the button while it's in that state to prevent us from firing off additional events.

const startListening = () => {
  setListening(true);
  SpeechRecognition.startListening({
    continuous: true,
  });
};

…and:

<button
  type="button"
  onClick={startListening}
  style={{ "--bgColor": "blue" }}
  disabled={listening}
>
  Start
</button>

Clicking on the button should now start up the transcription.

More functions

OK, so we have a component that can start listening. But now we need it to do a few other things as well, like stopListening, resetText and handleConversion. Let’s make those functions.

const stopListening = () => {
  setListening(false);
  SpeechRecognition.stopListening();
};

const resetText = () => {
  stopListening();
  resetTranscript();
  textBodyRef.current.innerText = "";
};

const handleConversion = async () => {}

Each of the functions will be added to an onClick event listener on the appropriate buttons:

<button
  type="button"
  onClick={stopListening}
  style={{ "--bgColor": "orange" }}
  disabled={listening === false}
>
  Stop
</button>

<div className="button-container">
  <button
    type="button"
    onClick={resetText}
    style={{ "--bgColor": "red" }}
  >
    Reset
  </button>
  <button
    type="button"
    style={{ "--bgColor": "green" }}
    onClick={handleConversion}
  >
    Convert to pdf
  </button>
</div>

The handleConversion function is asynchronous because we will eventually be making an API request. The "Stop" button has a disabled attribute that kicks in when listening is false.

If we restart the server and refresh the browser, we can now start, stop, and reset our speech transcription in the browser.

Now what we need is for the app to transcribe that recognized speech by converting it to a PDF file. For that, we need the server-side path from Express.js.

Setting up the API route

The purpose of this route is to take a text file, convert it to a PDF, write that PDF to our filesystem, then send a response to the client.

To set things up, open the server/index.js file and import the html-pdf-node, fs, and cors dependencies. The fs module will be used to write to and read from our filesystem.

const HTMLToPDF = require("html-pdf-node");
const fs = require("fs");
const cors = require("cors");

Next, we will setup our route:

app.use(cors())
app.use(express.json())

app.post("/", (req, res) => {
  // etc.
})

We then define the options and the file object that html-pdf-node needs inside the route:

let options = { format: "A4" };
let file = {
  content: `<html><body><pre style='font-size: 1.2rem'>${req.body.text}</pre></body></html>`,
};

The options object accepts a value to set the paper size and style. Paper sizes follow a much different system than the sizing units we typically use on the web. For example, A4 is the standard full-page document size used in most parts of the world.

The file object accepts either the URL of a public website or HTML markup. In order to generate our HTML page, we will use the html, body, pre HTML tags and the text from the req.body.

You can apply any styling of your choice.
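
As an aside, the file object doesn't have to wrap inline markup. Based on html-pdf-node's documented usage, it can also point at a public URL, which is handy if the page you want to convert already exists somewhere. Treat the exact property name as something to verify against the version of the library you have installed:

// A sketch: converting a public web page instead of an HTML string
let file = { url: "https://example.com" };

// Either shape of `file` works with the same generatePdf() call used in the next step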

Next, we will add a try...catch block to handle any errors that might pop up along the way:

try {

} catch(error){
  console.log(error);
  res.status(500).send(error);
}

Next, we will use the generatePdf method from the html-pdf-node library to generate a pdfBuffer (the raw PDF file) from our file and create a unique pdfName. Note that the route writes into a data folder, so make sure one exists in the server project folder; fs.writeFile won't create missing directories.

HTMLToPDF.generatePdf(file, options).then((pdfBuffer) => {
  // console.log("PDF Buffer:-", pdfBuffer);
  const pdfName = "./data/speech" + Date.now() + ".pdf";

  // Next code here
});

From there, we use the filesystem module to write, read and (yes, finally!) send a response to the client app:

fs.writeFile(pdfName, pdfBuffer, function (writeError) {
  if (writeError) {
    return res
      .status(500)
      .json({ message: "Unable to write file. Try again." });
  }

  fs.readFile(pdfName, function (readError, readData) {
    if (!readError && readData) {
      // console.log({ readData });
      res.setHeader("Content-Type", "application/pdf");
      res.setHeader("Content-Disposition", "attachment");
      res.send(readData);
      return;
    }

    return res
      .status(500)
      .json({ message: "Unable to read file. Try again." });
  });
});

Let’s break that down a bit:

  • The writeFile method of the filesystem module accepts a file name, data, and a callback function that returns an error if there's an issue writing the file. If you're working with a CDN that provides error endpoints, you could use those instead.
  • The readFile method accepts a file name and a callback function that is capable of returning a read error as well as the read data. Once there is no read error and the read data is present, we construct and send a response to the client. Again, this can be replaced with your CDN's endpoints if you have them.
  • The res.setHeader("Content-Type", "application/pdf"); tells the browser that we are sending a PDF file.
  • The res.setHeader("Content-Disposition", "attachment"); tells the browser to make the received data downloadable.
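
Putting all of those pieces together, the complete route might look something like the sketch below. It's just the snippets from this section assembled into one server/index.js file; adjust the port and the data folder path to match your setup:

const express = require("express");
const HTMLToPDF = require("html-pdf-node");
const fs = require("fs");
const cors = require("cors");

const app = express();

app.use(cors());
app.use(express.json());

app.post("/", (req, res) => {
  let options = { format: "A4" };
  let file = {
    content: `<html><body><pre style='font-size: 1.2rem'>${req.body.text}</pre></body></html>`,
  };

  try {
    HTMLToPDF.generatePdf(file, options).then((pdfBuffer) => {
      // A unique name so each conversion gets its own file
      const pdfName = "./data/speech" + Date.now() + ".pdf";

      fs.writeFile(pdfName, pdfBuffer, function (writeError) {
        if (writeError) {
          return res
            .status(500)
            .json({ message: "Unable to write file. Try again." });
        }

        fs.readFile(pdfName, function (readError, readData) {
          if (!readError && readData) {
            res.setHeader("Content-Type", "application/pdf");
            res.setHeader("Content-Disposition", "attachment");
            res.send(readData);
            return;
          }

          return res
            .status(500)
            .json({ message: "Unable to read file. Try again." });
        });
      });
    });
  } catch (error) {
    console.log(error);
    res.status(500).send(error);
  }
});

app.listen(4000, () => console.log("Server is running on port 4000"));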

Now that the API route is ready, we can use it in our app at http://localhost:4000. We can then proceed to the client part of our application to complete the handleConversion function.

Handling the conversion

Before we can start working on the handleConversion function, we need to create a state that handles our API requests for loading, error, success, and other messages. We're going to use React's useState hook to set that up:

const [response, setResponse] = useState({
  loading: false,
  message: "",
  error: false,
  success: false,
});

In the handleConversion function, we will first check that the code is running in the browser, and make sure the contentEditable div is not empty:

if (typeof window !== "undefined") {
  const userText = textBodyRef.current.innerText;
  // console.log(textBodyRef.current.innerText);

  if (!userText) {
    alert("Please speak or write some text.");
    return;
  }
}

We proceed by wrapping our eventual API request in a try...catch block, handling any error that may arise, and updating the response state:

try {

} catch(error){
  setResponse({
    ...response,
    loading: false,
    error: true,
    message:
      "An unexpected error occurred. Text not converted. Please try again",
    success: false,
  });
}

Next, we set some values for the response state, define a config for axios, and make a POST request to the server:

setResponse({
  ...response,
  loading: true,
  message: "",
  error: false,
  success: false,
});
const config = {
  headers: {
    "Content-Type": "application/json",
  },
  responseType: "blob",
};

const res = await axios.post(
  "http://localhost:4000",
  {
    text: textBodyRef.current.innerText,
  },
  config
);

Once we have gotten a successful response, we set the response state with the appropriate values and instruct the browser to download the received PDF:

setResponse({
  ...response,
  loading: false,
  error: false,
  message:
    "Conversion was successful. Your download will start soon...",
  success: true,
});

// convert the received data to a file
const url = window.URL.createObjectURL(new Blob([res.data]));
// create an anchor element
const link = document.createElement("a");
// set the href of the created anchor element
link.href = url;
// add the download attribute, give the downloaded file a name
link.setAttribute("download", "yourfile.pdf");
// add the created anchor tag to the DOM
document.body.appendChild(link);
// force a click on the link to start a simulated download
link.click();
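
One optional addition that isn't in the snippet above: once the click has fired, we can tidy up by removing the temporary link and releasing the object URL so the Blob can be garbage collected:

// optional cleanup after the simulated click
link.remove();
window.URL.revokeObjectURL(url);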

And we can use the following below the contentEditable div for displaying messages:

<div>
  {response.success && <i className="success">{response.message}</i>}
  {response.error && <i className="error">{response.message}</i>}
</div>
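
For reference, here's how the finished handleConversion function might read with all of the pieces from this section stitched together. It's the same logic as the snippets above, just collected in one place:

const handleConversion = async () => {
  if (typeof window !== "undefined") {
    const userText = textBodyRef.current.innerText;

    if (!userText) {
      alert("Please speak or write some text.");
      return;
    }

    try {
      setResponse({
        ...response,
        loading: true,
        message: "",
        error: false,
        success: false,
      });

      const config = {
        headers: { "Content-Type": "application/json" },
        responseType: "blob",
      };

      const res = await axios.post(
        "http://localhost:4000",
        { text: textBodyRef.current.innerText },
        config
      );

      setResponse({
        ...response,
        loading: false,
        error: false,
        message: "Conversion was successful. Your download will start soon...",
        success: true,
      });

      // convert the received data to a file and trigger a download
      const url = window.URL.createObjectURL(new Blob([res.data]));
      const link = document.createElement("a");
      link.href = url;
      link.setAttribute("download", "yourfile.pdf");
      document.body.appendChild(link);
      link.click();
    } catch (error) {
      setResponse({
        ...response,
        loading: false,
        error: true,
        message:
          "An unexpected error occurred. Text not converted. Please try again",
        success: false,
      });
    }
  }
};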

Final code

I’ve packaged everything up on GitHub so you can check out the full source code for both the server and the client.


Converting Speech to PDF with NextJS and ExpressJS was originally published on CSS-Tricks, which is part of the DigitalOcean family.

Scheduled Cron Jobs With Render

Programmers often need to run some recurring process automatically at fixed intervals or at specific times. A common solution for this problem is to use a cron job. When you have full access to your own server, configuring cron jobs is quite straightforward. However, how hard is it to configure cron jobs when you use an application hosting service? Some services, thankfully, provide a way for you to do this.

In this article, we’ll walk through a sample mini-project that shows how to easily set up and deploy a cron job on Render.
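
As a quick refresher before looking at Render's approach, here's roughly what a recurring task looks like inside a plain Node process using the node-cron package. This isn't part of the Render setup itself, just a minimal sketch of the cron idea:

const cron = require("node-cron");

// Standard cron syntax: minute hour day-of-month month day-of-week
// This schedule runs every day at 2:00 AM
cron.schedule("0 2 * * *", () => {
  console.log("Running the nightly cleanup job...");
  // e.g. prune stale records, send report emails, etc.
});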

A Guide to the Next JS Framework

Next.js is a framework extensively used by TikTok, Twitch mobile, Nike, IGN, PlayStation, Marvel, and many others. It offers all the functionality we need to deploy an application to production, with a hybrid system of static pages and server-side rendered (SSR) pages. It supports TypeScript and can be deployed without having to configure anything.

SSR Benefits (Next.js)

  • Performance
  • Isomorphic: works on both the server and the client (browser)
  • Build: at build time, Next.js retrieves the necessary data and emits HTML from React components
  • Static export: compiles static files that can be uploaded to any server
  • Zero config (no need to configure anything to deploy Next.js, though it has a very extensible config)
  • API routes
  • Deploy with Vercel
  • next/head: modify the head of the page to improve SEO
  • TypeScript support
  • Environment variables can be used in browser code, not just server code
  • Fast Refresh: a new hot-reloading experience for React components
  • Code splitting: loads only the chunk corresponding to the page path

Comparison With Gatsby

Gatsby is primarily used for building websites that generate static HTML content and web pages that have a fixed or predictable number of pages and stable content. An example might be an e-commerce site that has only 50 products available for sale.

How To Build a Webex Chatbot in Node.Js

Workers in healthcare, education, finance, retail—and pretty much everywhere else—are clocking in by logging on from home. This has opened up opportunities for developers to build tools to support hybrid work for every industry, not just their own. One of those opportunities is in the area of ChatOps, the use of chat applications to trigger workflows for operations.

As software developers, we’ve been doing ChatOps for years—sending commands from inside a chat space to deploy applications, restart servers, and open pull requests. However, IT professionals aren’t the only ones collaborating through virtual meetings and team platforms these days. In 2020, everybody else started doing it, too.

Creating a Vue.JS Websocket Server

Using a WebSocket server is a great way to speed up your applications. APIs inherently come with their own HTTP overhead, which means every time you call an API, you have to wait a little bit for the HTTP response.

This is mostly fine, but if you have an application with a lot of time-sensitive and frequent server requests, it can become a problem. A good example of this is a chat app, where you need to see what the other person is saying immediately. APIs can still work in this scenario, but they're not the best solution for the job.
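
To see why, here's a rough sketch of the browser side of a bare-bones connection using the native WebSocket API (independent of Vue and of any particular server). Messages are pushed to the client as events, with no per-request HTTP round trip:

// Open a connection to a (hypothetical) local WebSocket server
const socket = new WebSocket("ws://localhost:8080");

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "chat", text: "Hello!" }));
});

// New messages arrive as soon as the server sends them
socket.addEventListener("message", (event) => {
  console.log("Received:", event.data);
});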

Building Express Applications With Doppler and FaunaDB

Building full-stack Express applications has always involved connecting to a database or to external resources that the application needs in order to function. When connecting to these external resources, we need a means of verifying our identity before making a successful connection.

We use special secret keys or API keys that are specific to our application to verify our identity. These keys should always be kept secret from the public eye, as the web is an open village, or we stand to lose our application to cybersecurity attacks. 
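
The usual baseline, before reaching for a dedicated secrets manager like Doppler, is to keep keys in environment variables rather than in source code. Here's a minimal sketch using the dotenv package; the FAUNA_SECRET name is just a placeholder:

// .env (never committed to version control):
// FAUNA_SECRET=your-secret-key-here

require("dotenv").config();

const faunaSecret = process.env.FAUNA_SECRET;

if (!faunaSecret) {
  throw new Error("Missing FAUNA_SECRET environment variable");
}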

Alexa and Kubernetes: Dockerizing the Alexa Skill (III)

The second task we have to do to run our Alexa Skill in a Kubernetes environment is to dockerize our Alexa Skill backend, which is now a Node.js Express app.

As Kubernetes is a kind of Container orchestrator, this is a mandatory step in our process to run the Alexa Skill in a Kubernetes environment.

How to Create an Automated Sitemap With Node.js

Sitemaps are a very important aspect of SEO. Google and other search engines can use a sitemap to figure out where all your pages are and how they link together. In this tutorial, we will be creating an automated sitemap with Node.js and Express.

I will be using MongoDB as the database tool, but if you use MySQL or something else, you can easily swap these components out.

Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
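
To make that concrete, here's what a couple of those calls might look like from the browser with fetch(). The api.example.com endpoints are the same made-up URLs from the paragraph above, not a real service:

// Get the full list of books
fetch("https://api.example.com/rest/books")
  .then((response) => response.json())
  .then((books) => console.log(books));

// Get a single book by its id
fetch("https://api.example.com/rest/books/123")
  .then((response) => response.json())
  .then((book) => console.log(book.title));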

With that REST baseline in mind, let's look at the two concepts we'll be covering here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don't need to go to a specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a query that looks a lot like a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123,
      "title" : "The Greatest CSS Tricks Vol. I",
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly "serverless" application. Chris Coyier nicely sums it up in his "Serverless" post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: NodeJS and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify's platform. Essentially, it lets us run a simulation of our project in a fully featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let's start by creating a new folder, navigating to it in the terminal, then running npm init in it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI as a global dependency so we can use the netlify dev command:

npm i -g netlify-cli

File structure

There are a few files required for our API to work correctly. The first is netlify.toml, which should be created at the project's root directory. This is a configuration file that tells Netlify how to handle our project. Here's what we need in the file to define our build command, the publish directory, and where our serverless functions are located:

[build]
  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we'll initialize Express and wrap it in the serverless-http function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API— say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.

Here's the schema again with the message field filled in:
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we're only returning "Hello World" but in a more complicated application, we'd probably use this function to go to our database and retrieve some data.
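
To give a feel for that, here's a sketch of the same schema with a resolver that fetches its data asynchronously, reusing the GraphQL types we imported above. The getGreetingFromDatabase helper is made up purely for illustration; the point is that resolve can return a Promise and GraphQL will wait for it:

// Hypothetical helper standing in for a real database call
const getGreetingFromDatabase = async () => "Hello from the database";

const schemaWithAsyncField = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        // resolve may return a Promise; GraphQL resolves it before responding
        resolve: () => getGreetingFromDatabase(),
      },
    }),
  }),
});

For our hello world example, though, the hard-coded string is all we need.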

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we're saying is that we'd like to see the message field we defined in api.js. Click the "Run" button, and on the right, you'll see the following:

{
  "data": {
    "message": "Hello World"
  }
}

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify's serverless functions is that they run on the /.netlify/functions path. It wasn't ideal to type or remember it and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project's root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
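
With that redirect in place, the deployed site can query the API with a plain fetch() call. This assumes the /api redirect above and the message field from our schema:

// POST a GraphQL query to the redirected endpoint
fetch("/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ message }" }),
})
  .then((response) => response.json())
  .then((result) => console.log(result.data.message)); // "Hello World"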

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you're interested in working with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you'd like to see the complete source code for this project, it's available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified's "Learn GraphQL in 40 minutes" article. It's a great resource to go one step deeper into GraphQL. However, it's also focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.

The post Building Serverless GraphQL API in Node with Express and Netlify appeared first on CSS-Tricks.

MEAN Full Stack on Amazon AWS

This is the first and main post of the series; in it, I will explain the overall idea. Please be aware that this is going to be a blog series exploring all the aspects of running a full-stack application on AWS (Amazon Web Services).

The main goal is to share what I've learned about AWS, but I will also explain a bit about the full-stack application itself, which we will implement in the MEAN stack (MongoDB, Express, Angular, and Node).

Using Heroku for Static Web Content

In the "Moving Away From AWS and Onto Heroku" article, I provided an introduction of the application I wanted to migrate from Amazon's popular AWS solution to Heroku.  Subsequently, the "Destination Heroku" article illustrated the establishment of a new Heroku account and focused on introducing a Java API (written in Spring Boot) connecting to a ClearDB instance within this new platform-as-a-service (PaaS) ecosystem.  My primary goal is to find a solution that allows my limited time to be focused on providing business solutions instead of getting up to speed with DevOps processes.

Quick Recap

As a TL;DR (too long; didn't read) to the original article, I built an Angular client and a Java API for the small business owned by my mother-in-law.  After a year of running the application on Elastic Beanstalk and S3, I wanted to see if there was a better solution that would allow me to focus more on writing features and enhancements and not have to worry about learning, understanding, and executing DevOps-like aspects inherent within the AWS ecosystem.

NestJS: A Backend NodeJS Framework

Last week, an idea struck me: explore the frameworks available in the Node.js ecosystem for developing backend APIs. I had been using ExpressJS for a long time, and I thought it was about time to see what alternative frameworks were like.

I started listing down all the features that I wanted in a good NodeJS framework:

Worker Threads: Node Parallelism

Concurrency vs Parallelism

Node.js has long excelled at concurrency. With the recent release of Node 13.0, Node now has a stable answer to parallelism as well. 

Concurrency can be thought of as switching between async processes, which all take turns executing, and, while idle, return control back to the event loop. On the other hand, parallelism is the ability for a process to separate and run simultaneously on multiple threads. There are other solutions in JavaScript that have tried to address this problem. For an in-depth comparison, I found this article useful.
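
For a taste of what that looks like in code, here's a minimal worker_threads sketch: the main thread spins up a worker, hands it some data, and stays free to keep servicing the event loop while the worker crunches in parallel:

// main.js
const { Worker } = require("worker_threads");

const worker = new Worker("./worker.js", { workerData: { upTo: 1e8 } });
worker.on("message", (sum) => console.log("Sum computed in worker:", sum));
worker.on("error", (err) => console.error(err));

// worker.js (a separate file)
// const { parentPort, workerData } = require("worker_threads");
// let sum = 0;
// for (let i = 0; i < workerData.upTo; i++) sum += i;
// parentPort.postMessage(sum);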

Create a GraphQL API With Node, Mongoose, and Express

GraphQL is a technology that helps developers across the board to build more robust software more quickly. The ability to request all of the information you need in a single request is a game-changer. It has simplified my backend development of APIs for consumption by mobile and web applications that would normally rely on RESTful APIs. A normal RESTful API may have several endpoints for various entities (e.g. users, submissions, etc.); with GraphQL, you can get all of this information in a single go using GraphQL's query language, also known as GQL.

Tutorial: Implement MongoDB to Your Angular App

When selecting a NoSQL database, MongoDB is often one of the top choices. Unlike traditional SQL databases, NoSQL databases are known for their ability to work with large datasets, which offers scalability and flexibility in app development. SQL databases, on the contrary, are comparatively rigid and not easily scalable when dealing with large sets of data.

What Can Be Done to Strengthen the Node.js Package Ecosystem?

I asked Michael Dawson, Node.js Community Lead at IBM and the Node.js community's Board representative on the OpenJS Board of Directors, to dig into key Node.js topics: the state of package quality, making it easier to develop containerised Node apps for the cloud, and, as a long-time member of the Node community, which upcoming Node events are best suited for people digging into these problems.

There are a great number of packages available through repositories and package management sites (for example, npm and GitHub). They come with different levels of quality and support, and people are concerned about issues with dependencies. What can be done to address this for Node users and the maintainers of those packages?

The Best of Node and Express [Articles and Tutorials]

Built on top of Google Chrome's V8 engine, Node.js (and its companion framework, Express.js) has come to dominate much of backend development, especially when JavaScript is your language of choice on the server side. In this edition of "Best of DZone," we're going to take a look at the two frameworks to better understand key pieces of functionality and how they work in tandem to create applications.

Before we begin, we'd like to thank those who were a part of this article. DZone has been, and continues to be, a community powered by contributors like you who are eager and passionate about sharing what they know with the rest of the world.