Reliably Send an HTTP Request as a User Leaves a Page

On several occasions, I’ve needed to send off an HTTP request with some data to log when a user does something like navigate to a different page or submit a form. Consider this contrived example of sending some information to an external service when a link is clicked:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
document.getElementById('link').addEventListener('click', (e) => {
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    }, 
    body: JSON.stringify({
      some: "data"
    })
  });
});
</script>

There’s nothing terribly complicated going on here. The link is permitted to behave as it normally would (I’m not using e.preventDefault()), but before that behavior occurs, a POST request is triggered on click. There’s no need to wait for any sort of response. I just want it to be sent to whatever service I’m hitting.

At first glance, you might expect the dispatch of that request to be synchronous, after which we’d continue navigating away from the page while some other server successfully handles that request. But as it turns out, that’s not what always happens.

Browsers don’t guarantee to preserve open HTTP requests

When something occurs to terminate a page in the browser, there’s no guarantee that an in-process HTTP request will be successful (see more about the “terminated” and other states of a page’s lifecycle). The reliability of those requests may depend on several things — network connection, application performance, and even the configuration of the external service itself.

As a result, sending data at those moments can be anything but reliable, which presents a potentially significant problem if you’re relying on those logs to make data-sensitive business decisions.

To help illustrate this unreliability, I set up a small Express application with a page using the code included above. When the link is clicked, the browser navigates to /other, but before that happens, a POST request is fired off.
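
For context, the /log endpoint on that Express app can be as small as this. It's a sketch of the shape I used, not the exact demo code:

// A minimal sketch of the Express app's /log endpoint (assumed shape).
const express = require('express');
const app = express();

app.use(express.json()); // parse the JSON bodies sent by fetch()

app.post('/log', (req, res) => {
  // A real service would persist the payload somewhere useful.
  console.log('received log payload:', req.body);
  res.sendStatus(204);
});

app.get('/other', (req, res) => res.send('Other page'));

app.listen(3000);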

While everything happens, I have the browser’s Network tab open, and I’m using a “Slow 3G” connection speed. Once the page loads and I’ve cleared the log out, things look pretty quiet:

Viewing HTTP request in the network tab

But as soon as the link is clicked, things go awry. When navigation occurs, the request is cancelled.

Viewing HTTP request fail in the network tab

And that leaves us with little confidence that the external service was actually able to process the request. Just to verify, the same behavior occurs when we navigate programmatically with window.location:

document.getElementById('link').addEventListener('click', (e) => {
  e.preventDefault();

  // Request is queued, but cancelled as soon as navigation occurs.
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      some: "data"
    }),
  });

  window.location = e.target.href;
});

Regardless of how or when navigation occurs and the active page is terminated, those unfinished requests are at risk of being abandoned.

But why are they cancelled?

The root of the issue is that, by default, requests made with fetch() or XMLHttpRequest are asynchronous and non-blocking. As soon as the request is queued, the actual work of the request is handed off to a browser-level API behind the scenes.

As it relates to performance, this is good — you don’t want requests hogging the main thread. But it also means there’s a risk of them being deserted when a page enters into that “terminated” state, leaving no guarantee that any of that behind-the-scenes work reaches completion. Here’s how Google summarizes that specific lifecycle state:

A page is in the terminated state once it has started being unloaded and cleared from memory by the browser. No new tasks can start in this state, and in-progress tasks may be killed if they run too long.

In short, the browser is designed with the assumption that when a page is dismissed, there’s no need to continue to process any background processes queued by it.

So, what are our options?

Perhaps the most obvious approach to avoid this problem is, as much as possible, to delay the user action until the request returns a response. In the past, this has been done the wrong way by use of the synchronous flag supported within XMLHttpRequest. But using it completely blocks the main thread, causing a host of performance issues — I’ve written about some of this in the past — so the idea shouldn’t even be entertained. In fact, it’s on its way out of the platform (Chrome v80+ has already removed it).

Instead, if you’re going to take this type of approach, it’s better to wait for a Promise to resolve as a response is returned. After it’s back, you can safely perform the behavior. Using our snippet from earlier, that might look something like this:

document.getElementById('link').addEventListener('click', async (e) => {
  e.preventDefault();

  // Wait for response to come back...
  await fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    }, 
    body: JSON.stringify({
      some: 'data'
    }),
  });

  // ...and THEN navigate away.
   window.location = e.target.href;
});

That gets the job done, but there are some non-trivial drawbacks.

First, it compromises the user’s experience by delaying the desired behavior. Collecting analytics data certainly benefits the business (and hopefully future users), but it’s less than ideal to make your present users pay the cost of realizing those benefits. Not to mention, as an external dependency, any latency or other performance issues within the service itself will be surfaced to the user. If a timeout in your analytics service prevents a customer from completing a high-value action, everyone loses.

Second, this approach isn’t as reliable as it initially sounds, since some termination behaviors can’t be programmatically delayed. For example, e.preventDefault() is useless in delaying someone from closing a browser tab. So, at best, it’ll cover collecting data for some user actions, but not enough to be able to trust it comprehensively.
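
As an aside, if you do want a signal that covers tab closes as well as navigations, the visibilitychange event is generally the most dependable place to fire these last-second requests. This is only a sketch; it still needs one of the request-preserving options covered next to be reliable:

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    // Fire-and-forget; the keepalive flag (covered below) lets the
    // request outlive the page.
    fetch("/log", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ some: "data" }),
      keepalive: true
    });
  }
});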

Instructing the browser to preserve outstanding requests

Thankfully, there are options to preserve outstanding HTTP requests that are built into the vast majority of browsers, and that don’t require the user experience to be compromised.

Using Fetch’s keepalive flag

If the keepalive flag is set to true when using fetch(), the corresponding request will remain open, even if the page that initiated that request is terminated. Using our initial example, that’d make for an implementation that looks like this:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
  document.getElementById('link').addEventListener('click', (e) => {
    fetch("/log", {
      method: "POST",
      headers: {
        "Content-Type": "application/json"
      }, 
      body: JSON.stringify({
        some: "data"
      }), 
      keepalive: true
    });
  });
</script>

When that link is clicked and page navigation occurs, the request is no longer cancelled:

Viewing HTTP request succeed in the network tab

Instead, we’re left with an (unknown) status, simply because the active page never waited around to receive any sort of response.

A one-liner like this is an easy fix, especially when it’s part of a commonly used browser API. But if you’re looking for a more focused option with a simpler interface, there’s another way with virtually the same browser support.

Using Navigator.sendBeacon()

The Navigator.sendBeacon() function is specifically intended for sending one-way requests (beacons). A basic implementation looks like this, sending a POST with stringified JSON and a “text/plain” Content-Type:

navigator.sendBeacon('/log', JSON.stringify({
  some: "data"
}));

But this API doesn’t permit you to send custom headers. So, in order for us to send our data as “application/json”, we’ll need to make a small tweak and use a Blob:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
  document.getElementById('link').addEventListener('click', (e) => {
    const blob = new Blob([JSON.stringify({ some: "data" })], { type: 'application/json; charset=UTF-8' });
    navigator.sendBeacon('/log', blob);
  });
</script>

In the end, we get the same result — a request that’s allowed to complete even after page navigation. But there’s something more going on that may give it an edge over fetch(): beacons are sent with a low priority.

To demonstrate, here’s what’s shown in the Network tab when both fetch() with keepalive and sendBeacon() are used at the same time:

Viewing HTTP request in the network tab

By default, fetch() gets a “High” priority, while the beacon (noted as the “ping” type above) has the “Lowest” priority. For requests that aren’t critical to the functionality of the page, this is a good thing. Taken straight from the Beacon specification:

This specification defines an interface that […] minimizes resource contention with other time-critical operations, while ensuring that such requests are still processed and delivered to destination.

Put another way, sendBeacon() ensures its requests stay out of the way of those that really matter for your application and your user’s experience.

An honorable mention for the ping attribute

It’s worth mentioning that a growing number of browsers support the ping attribute. When attached to links, it’ll fire off a small POST request:

<a href="http://localhost:3000/other" ping="http://localhost:3000/log">
  Go to Other Page
</a>

And the headers of those requests will contain the page on which the link was clicked (ping-from), as well as the href value of that link (ping-to):

headers: {
  'ping-from': 'http://localhost:3000/',
  'ping-to': 'http://localhost:3000/other',
  'content-type': 'text/ping',
  // ...other headers
},

It’s technically similar to sending a beacon, but has a few notable limitations:

  1. It’s strictly limited to use on links, which makes it a non-starter if you need to track data associated with other interactions, like button clicks or form submissions.
  2. Browser support is good, but not great. At the time of this writing, Firefox specifically doesn’t have it enabled by default.
  3. You’re unable to send any custom data along with the request. As mentioned, the most you’ll get is a couple of ping-* headers, along with whatever other headers are along for the ride.

All things considered, ping is a good tool if you’re fine with sending simple requests and don’t want to write any custom JavaScript. But if you need to send anything of more substance, it might not be the best thing to reach for.

So, which one should I reach for?

There are definitely tradeoffs to using either fetch with keepalive or sendBeacon() to send your last-second requests. To help discern which is the most appropriate for different circumstances, here are some things to consider:

You might go with fetch() + keepalive if:

  • You need to easily pass custom headers with the request.
  • You want to make a GET request to a service, rather than a POST.
  • You’re supporting older browsers (like IE) and already have a fetch polyfill being loaded.

But sendBeacon() might be a better choice if:

  • You’re making simple service requests that don’t need much customization.
  • You prefer the cleaner, more elegant API.
  • You want to guarantee that your requests don’t compete with other high-priority requests being sent in the application.
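
If you'd rather not make that call at every usage site, a small wrapper can prefer sendBeacon() and fall back to fetch() with keepalive when a beacon can't be queued or isn't supported. The helper name and fallback rules here are my own sketch, not anything standard:

function sendLog(url, data) {
  const body = JSON.stringify(data);

  // Prefer a beacon: it's low priority and survives page termination.
  // sendBeacon() returns false if the payload couldn't be queued.
  if (navigator.sendBeacon) {
    const blob = new Blob([body], { type: 'application/json' });
    if (navigator.sendBeacon(url, blob)) {
      return;
    }
  }

  // Otherwise, fall back to fetch() with keepalive, which also lets you
  // attach custom headers if the service needs them.
  fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
    keepalive: true
  });
}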

Avoid repeating my mistakes

There’s a reason I chose to do a deep dive into the nature of how browsers handle in-process requests as a page is terminated. A while back, my team saw a sudden change in the frequency of a particular type of analytics log after we began firing the request just as a form was being submitted. The change was abrupt and significant — a ~30% drop from what we had been seeing historically.

Digging into the reasons this problem arose, as well as the tools that are available to avoid it again, saved the day. So, if anything, I’m hoping that understanding the nuances of these challenges helps someone avoid some of the pain we ran into. Happy logging!



How to Cancel Pending API Requests to Show Correct Data

I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.

One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.

We can debounce our UI interactions, but that fundamentally does not solve our problem. Outdated network fetches will resolve and update our UI with wrong data up until the final network request finishes and updates our UI with the final correct state. The problem becomes more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.

Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.

The solution

It turns out there is a way to abort pending asynchronous requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.

The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.

Mozilla Developer Network

The AbortController API is simple: it exposes an AbortSignal that we insert into our fetch() calls, like so:

const abortController = new AbortController()
const signal = abortController.signal
fetch(url, { signal })

From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
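
One detail worth knowing: an aborted fetch() rejects with a DOMException named "AbortError", so it's worth swallowing that case explicitly rather than treating it like a real network failure. Continuing with the signal from above, a minimal sketch:

fetch(url, { signal })
  .then(res => res.json())
  .then(data => {
    /* update the UI with data */
  })
  .catch(err => {
    // Aborted requests reject with an AbortError — safe to ignore.
    if (err.name === 'AbortError') return;
    console.error(err);
  });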

Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:

The code is mostly the same with a few key distinctions:

  1. It creates a new cached variable, abortController, in a useRef in the <App /> component.
  2. For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
  3. It passes the obtained AbortSignal to the fetch() call.
  4. It aborts itself on the next fetch.

const App = () => {
  // Same as before, local variable and state declaration
  // ...

  // Create a new cached variable abortController in a useRef() hook
  const abortController = React.useRef()

  React.useEffect(() => {
    // If there is a pending fetch request with an associated AbortController, abort it
    if (abortController.current) {
      abortController.current.abort()
    }
    // Assign a new AbortController for the latest fetch to our useRef variable
    abortController.current = new AbortController()
    const { signal } = abortController.current

    // Same as before
    fetch(url, { signal }).then(res => {
      // Rest of our fetching logic, same as before
    })
  }, [
    abortController,
    sortByString,
    upperPrice,
    lowerPrice,
  ])
}

Conclusion

That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network fetches. This way, we are sure that our UI is updated once and only with the latest data from our API.
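
If it helps to see the two pieces side by side, here is a rough sketch of the combination. The debounce wrapper and renderDeals() are hypothetical stand-ins, not code from the demo:

// Hypothetical stand-ins for the demo's fetch-and-render logic.
const renderDeals = deals => { /* marshal data into the UI */ };

let abortController = null;
let debounceTimer = null;

function loadDeals(url) {
  // Debounce rapid UI interactions...
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(() => {
    // ...and abort whichever request is still in flight.
    if (abortController) abortController.abort();
    abortController = new AbortController();

    fetch(url, { signal: abortController.signal })
      .then(res => res.json())
      .then(renderDeals) // only the latest response reaches the UI
      .catch(err => {
        if (err.name !== 'AbortError') console.error(err);
      });
  }, 300);
}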



Using AbortController as an Alternative for Removing Event Listeners

The idea of an “abortable” fetch came to life in 2017 when AbortController was released. That gives us a way to bail on an API request initiated by fetch() — even multiple calls — whenever we want.

Here’s a super simple example using AbortController to cancel a fetch() request:

const controller = new AbortController();
const res = fetch('/', { signal: controller.signal });
controller.abort();
console.log(res); // => Promise(rejected): "DOMException: The user aborted a request"

You can really see its value when used as a modern interface for setTimeout. This way, making a fetch time out after, say, 10 seconds, is pretty straightforward:

function timeout(duration, signal) {
  return new Promise((resolve, reject) => {
    const handle = setTimeout(resolve, duration);
    signal?.addEventListener('abort', e => {
      clearTimeout(handle);
      reject(new Error('aborted'));
    });
  });
}

// Usage
const controller = new AbortController();
const promise = timeout(10000, controller.signal);
controller.abort();
console.log(promise); // => Promise(rejected): "Error: aborted"

But the big news is that addEventListener now accepts an AbortSignal as of Chrome 88. What’s cool about that? It can be used as an alternative to removeEventListener:

const controller = new AbortController();
eventTarget.addEventListener('event-type', handler, { signal: controller.signal });
controller.abort();

What’s even cooler than that? Well, because AbortController is capable of aborting multiple cancelable requests at once, it streamlines the process of removing multiple listeners in one fell swoop. I’ve already found it particularly useful for drag and drop.

Here’s how I would have written a drag and drop script without AbortController, relying on two removeEventListener calls to wipe out two different listeners:

// With removeEventListener
el.addEventListener('mousedown', e => {
  if (e.buttons !== 1) return;

  const onMousemove = e => {
    if (e.buttons !== 1) return;
    /* work */
  }

  const onMouseup = e => {
    if (e.buttons & 1) return;
    window.removeEventListener('mousemove', onMousemove);
    window.removeEventListener('mouseup', onMouseup);
  }

  window.addEventListener('mousemove', onMousemove);
  window.addEventListener('mouseup', onMouseup); // Can’t use `once: true` here because we want to remove the event only when primary button is up
});

With the latest update, addEventListener accepts the signal property as part of its options argument, allowing us to call abort() once to stop all event listeners when they’re no longer needed:

// With AbortController
el.addEventListener('mousedown', e => {
  if (e.buttons !== 1) return;

  const controller = new AbortController();

  window.addEventListener('mousemove', e => {
    if (e.buttons !== 1) return;
    /* work */
  }, { signal: controller.signal });

  window.addEventListener('mouseup', e => {
    if (e.buttons & 1) return;
    controller.abort();
  }, { signal: controller.signal });
});

Again, Chrome 88 is currently the only place where addEventListener officially accepts an AbortSignal. While other major browsers, including Firefox and Safari, support AbortController, integrating its signal with addEventListener is a no go at the moment… and there are no signals (pun sorta intended) that they plan to work on it. That said, a polyfill is available.
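
If you need to support browsers that silently ignore the option, you can feature-detect it before relying on abort() for cleanup. This is a sketch along the lines of the getter-based detection pattern commonly used for other addEventListener options (like passive):

// Detect whether addEventListener understands the `signal` option.
function supportsAbortSignalListeners() {
  let supported = false;
  try {
    const options = {
      get signal() {
        // Only read by browsers that recognize the option.
        supported = true;
        return undefined;
      }
    };
    window.addEventListener('test', () => {}, options);
    window.removeEventListener('test', () => {}, options);
  } catch (e) {
    // Very old browsers may throw on object options; treat as unsupported.
  }
  return supported;
}

// Fall back to manual removeEventListener calls when this returns false.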



Hey, let’s create a functional calendar app with the JAMstack

I’ve always wondered how dynamic scheduling worked so I decided to do extensive research, learn new things, and write about the technical part of the journey. It’s only fair to warn you: everything I cover here is three weeks of research condensed into a single article. Even though it’s beginner-friendly, it’s a healthy amount of reading. So, please, pull up a chair, sit down and let’s have an adventure.

My plan was to build something that looked like Google Calendar but only demonstrate three core features:

  1. List all existing events on a calendar
  2. Create new events
  3. Schedule an email notification based on the date chosen during creation. The schedule should run some code to email the user when the time is right.

Pretty, right? Make it to the end of the article, because this is what we’ll make.

A calendar month view with a pop-up form for creating a new event as an overlay.

The only knowledge I had about asking my code to run at a later or deferred time was CRON jobs. The easiest way to use a CRON job is to statically define the job in your code. That's the catch: because the schedule is statically defined, I cannot simply create an event the way Google Calendar does and have it update my CRON code on the fly. If you are experienced with writing CRON triggers, you feel my pain. If you're not, you are lucky; you might never have to use CRON this way.

To elaborate more on my frustration, I needed to trigger a schedule based on a payload of HTTP requests. The dates and information about this schedule would be passed in through the HTTP request. This means there’s no way to know things like the scheduled date beforehand.

We (my colleagues and I) figured out a way to make this work and — with the help of Sarah Drasner’s article on Durable Functions — I understood what I needed to learn (and unlearn for that matter). You will learn about everything I worked on in this article, from event creation to email scheduling to calendar listings. Here is a video of the app in action:

You might notice the subtle delay. This has nothing to do with the execution timing of the schedule or running the code. I am testing with a free SendGrid account, which I suspect has some form of latency. You can confirm this by testing the serverless function without actually sending emails; you would notice that the code runs at exactly the scheduled time.

Tools and architecture

Here are the three fundamental units of this project:

  1. React Frontend: Calendar UI, including the UI to create, update or delete events.
  2. 8Base GraphQL: A back-end database layer for the app. This is where we will store, read and update our data. The fun part is you won’t write any code for this back end.
  3. Durable Functions: Durable Functions are a kind of serverless function with the power of remembering their state from previous executions. This is what replaces CRON jobs and solves the ad hoc problem we described earlier.

See the Pen durable-func1 by Chris Nwamba (@codebeast) on CodePen.

The rest of this post will have three major sections based on the three units we saw above. We will take them one after the other, build them out, test them, and even deploy the work. Before we get on with that, let’s set things up with a starter project I made.

Project Repo

Getting Started

You can set up this project in different ways — either as a full-stack project with the three units in one project or as a standalone project with each unit living in its own root. Well, I went with the first because it’s more concise, easier to teach, and manageable since it’s one project.

The app will be a create-react-app project and I made a starter for us to lower the barrier to set up. It comes with supplementary code and logic that we don’t need to explain since they are out of the scope of the article. The following are set up for us:

  1. Calendar component
  2. Modal and popover components for presenting event forms
  3. Event form component
  4. Some GraphQL logic to query and mutate data
  5. A Durable Serverless Function scaffold where we will write the schedulers

Tip: Each existing file that we care about has a comment block at the top of the document. The comment block tells you what is currently happening in the code file and a to-do section that describes what we are required to do next.

Start by cloning the starter from GitHub:

git clone -b starter --single-branch https://github.com/christiannwamba/calendar-app.git

Install the npm dependencies described in the root package.json file as well as the serverless package.json:

npm install
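
The starter keeps a second package.json inside the serverless folder, so (assuming that layout) install its dependencies there as well:

cd serverless
npm install
cd ..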

Orchestrated Durable Functions for scheduling

There are two words we need to get out of the way first before we can understand what this term is — orchestration and durable.

Orchestration was originally used to describe an assembly of well-coordinated events, actions, etc. It is heavily borrowed in computing to describe a smooth coordination of computer systems. The key word is coordinate. We need to put two or more units of a system together in a coordinated way.

Durable is used to describe anything that has the outstanding feature of lasting longer.

Put system coordination and long lasting together, and you get Durable Functions. This is the most powerful feature of Azure’s Serverless Functions. Based on what we now know, Durable Functions have these two features:

  1. They can be used to assemble the execution of two or more functions and coordinate them so race conditions do not occur (orchestration).
  2. Durable Functions remember things. This is what makes them so powerful. They break the number one rule of HTTP: statelessness. Durable Functions keep their state intact no matter how long they have to wait. Create a schedule for 1,000,000 years into the future and a durable function will execute after one million years while remembering the parameters that were passed to it on the day it was triggered. That means Durable Functions are stateful.

These durability features unlock a new realm of opportunities for serverless functions and that is why we are exploring one of those features today. I highly recommend Sarah’s article one more time for a visualized version of some of the possible use cases of Durable Functions.

I also made a visual representation of the behavior of the Durable Functions we will be writing today. Take this as an animated architectural diagram:

Shows the touch-points of a serverless system.

A data mutation from an external system (8Base) triggers the orchestration by calling the HTTP Trigger. The trigger then calls the orchestration function which schedules an event. When the time for execution is due, the orchestration function is called again but this time skips the orchestration and calls the activity function. The activity function is the action performer. This is the actual thing that happens e.g. "send email notification".

Create orchestrated Durable Functions

Let me walk you through creating functions using VS Code. You need two things:

  1. An Azure account
  2. VS Code

Once you have both set up, you need to tie them together. You can do this using a VS Code extension and a Node CLI tool. Start by installing the CLI tool:


npm install -g azure-functions-core-tools

# OR

brew tap azure/functions
brew install azure-functions-core-tools

Next, install the Azure Function extension to have VS Code tied to Functions on Azure. You can read more about setting up Azure Functions from my previous article.


Now that you have all the setup done, let’s get into creating these functions. The functions we will be creating will map to the following folders.

  • schedule: Durable HTTP Trigger
  • scheduleOrchestrator: Durable Orchestration
  • sendEmail: Durable Activity

Start with the trigger.

  1. Click on the Azure extension icon and follow the image below to create the schedule function
    Shows the interface steps going from Browse to JavaScript to Durable Functions HTTP start to naming the function schedule.
  2. Since this is the first function, we chose the folder icon to create a function project. The icon after that creates a single function (not a project).
  3. Click Browse and create a serverless folder inside the project. Select the new serverless folder.
  4. Select JavaScript as the language. If TypeScript (or any other language) is your jam, please feel free.
  5. Select Durable Functions HTTP starter. This is the trigger.
  6. Name the first function as schedule

Next, create the orchestrator. Instead of creating a function project, create a function instead.

  1. Click on the function icon:
  2. Select Durable Functions orchestrator.
  3. Give it a name, scheduleOrchestrator and hit Enter.
  4. You will be asked to select a storage account. Orchestrator uses storage to preserve the state of a function-in-process.
  5. Select a subscription in your Azure account. In my case, I chose the free trial subscription.
  6. Follow the few remaining steps to create a storage account.

Finally, repeat the previous step to create an Activity. This time, the following should be different:

  • Select Durable Functions activity.
  • Name it sendEmail.
  • No storage account will be needed.

Scheduling with a durable HTTP trigger

The code in serverless/schedule/index.js does not need to be touched. This is what it looks like originally when the function is scaffolded using VS Code or the CLI tool.

const df = require("durable-functions");
module.exports = async function (context, req) {
  const client = df.getClient(context);
  const instanceId = await client.startNew(req.params.functionName, undefined, req.body);
  context.log(`Started orchestration with ID = '${instanceId}'.`);
  return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

What is happening here?

  1. We’re creating a Durable Functions client based on the context of the request.
  2. We’re calling the orchestrator using the client's startNew() function. The orchestrator function name is passed as the first argument to startNew() via the params object. A req.body is also passed to startNew() as the third argument, which is forwarded to the orchestrator.
  3. Finally, we return a set of data that can be used to check the status of the orchestrator function, or even cancel the process before it's complete.

The URL to call the above function would look like this:

http://localhost:7071/api/orchestrators/{functionName}

Where functionName is the name passed to startNew. In our case, it should be:

http://localhost:7071/api/orchestrators/scheduleOrchestrator

It’s also good to know that you can change how this URL looks.
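
If you want to sanity-check the trigger locally before wiring anything else up, you can POST a body shaped like the event the later functions expect. The field names here mirror what sendEmail destructures further down; the values are just placeholders:

curl -X POST http://localhost:7071/api/orchestrators/scheduleOrchestrator \
  -H "Content-Type: application/json" \
  -d '{"email": "someone@example.com", "title": "Team sync", "startAt": "2019-07-01T09:00:00.000Z", "description": "Weekly catch-up"}'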

Orchestrating with a Durable Orchestrator

The HTTP trigger startNew call calls a function based on the name we pass to it. That name corresponds to the name of the function and folder that holds the orchestration logic. The serverless/scheduleOrchestrator/index.js file exports a Durable Function. Replace the content with the following:

const df = require("durable-functions");
module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
  // TODO -- 1
  
  // TODO -- 2
});

The orchestrator function retrieves the request body from the HTTP trigger using context.df.getInput().

Replace TODO -- 1 with the following line of code which might happen to be the most significant thing in this entire demo:

yield context.df.createTimer(new Date(input.startAt))

What this line does is use Durable Functions to create a timer based on the date passed in from the request body via the HTTP trigger.

When this function executes and gets here, it will trigger the timer and bail temporarily. When the schedule is due, it will come back, skip this line and call the following line which you should use in place of TODO -- 2.

return yield context.df.callActivity('sendEmail', input);

The function would call the activity function to send an email. We are also passing a payload as the second argument.

This is what the completed function would look like:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
    
  yield context.df.createTimer(new Date(input.startAt))
    
  return yield context.df.callActivity('sendEmail', input);
});

Sending email with a durable activity

When a schedule is due, the orchestrator comes back to call the activity. The activity file lives in serverless/sendEmail/index.js. Replace what’s in there with the following:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  // TODO -- 1
  const msg = {}
  // TODO -- 2
  return msg;
};

It currently imports SendGrid’s mailer and sets the API key. You can get an API Key by following these instructions.

I am setting the key in an environmental variable to keep my credentials safe. You can safely store yours the same way by creating a SENDGRID_API_KEY key in serverless/local.settings.json with your SendGrid key as the value:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<<AzureWebJobsStorage>",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "SENDGRID_API_KEY": "<<SENDGRID_API_KEY>"
  }
}

Replace TODO -- 1 with the following line:

const { email, title, startAt, description } = context.bindings.payload;

This pulls out the event information from the input passed in by the orchestrator function. The input is attached to context.bindings. The binding can be named anything you like; we are calling it payload, so go to serverless/sendEmail/function.json and change the name value to payload:

{
  "bindings": [
    {
      "name": "payload",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}

Next, update TODO -- 2 with the following block to send an email:

const msg = {
  to: email,
  from: { email: 'chris@codebeast.dev', name: 'Codebeast Calendar' },
  subject: `Event: ${title}`,
  html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
};
// Await the send so the activity doesn't finish before the email is handed off
await sgMail.send(msg);

return msg;

Here is the complete version:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  const { email, title, startAt, description } = context.bindings.payload;
  const msg = {
    to: email,
    from: { email: 'chris@codebeast.dev', name: 'Codebeast Calendar' },
    subject: `Event: ${title}`,
    html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
  };
  await sgMail.send(msg);

  return msg;
};

Deploying functions to Azure

Deploying functions to Azure is easy. It’s merely a click away from the VS Code editor. Click on the circled icon to deploy and get a deploy URL:

Still with me this far in? You’re making great progress! It’s totally OK to take a break here, nap, stretch or get some rest. I definitely did while writing this post.

Data and GraphQL layer with 8Base

My easiest description and understanding of 8Base is "Firebase for GraphQL." 8Base is a database layer for any kind of app you can think of and the most interesting aspect of it is that it’s based on GraphQL.

The best way to describe where 8Base fits in your stack is to paint a picture of a scenario.

Imagine you are a freelance developer with a small-to-medium scale contract to build an e-commerce store for a client. Your core skills are on the web, so you are not very comfortable on the back end, though you can write a bit of Node.

Unfortunately, e-commerce requires managing inventories, order management, managing purchases, managing authentication and identity, etc. "Manage" at a fundamental level just means data CRUD and data access.

Instead of the redundant and boring process of creating, reading, updating, deleting, and managing access for entities in our back-end code, what if we could describe these business requirements in a UI? What if we could create tables that allow us to configure CRUD operations, auth and access? What if we had that kind of help and could focus only on building front-end code and writing queries? Everything we just described is tackled by 8Base.

Here is an architecture of a back-end-less app that relies on 8Base as its data layer:

Create an 8Base table for events storage and retrieval

The first thing we need to do before creating a table is to create an account. Once you have an account, create a workspace that holds all the tables and logic for a given project.

Next, create a table, name the table Events and fill out the table fields.

We need to configure access levels. Right now, there’s nothing to hide from each user, so we can just turn on all access to the Events table we created:

Setting up Auth is super simple with 8base because it integrates with Auth0. If you have entities that need to be protected or want to extend our example to use auth, please go wild.

Finally, grab your endpoint URL for use in the React app:

Testing GraphQL queries and mutations in the playground

Just to be sure that we are ready to take the URL to the wild and start building the client, let’s first test the API with a GraphQL playground and see if the setup is fine. Click on the explorer.

Paste the following query in the editor.

query {
  eventsList {
    count
    items {
      id
      title
      startAt
      endAt
      description
      allDay
      email
    }
  }
}

I created some test data through the 8base UI and I get the result back when I run the query:

You can explore the entire database using the schema document on the right side of the Explorer page.
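
Mutations can be exercised from the same playground. 8base generates mutation names from the table name, so confirm the exact name in that schema document; for an Events table, it should look roughly like this:

mutation {
  eventCreate(data: {
    title: "Team sync"
    startAt: "2019-07-01T09:00:00.000Z"
    endAt: "2019-07-01T10:00:00.000Z"
    description: "Weekly catch-up"
    allDay: false
    email: "someone@example.com"
  }) {
    id
    title
  }
}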

Calendar and event form interface

The third (and last) unit of our project is the React App which builds the user interfaces. There are four major components making up the UI and they include:

  1. Calendar: A calendar UI that lists all the existing events
  2. Event Modal: A React modal that renders the EventForm component to create an event
  3. Event Popover: Popover UI to read a single event, update an event using EventForm, or delete an event
  4. Event Form: An HTML form for creating new events

Before we dive right into the calendar component, we need to set up the React Apollo client. The React Apollo provider empowers you with tools to query a GraphQL data source using React patterns. The original provider allows you to use higher order components or render props to query and mutate data. We will be using a wrapper to the original provider that allows you to query and mutate using React Hooks.

In src/index.js, import the React Apollo Hooks and the 8base client in TODO -- 1:

import { ApolloProvider } from 'react-apollo-hooks';
import { EightBaseApolloClient } from '@8base/apollo-client';

At TODO -- 2, configure the client with the endpoint URL we got in the 8base setup stage:

const URI = 'https://api.8base.com/cjvuk51i0000701s0hvvcbnxg';

const apolloClient = new EightBaseApolloClient({
  uri: URI,
  withAuth: false
});

Use this client to wrap the entire App tree with the provider on TODO -- 3:

ReactDOM.render(
  <ApolloProvider client={apolloClient}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);

Showing events on the calendar

The Calendar component is rendered inside the App component, and it imports the BigCalendar component from npm. Then:

  1. We render Calendar with a list of events.
  2. We give Calendar a custom popover (EventPopover) component that will be used to edit events.
  3. We render a modal (EventModal) that will be used to create new events.

The only thing we need to update is the list of events. Instead of using the static array of events, we want to query 8base for all stored events.

Replace TODO -- 1 with the following line:

const { data, error, loading } = useQuery(EVENTS_QUERY);

Import the useQuery library from npm and the EVENTS_QUERY at the beginning of the file:

import { useQuery } from 'react-apollo-hooks';
import { EVENTS_QUERY } from '../../queries';

EVENTS_QUERY is exactly the same query we tested in 8base explorer. It lives in src/queries and looks like this:

export const EVENTS_QUERY = gql`
  query {
    eventsList {
      count
      items {
        id
        ...
      }
    }
  }
`;

Let’s add a simple error and loading handler on TODO -- 2:

if (error) return console.log(error);
if (loading)
  return (
    <div className="calendar">
      <p>Loading...</p>
    </div>
  );

Notice that the Calendar component uses the EventPopover component to render a custom event. You can also observe that the Calendar component file renders EventModal as well. Both components have been set up for you, and their only responsibility is to render EventForm.

Create, update and delete events with the event form component

The component in src/components/Event/EventForm.js renders a form. The form is used to create, edit or delete an event. At TODO -- 1, import useCreateUpdateMutation and useDeleteMutation:

import { useCreateUpdateMutation, useDeleteMutation } from './eventMutationHooks'

  • useCreateUpdateMutation: This mutation either creates or updates an event depending on whether the event already existed.
  • useDeleteMutation: This mutation deletes an existing event.

A call to any of these functions returns another function. The function returned can then serve as an event handler.

Now, go ahead and replace TODO -- 2 with a call to both functions:

const createUpdateEvent = useCreateUpdateMutation(
  payload,
  event,
  eventExists,
  () => closeModal()
);
const deleteEvent = useDeleteMutation(event, () => closeModal());

These are custom hooks I wrote to wrap the useMutation hook exposed by React Apollo Hooks. Each hook creates a mutation and passes the mutation variables to the useMutation call. The blocks that look like the following in src/components/Event/eventMutationHooks.js are the most important parts:

useMutation(mutationType, {
  variables: {
    data
  },
  update: (cache, { data }) => {
    const { eventsList } = cache.readQuery({
      query: EVENTS_QUERY
    });
    cache.writeQuery({
      query: EVENTS_QUERY,
      data: {
        eventsList: transformCacheUpdateData(eventsList, data)
      }
    });
    //..
  }
});

Call the Durable Function HTTP trigger from 8Base

We have spent quite some time building the serverless structure, data storage and UI layers of our calendar app. To recap, the UI sends data to 8base for storage, 8base saves data and triggers the Durable Function HTTP trigger, the HTTP trigger kicks off the orchestration, and the rest is history. Currently, we are saving data with a mutation, but we are not calling the serverless function anywhere in 8base.

8base allows you to write custom logic, which is what makes it very powerful and extensible. Custom logic is made up of simple functions that are called based on actions performed on the 8base database. For example, we can set up a logic function to be called every time a mutation occurs on a table. Let’s create one that is called when an event is created.

Start by installing the 8base CLI:

npm install -g 8base

On the calendar app project run the following command to create a starter logic:

8base init 8base

The 8base init command creates a new 8base logic project. You can pass it a directory name; in this case, we are naming the logic folder 8base (don’t get it twisted).

Trigger scheduling logic

Delete everything in 8base/src and create a triggerSchedule.js file in the src folder. Once you have done that, drop in the following into the file:

const fetch = require('node-fetch');

module.exports = async event => {
  const res = await fetch('<HTTP Trigger URL>', {
    method: 'POST',
    body: JSON.stringify(event.data),
    headers: { 'Content-Type': 'application/json' }
  })
  const json = await res.json();
  console.log(event, json)
  return json;
};

The information about the GraphQL mutation is available on the event object as data.
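
For an Events.create trigger, event.data carries the fields of the record that was just created, so the object our logic forwards to the HTTP trigger looks roughly like this. This is a hypothetical example; the exact shape depends on your table fields:

// Hypothetical shape of the `event` argument for an Events.create trigger.
{
  data: {
    id: "generated-record-id",
    title: "Team sync",
    startAt: "2019-07-01T09:00:00.000Z",
    endAt: "2019-07-01T10:00:00.000Z",
    description: "Weekly catch-up",
    allDay: false,
    email: "someone@example.com"
  }
  // ...other metadata provided by 8base
}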

Replace <HTTP Trigger URL> with the URL you got after deploying your function. You can get the URL by going to the function in the Azure portal and clicking "Copy URL."

You also need to install the node-fetch module, which will grab the data from the API:

npm install --save node-fetch

8base logic configuration

The next thing to do is tell 8base what exact mutation or query that needs to trigger this logic. In our case, a create mutation on the Events table. You can describe this information in the 8base.yml file:

functions:
  triggerSchedule:
    handler:
      code: src/triggerSchedule.js
    type: trigger.after
    operation: Events.create

In a sense, this is saying, when a create mutation happens on the Events table, please call src/triggerSchedule.js after the mutation has occurred.

We want to deploy all the things

Before anything can be deployed, we need to log in to the 8Base account, which we can do via the command line:

8base login

Then, let’s run the deploy command to send and set up the app logic in your workspace instance.

8base deploy

Testing the entire flow

To see the app in all its glory, click on one of the days of the calendar. You should get the event modal containing the form. Fill that out and put a future start date so we trigger a notification. Try a date more than 2-5 mins from the current time because I haven’t been able to trigger a notification any faster than that.

Yay, go check your email! The email should have arrived thanks to SendGrid. Now we have an app that allows us to create events and get notified with the details of the event submission.


Earth day, API’s and sunshine.

Cassie Evans showcases some really nifty web design ideas and explores using the API provided by the company her team over at Clearleft recently hired to cover their building's roof with solar panels. Cassie outlines her journey designing a webpage that uses the API to populate some light data visualizations about the energy the building uses now that the solar panels are installed.

Here at Clearleft we’ve been taking small steps to reduce our environmental impact. In December 2018 we covered the roof of our home with solar panels.

With the first of the glorious summer sun starting to shine down on us, we started to ponder about what environmental impact they'd had over the last 5 months.

Luckily for us, our solar panels have an API, so we can not only find out that information, we can request it from SolarEdge and display it in our very own interface.

The post is a great practical look into using the Fetch API which is also something Zell Liew wrote up in thorough detail, covering the history of using APIs with JavaScript, handling errors and other weird things that might happen when working with it.

But, equally interesting and useful is reading through Cassie's thought process as she sketches a wireframe for the page, researches how to use the Fetch API, and integrates animation into a lovely SVG illustration. It's an exercise in both design and development that many of us can relate to but also learn from.

Oh and while we're on the topic of data visualizations, Dan Englishby posted something just today on the many ways of getting data into charts. Working with real-time APIs is covered there as well, and it makes a nice segue from Cassie's post.



Git Operations With Visual Studio, Part 2

Introduction

Today, I will show some more advanced Git operations using Visual Studio, without using the Git command line tool. This is the second part of the "Git Operation With Visual Studio" series. Please read the first article here. In the previous article, we saw basic Git operations like creating a repository and a branch, cloning, committing, pushing changes, and more. Now, in this article, I will explain how to merge branches and resolve conflicts, if any.

Update Local Branch

Update the local repository/branch to get changes from other team members who have already made and merged their changes. To keep the code synced with others, there are three operations that come into the picture.