Chaining API Requests With API Gateway

As the number of APIs that need to be integrated increases, managing the complexity of API interactions becomes increasingly challenging. With an API gateway, we can define a sequence of API calls, which breaks an API workflow down into smaller, more manageable steps. For example, on an online shopping site, when a customer searches for a product, the platform can first send a request to the product search API and then send a request to the product details API to retrieve more information about the matching products. In this article, we will create a custom plugin for Apache APISIX API Gateway to handle client requests that should be called in sequence.

What Is a Chaining API Request, and Why Do We Need It?

Chaining API requests (also called pipeline requests or sequential API calls) is a technique used in software development to manage the complexity of API interactions when completing a task requires multiple API calls. It resembles batch request processing, where you group multiple API requests into a single request and send them to the server as a batch, but with one key difference: a pipeline request sends a single request to the server that triggers a sequence of API requests executed in a defined order. Each API request in the sequence can modify the request and response data, and the response from one API request is passed as input to the next one in the sequence. Pipeline requests are useful when a client needs to execute a sequence of dependent API requests that must run in a specific order.
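
To make the idea concrete, here is a minimal client-side sketch in TypeScript of what such a chain does by hand; the endpoint paths and response shapes are hypothetical, and the point of the gateway plugin is to perform this sequencing server-side so the client sends only a single request:

interface SearchResult { ids: string[] }
interface ProductDetails { id: string; name: string; price: number }

// Step 1: call the (hypothetical) product search API.
// Step 2: feed its response into the (hypothetical) product details API.
async function searchWithDetails(query: string): Promise<ProductDetails[]> {
    const searchResponse = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
    const { ids }: SearchResult = await searchResponse.json();

    return Promise.all(
        ids.map(async (id) => {
            const detailsResponse = await fetch(`/api/products/${id}`);
            return detailsResponse.json() as Promise<ProductDetails>;
        })
    );
}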

A Guide To Redux Toolkit With TypeScript

If you are a React developer working on a complex application, you will need to use global state management for your app at some point. React Redux is one of the most popular libraries for state management used by many developers. However, React Redux has a complex setup process that I’ve found inefficient, not to mention it requires a lot of boilerplate code. The Redux team developed Redux Toolkit to simplify this process.

This article is for those with enough knowledge of React and TypeScript to work with Redux.

About Redux

Redux is a global state management library for React applications. If you have only used useState() hooks to manage your app state, you will find it hard to access that state in other parts of the application. With useState() hooks, state can only be passed from a parent component to its children via props, so you will be stuck with the problem of prop drilling if you need to pass it down through multiple levels of children. That’s where Redux comes in to manage the application state.
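
As a contrived sketch of the problem (the component names here are hypothetical), notice how the user value has to be threaded through a component that never uses it:

import React, { useState } from "react";

// The state lives at the top...
const App = () => {
    const [user] = useState("Ada");
    return <Dashboard user={user} />;
};

// ...passes through a component that never uses it...
const Dashboard = ({ user }: { user: string }) => <Sidebar user={user} />;

// ...and is finally consumed two levels down.
const Sidebar = ({ user }: { user: string }) => <p>Hello, {user}</p>;

export default App;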

Introducing Redux Toolkit

Redux Toolkit is a set of opinionated and standardised tools that simplify application development using the Redux state management library.

The primary benefit of using Redux Toolkit is that it removes the overhead of writing the boilerplate you’d otherwise have to write with plain Redux.

It eliminates the need to write standard Redux setup code, such as defining actions, reducers, and store configuration, which can be a significant amount of code to write and maintain.

Jerry Navi has a great tutorial that shows the full Redux setup process.

Why I Prefer Redux Toolkit Over Redux

The Redux Toolkit has several key features which make me use this library over plain Redux:

  1. Defining reducers
    With Redux Toolkit, you can specify a slice with a few lines of code to define a reducer, instead of defining actions and reducers separately as you would in plain Redux.
  2. Immutability helpers
    Redux Toolkit includes a set of utility functions that make it easy to update objects and arrays in an immutable way. This makes writing code that follows the Redux principles of immutability simpler.
  3. Built-in middleware
    Redux Toolkit includes built-in middleware that can handle asynchronous request tasks.
  4. DevTools integration
    Redux Toolkit includes integration with the Redux DevTools browser extension, which makes it easier to debug and analyse Redux code.

Using Redux Toolkit To Build A Project Issue Tracker

I think the best way to explain the value and benefits of using Redux Toolkit is simply to show them to you in a real-world context. So, let’s develop an app with it that is designed to create and track GitHub issues.

You can follow along with the code examples as we go and reference the full code anytime by grabbing it from GitHub. There is also a live deployment of this example that you can check out.

Start by creating a new React app with the following command:

yarn create react-app project_issue_tracker --template typescript

This generates a folder for our project with the basic files we need for development. The --template typescript part of the command adds TypeScript to the stack.

Now, let’s install the dependency packages required for our project and build the primary UI for the application before we implement Redux Toolkit. First, navigate to the project_issue_tracker folder we just created:

cd project_issue_tracker

Then run the following command to install Material UI and Emotion, where the former is a design library we can use to style components, and the latter enables writing CSS in JavaScript files.

yarn add @mui/material @emotion/react @emotion/styled

Now we can install Redux Toolkit and Redux itself:

yarn add @reduxjs/toolkit react-redux

We have everything we need to start developing! We can start by building the user interface.

Developing The User Interface

In this section, we will be developing the UI of the app. Open the main project folder and create a new components subfolder inside src. Inside this new folder, create a new file called ProjectCard.tsx. This is where we will write the code for a ProjectCard component that contains information about an open issue in the project issue tracker.

Let’s import some design elements from the Material UI package we installed into the new src/components/ProjectCard.tsx file to get us started:

import React from "react";
import { Typography, Grid, Stack, Paper} from "@mui/material";
interface IProps {
    issueTitle: string
}
const ProjectCard : React.FC<IProps> = ({ issueTitle }) => {
    return(
        <div className="project_card">
            <Paper elevation={1} sx={{p: '10px', m:'1rem'}}>
                <Grid container spacing={2}>
                    <Grid item xs={12} md={6}>
                        <Stack spacing={2}>
                            <Typography variant="h6" sx={{fontWeight: 'bold'}}>
                                Issue Title: {issueTitle}
                            </Typography>
                            <Stack direction='row' spacing={2}>
                                <Typography variant="body1">
                                    Opened: yesterday
                                </Typography>
                                <Typography variant="body1">
                                    Priority: medium
                                </Typography>
                            </Stack>
                        </Stack>
                    </Grid>
                </Grid>
            </Paper>
        </div>
    )
}
export default ProjectCard;

This creates the project card that displays an issue title, issue priority level, and the time the issue was “opened.” Notice that we are using an issueTitle prop that will be passed to the ProjectCard component to render the issue with a provided title.

Now, let’s create the component for the app’s HomePage to display all the issues. We’ll add a small form to the page for submitting new issues that contains a text field for entering the issue name and a button to submit the form. We can do that by creating a new src/HomePage.tsx file in the project folder and importing React’s useState hook, a few more styled elements from Material UI, and the ProjectCard component we set up earlier:

import React, { useState } from "react";
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
const HomePage = () => {
    const [textInput, setTextInput] = useState('');
    const handleTextInputChange = (e: React.ChangeEvent<HTMLInputElement>) => {
        setTextInput(e.target.value);
    };
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained">Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                        <ProjectCard issueTitle="Bug: Issue 1" />
                        <ProjectCard issueTitle="Bug: Issue 2" />
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

This results in a new HomePage component that a user can interact with to add new issues by entering an issue name in a form text input. When an issue is submitted (we will wire this up with Redux in a moment), a new ProjectCard component is added to the HomePage, which acts as an index for viewing all open issues.

The only thing left for the interface is to render the HomePage, which we can do by adding it to the App.tsx file. The full code is available here on GitHub.

Using Redux Toolkit

Now that our UI is finalised, we can move on to implementing Redux Toolkit to manage the state of this app. We will use Redux Toolkit to manage the state of the ProjectCard list by storing all the issues in a store that can be accessed from anywhere in the application.

Before we move to the actual implementation, let’s understand a few Redux Toolkit concepts to help understand what we’re implementing:

  1. createSlice
    This function makes it easy to define the reducer, actions, and the initialState under one object. Unlike plain Redux, you don’t need a switch statement for actions or separately defined action creators. It accepts an object containing a name (i.e., the name of the slice), the initial state of the store, and the reducers object, where you define all the case reducers along with their action types.
  2. configureStore
    This function is an abstraction over the Redux createStore() function. It removes the need to combine your reducers and create the store by hand: the store is configured automatically and can be passed to the Provider.
  3. createAsyncThunk
    This function simplifies making asynchronous calls. It automatically dispatches many different actions for managing the state of the calls and provides a standardised way to handle errors.

Let’s implement all of this! We will create the issueReducer with an addIssue() action that adds any new submitted issue to the projectIssues store. This can be done by creating a new file in src/redux/ called IssueReducer.ts with this code:

// Part 1
import { createSlice, PayloadAction } from "@reduxjs/toolkit"

// Part 2
export interface IssueInitialState {
    projectIssues: string[]
}
const initialState: IssueInitialState = {
    projectIssues: []
}

// Part 3
export const issueSlice = createSlice({
    name: 'issue',
    initialState,
    reducers: {
        addIssue: (state, action: PayloadAction<string>) => {
            state.projectIssues = [...state.projectIssues, action.payload]
        }
    }
})

// Part 4
export const { addIssue } = issueSlice.actions
export default issueSlice.reducer

Let’s understand each part of the code. First, we are importing the necessary functions from the Redux @reduxjs/toolkit package.

Then, we create the type definition of our initial state and initialise the initialState for the issueReducer. The initialState has a projectIssues[] list that will be used to store all the submitted issues. We can have as many properties defined in the initialState as we need for the application.

Thirdly, we are defining the issueSlice using Redux Toolkit’s createSlice function, which has the logic of the issueReducer as well as the different actions associated with it. createSlice accepts an object with a few properties, including:

  • name: the name of the slice,
  • initialState: the initial state of the reducer function,
  • reducers: an object that accepts different actions we want to define for our reducer.

Here, the slice is named issue, its initialState is defined, and a single addIssue action is associated with it. The addIssue action is dispatched whenever a new issue is submitted. We could define other actions, too, if the app required them, but this is all we need for this example.

Finally, in the last part of the code, we export the actions associated with our reducer and the issueSlice reducer. We have fully implemented our issueReducer, which stores all the submitted issues by dispatching the addIssue action.

Now let’s configure the issueReducer in our store so we can use it in the app. Create a new file in src/redux/ called index.ts, and add the following code:

import { configureStore } from "@reduxjs/toolkit";
import IssueReducer from "./IssueReducer";
export const store = configureStore({
    reducer: {
        issue: IssueReducer
    }
})
export type RootState = ReturnType<typeof store.getState>
export type AppDispatch = typeof store.dispatch

This code configures and creates the store using the configureStore() function that accepts a reducer where we can pass all of the different reducers.
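
The exported RootState and AppDispatch types also let us pre-type our hooks. This step is optional for our small app, but it is a common pattern (recommended in the Redux docs) that saves repeating the types at every call site. A sketch, assuming a new src/redux/hooks.ts file:

import { TypedUseSelectorHook, useDispatch, useSelector } from "react-redux";
import type { RootState, AppDispatch } from "./index";

// Use these throughout the app instead of the plain hooks.
export const useAppDispatch = () => useDispatch<AppDispatch>();
export const useAppSelector: TypedUseSelectorHook<RootState> = useSelector;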

We are done adding the reducer and configuring the store with Redux Toolkit. Let’s do the final step of passing the store to our app. Start by updating the App.tsx file to pass the store using the Provider:

import React from 'react';
import { Provider } from "react-redux"
import { store } from './redux';
import HomePage from './HomePage';
function App() {
    return (
        <div className="App">
            <Provider store={store}>
                <HomePage />
            </Provider>
        </div>
     );
}
export default App;

Here, you can see that we import the store and pass it directly to the Provider. We don’t need to write anything extra to create the store or configure DevTools like we would using plain Redux. This is definitely one of the ways Redux Toolkit streamlines things.

OK, we have successfully set up a store and a reducer for our app with Redux Toolkit. Let’s use our app now and see if it works. To quickly sum things up, the dispatch() function is used to dispatch any actions to the store, and useSelector() is used for accessing any state properties.

We will dispatch the addIssue action when the form button is clicked:

const handleClick = () => {
    dispatch(addIssue(textInput))
}

To access the projectIssue list stored in our reducer store, we can make use of useSelector() like this:

const issueList = useSelector((state: RootState) => state.issue.projectIssues)

Finally, we can render all the issues by map()-ping the issueList to the ProjectCard component:

{
    issueList.map((issue, index) => {
        return(
            <ProjectCard key={index} issueTitle={issue} />
        )
    })
}

The final code for HomePage.tsx looks like this:

import React, { useState } from "react";
import { useDispatch, useSelector } from "react-redux";
import { RootState } from "./redux/index"
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
import { addIssue } from "./redux/IssueReducer";
const HomePage = () => {
    const dispatch = useDispatch();
    const issueList = useSelector((state: RootState) => state.issue.projectIssues)
    const [textInput, setTextInput] = useState('');
    const handleTextInputChange = (e: React.ChangeEvent<HTMLInputElement>) => {
        setTextInput(e.target.value);
    };
    const handleClick = () => {
        dispatch(addIssue(textInput))
    }
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained" onClick={handleClick}>Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                    {
                        issueList.map((issue, index) => {
                            return(
                                <ProjectCard key={index} issueTitle={issue} />
                            )
                        })
                    }
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

Now, when we add and submit an issue using the form, that issue will be rendered on the homepage.

This section covered how to define a reducer and use it in the app. The following section will cover how Redux Toolkit makes asynchronous calls a relatively simple task.

Making Asynchronous Calls With Redux Toolkit

We implemented our store to save and render any newly added issue in our app. What if we want to call the GitHub API for a repository and list all of its issues in our app? In this section, we will see how to use the createAsyncThunk() API with the slice to fetch data and render all the repository issues using an API call.

I always prefer to use the createAsyncThunk() API of Redux Toolkit because it standardises how the different request states are handled, such as pending, fulfilled, and rejected. Another reason is that we don’t need to add extra configuration for the middleware.

Let’s add the code for creating a GithubIssue reducer first before we break it down to understand what’s happening. Add a new GithubIssueReducer.ts file in the /redux folder and add this code:

import { createAsyncThunk, createSlice } from '@reduxjs/toolkit';
export const fetchIssues = createAsyncThunk<string[], void, { rejectValue: string }>(
  "githubIssue/fetchIssues",
  async (_, thunkAPI) => {
    try {
      const response = await fetch("https://api.github.com/repos/github/hub/issues");
      const data = await response.json();
      const issues = data.map((issue: { title: string }) => issue.title);
      return issues;
    } catch (error) {
      return thunkAPI.rejectWithValue("Failed to fetch issues.");
    }
  }
);
interface IssuesState {
  issues: string[];
  loading: boolean;
  error: string | null;
}
const initialState: IssuesState = {
  issues: [],
  loading: false,
  error: null,
};
export const issuesSliceGithub = createSlice({
  name: 'github_issues',
  initialState,
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(fetchIssues.pending, (state) => {
        state.loading = true;
        state.error = null;
      })
      .addCase(fetchIssues.fulfilled, (state, action) => {
        state.loading = false;
        state.issues = action.payload;
      })
      .addCase(fetchIssues.rejected, (state, action) => {
        state.loading = false;
        // With rejectWithValue, the rejection value arrives in action.payload.
        state.error = action.payload ?? 'Something went wrong';
      });
  },
});
export default issuesSliceGithub.reducer;

Let’s understand the fetchIssues part first:

  1. We are using the createAsyncThunk() API provided by the Redux Toolkit. It helps create asynchronous actions and handles the app’s loading and error states.
  2. The action type name is the first argument passed to createAsyncThunk(). The specific action type name we have defined is githubIssue/fetchIssues.
  3. The second argument is a payload creator: an async function whose returned Promise resolves to the value the fulfilled action is dispatched with. Here, it fetches data from a GitHub API endpoint and maps the response data to a list of issue titles.
  4. The generic type parameters describe the thunk: it resolves to a string[], it is dispatched without any argument (hence the void type), and if the Promise returned by the async function is rejected via rejectWithValue, the thunk dispatches a rejected action whose rejectValue payload contains the string “Failed to fetch issues.”

When this action is dispatched, the API call is made, and the fetched issue titles are stored in the githubIssue slice’s state. We can follow this exact same sequence of steps to make any API calls we need.
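
For instance, here is a sketch (not part of our project code) of the same pattern for a thunk that does take an argument, in this case a hypothetical owner/repo path:

import { createAsyncThunk } from '@reduxjs/toolkit';

// The second generic parameter is now string instead of void, so the thunk
// is dispatched with an argument: dispatch(fetchRepoIssues("github/hub")).
export const fetchRepoIssues = createAsyncThunk<string[], string, { rejectValue: string }>(
  "githubIssue/fetchRepoIssues",
  async (repo, thunkAPI) => {
    try {
      const response = await fetch(`https://api.github.com/repos/${repo}/issues`);
      const data = await response.json();
      return data.map((issue: { title: string }) => issue.title);
    } catch (error) {
      return thunkAPI.rejectWithValue("Failed to fetch issues.");
    }
  }
);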

The second section of the code is similar to what we used when we created the issueSlice, but with three differences:

  1. extraReducers
    This object contains the reducers logic for the reducers not defined in the createSlice reducers object. It takes a builder object where different cases can be added using addCase for specific action types.
  2. addCase
    This method on the builder object creates a new case for the reducer function.
  3. API call states
    The callbacks passed to addCase run in response to the lifecycle actions that createAsyncThunk() dispatches automatically, updating the store based on the state of the API call (pending, fulfilled, and rejected).

We can now use the GithubIssue reducer actions and the store in our app. Let’s add the GithubIssueReducer to our store first. Update the /redux/index.ts file with this code:


import { configureStore } from "@reduxjs/toolkit";
import { useDispatch } from "react-redux";
import IssueReducer from "./IssueReducer";
import GithubIssueReducer from "./GithubIssueReducer";
export const store = configureStore({
    reducer: {
        issue: IssueReducer,
        githubIssue: GithubIssueReducer
    }
})
export type RootState = ReturnType<typeof store.getState>
export type AppDispatch = typeof store.dispatch
export const useAppDispatch = () => useDispatch<AppDispatch>()

We just added the GithubIssueReducer to our store with the name mapped to githubIssue. We can now use this reducer in our HomePage component to dispatch the fetchIssues() and populate our page with all the issues received from the GitHub API repo.

import React, { useState, useEffect } from "react";
import { useSelector } from "react-redux";
import { useAppDispatch, RootState, AppDispatch } from "./redux/index";
import { Box, Typography, TextField, Stack, Button } from "@mui/material";
import ProjectCard from "./components/ProjectCard";
import { addIssue } from "./redux/IssueReducer";
import { fetchIssues } from "./redux/GithubIssueReducer";
const HomePage = () => {
    const dispatch: AppDispatch = useAppDispatch();
    const [textInput, setTextInput] = useState('');
    const githubIssueList = useSelector((state: RootState) => state.githubIssue.issues)
    const loading = useSelector((state: RootState) => state.githubIssue.loading);
    const error = useSelector((state: RootState) => state.githubIssue.error);
    useEffect(() => {
        dispatch(fetchIssues())
      }, [dispatch]);

    if (loading) {
      return <div>Loading...</div>;
    }

    if (error) {
      return <div>Error: {error}</div>;
    }
    const handleTextInputChange = (e: React.ChangeEvent<HTMLInputElement>) => {
        setTextInput(e.target.value);
    };
    const handleClick = () => {
        console.log(textInput)
        dispatch(addIssue(textInput))
    }
    return(
        <div className="home_page">
            <Box sx={{ml: '5rem', mr: '5rem'}}>
                <Typography variant="h4" sx={{textAlign: 'center'}}>
                    Project Issue Tracker
                </Typography>
                <Box sx={{display: 'flex'}}>
                    <Stack spacing={2}>
                        <Typography variant="h5">
                            Add new issue
                        </Typography>
                        <TextField 
                        id="outlined-basic" 
                        label="Title" 
                        variant="outlined" 
                        onChange={handleTextInputChange}
                        value={textInput}
                        />
                        <Button variant="contained" onClick={handleClick}>Submit</Button>
                    </Stack>
                </Box>
                <Box sx={{ml: '1rem', mt: '3rem'}}>
                    <Typography variant="h5" >
                        Opened issue
                    </Typography>
                    {
                        githubIssueList?.map((issue: string, index: number) => {
                            return(
                                <ProjectCard key={index} issueTitle={issue} />
                            )
                        })
                    }
                </Box>
            </Box>
        </div>
    )
}
export default HomePage;

This updates the code in HomePage.tsx with two minor changes:

  1. We dispatch the fetchIssues() action created with createAsyncThunk() to make the API call inside the useEffect hook.
  2. We use the loading and error states when the component renders.

Now, when loading the app, you will first see the “Loading” text rendered, and once the API call is fulfilled, the githubIssueList will be populated with the titles of all the GitHub issues fetched from the repo.

Once again, the complete code for this project can be found on GitHub. You can also check out a live deployment of the app, which displays all the issues fetched from GitHub.

Conclusion

There we have it! We used Redux Toolkit in a React TypeScript application to build a fully functional project issue tracker that syncs with GitHub and allows us to create new issues directly from the app.

We learned many of the foundational concepts of Redux Toolkit, such as defining reducers, immutability helpers, built-in middleware, and DevTools integration. I hope you feel empowered to use Redux Toolkit effectively in your projects. With Redux Toolkit, you can improve the performance and scalability of your React applications by effectively managing the global state.

Further Reading on Smashing Magazine

The Safest Way To Hide Your API Keys When Using React

Back in the day, developers had to write all sorts of custom code to get different applications to communicate with each other. But, these days, Application Programming Interfaces (APIs) make it so much easier. APIs provide you with everything you need to interact with different applications smoothly and efficiently, most commonly where one application requests data from another.

While APIs offer numerous benefits, they also present a significant risk to your application security. That is why it is essential to learn about their vulnerabilities and how to protect them. In this article, we’ll delve into the wonderful world of API keys, discuss why you should protect your API keys, and look at the best ways to do so when using React.

What Are API Keys?

If you recently signed up for an API, you will have received an API key. Think of API keys as secret passwords that prove to the provider that it is you, or your app, attempting to access the API. While some APIs are free, others charge for access, and because most API keys never expire, an exposed key can be abused indefinitely, so their safety is well worth worrying about.

Why Do API Keys Need To Be Protected?

Protecting your API keys is crucial for guaranteeing the security and integrity of your application. Here are some reasons why you ought to guard your API keys:

  • To prevent unauthorized API requests.
    If someone obtains your API key, they can use it to make unauthorized requests, which could have serious ramifications, especially if your API contains sensitive data.
  • Financial insecurity.
    Some APIs come with a financial cost, and if someone gains access to your API key and racks up requests beyond your budget, you may be stuck with a hefty bill that jeopardizes your financial stability.
  • Data theft, manipulation, or deletion.
    If a malicious person obtains access to your API key, they may steal, manipulate, delete, or use your data for their purposes.

Best Practices For Hiding API Keys In A React Application

Now that you understand why API keys must be protected, let’s take a look at some methods for hiding API keys and how to integrate them into your React application.

Environment Variables

Environment variables (env) store information about the environment in which a program is running. They enable you to keep sensitive data out of your application code, such as API keys, tokens, passwords, and any other data you’d like to keep hidden from the public.

One of the most popular env packages you can use in your React application to hide sensitive data is the dotenv package. To get started:

  1. Navigate to your React application directory and run the command below.
    npm install dotenv --save
    
  2. Outside of the src folder in your project root directory, create a new file called .env.

  3. In your .env file, add the API key and its corresponding value in the following format:
    # for CRA applications
    REACT_APP_API_KEY=A1234567890B0987654321C
    
    # for Vite applications
    VITE_SOME_KEY=12345GATGAT34562CDRSCEEG3T
    
  4. Save the .env file and avoid sharing it publicly or committing it to version control.
  5. You can now use process.env (for CRA) or import.meta.env (for Vite) to access your environment variables in your React application.
    // for CRA applications
    'X-RapidAPI-Key': process.env.REACT_APP_API_KEY
    // for Vite applications
    'X-RapidAPI-Key': import.meta.env.VITE_SOME_KEY
    
  6. Restart your application for the changes to take effect.

However, running your project on your local computer is only the beginning. At some point, you may need to push your code to GitHub, which could potentially expose your .env file. So what then? You can use the .gitignore file to keep it out of the repository.

The .gitignore File

The .gitignore file is a text file that tells Git which untracked files to ignore so that they are never committed or pushed to the repo. To use it here, add .env to the .gitignore file before staging your commits and pushing your code to GitHub.

// .gitignore
# dependencies
/node_modules
/.pnp
.pnp.js

# api keys
.env

Keep in mind that whenever you host your project on a platform like Vercel or Netlify, you need to provide your environment variables in your project settings and then redeploy your app for the changes to take effect.

Back-end Proxy Server

While environment variables can be an excellent way to protect your API keys, remember that they can still be compromised. Your keys can still be stolen if an attacker inspects your bundled code in the browser. So, what then can you do? Use a back-end proxy server.

A back-end proxy server acts as an intermediary between your client application and your server application. Instead of accessing the API directly from the front end, the front end sends a request to the back-end proxy server; the proxy server attaches the API key and makes the request to the API. Once the response is received, the proxy forwards it to the front end, so the API key never leaves the server. This way, your API key will never appear in your front-end code, and no one will be able to steal it by inspecting your bundle. Great! Now let’s take a look at how we can go about this:

  1. Install necessary packages.
    To get started, you need to install some packages such as Express, CORS, Axios, and Nodemon. To do this, navigate to the directory containing your React project and execute the following command:
    npm install express cors axios nodemon
    
  2. Create a back-end server file.
    In your project root directory, outside your src folder, create a JavaScript file that will contain all of your requests to the API.

  3. Initialize dependencies and set up an endpoint.
    In your backend server file, initialize the installed dependencies and set up an endpoint that will make a GET request to the third-party API and return the response data on the listened port. Here is an example code snippet:
    // defining the server port
    const port = 5000
    
    // initializing installed dependencies
    const express = require('express')
    require('dotenv').config()
    const axios = require('axios')
    const app = express()
    const cors = require('cors')
    app.use(cors())
    
    // listening for port 5000
    app.listen(port, () => console.log(`Server is running on ${port}`))
    
    // API request
    app.get('/', (req, res) => {
      const options = {
        method: 'GET',
        url: 'https://wft-geo-db.p.rapidapi.com/v1/geo/adminDivisions',
        headers: {
          'X-RapidAPI-Key': process.env.REACT_APP_API_KEY,
          'X-RapidAPI-Host': 'wft-geo-db.p.rapidapi.com'
        }
      };
      axios.request(options)
        .then(function (response) {
          res.json(response.data);
        })
        .catch(function (error) {
          console.error(error);
        });
    })
  4. Add a script to your package.json file that will run the back-end proxy server (see the sketch after this list).

  5. Kickstart the back-end server by running the command below and then, in this case, navigate to localhost:5000.
    npm run start:backend
    
  6. Make a request to the backend server (http://localhost:5000/) from the front end instead of directly to the API endpoint. Here’s an illustration:
    import axios from "axios";
    import {useState, useEffect} from "react"
    
    function App() {
    
      const [data, setData] = useState(null)
    
      useEffect(()=>{
        const options = {
          method: 'GET',
          url: "http://localhost:5000",
        }
        axios.request(options)
        .then(function (response) {
            setData(response.data.data)
        })
        .catch(function (error) {
            console.error(error);
        })
      }, [])
    
      console.log(data)
    
      return (
        <main className="App">
          <h1>How to Create a Backend Proxy Server for Your API Keys</h1>
          {data && data.map((result) => (
            <section key={result.id}>
              <h4>Name: {result.name}</h4>
              <p>Population: {result.population}</p>
              <p>Region: {result.region}</p>
              <p>Latitude: {result.latitude}</p>
              <p>Longitude: {result.longitude}</p>
            </section>
          ))}
        </main>
      )
    }
    
    export default App;
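
The script from step 4 isn’t shown above, so here is a minimal sketch of the relevant part of package.json; it assumes the back-end server file from step 2 is named server.js (both the file name and the script name are illustrative):

"scripts": {
  "start": "react-scripts start",
  "start:backend": "nodemon server.js"
}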

Okay, there you have it! By following these steps, you'll be able to hide your API keys using a back-end proxy server in your React application.

Key Management Service

Even though environment variables and the back-end proxy server allow you to safely hide your API keys online, you are still not completely safe. You may have friends or foes around you who can access your computer and steal your API key. That is why data encryption is essential.

With a key management service provider, you can encrypt, use, and manage your API keys. There are tons of key management services that you can integrate into your React application, but to keep things simple, I will only mention a few:

  • AWS Secrets Manager
    AWS Secrets Manager is a secrets management service provided by Amazon Web Services. It enables you to store and retrieve secrets such as database credentials, API keys, and other sensitive information programmatically via API calls to the AWS Secrets Manager service. There are a ton of resources that can get you started in no time.
  • Google Cloud Secret Manager
    The Google Cloud Secret Manager is a key management service provided and fully managed by the Google Cloud Platform. It is capable of storing, managing, and accessing sensitive data such as API keys, passwords, and certificates. The best part is that it seamlessly integrates with Google’s back-end-as-a-service features, making it an excellent choice for any developer looking for an easy solution.
  • Azure Key Vault
    The Azure Key Vault is a cloud-based service provided by Microsoft Azure that allows you to seamlessly store and manage a variety of secrets, including passwords, API keys, database connection strings, and other sensitive data that you don’t want to expose directly in your application code.

There are more key management services available, and you can choose to go with any of the ones mentioned above. But if you want to go with a service that wasn’t mentioned, that’s perfectly fine as well.
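
As a taste of what this looks like in practice, here is a minimal Node.js sketch that reads a secret with the AWS SDK v3 Secrets Manager client; the secret name and region are placeholders, and this code belongs on your back end, never in the browser:

// Reading an API key from AWS Secrets Manager (AWS SDK v3).
const {
  SecretsManagerClient,
  GetSecretValueCommand,
} = require("@aws-sdk/client-secrets-manager");

const client = new SecretsManagerClient({ region: "us-east-1" });

async function getApiKey() {
  const response = await client.send(
    // "my-app/rapidapi-key" is a hypothetical secret name.
    new GetSecretValueCommand({ SecretId: "my-app/rapidapi-key" })
  );
  return response.SecretString;
}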

Tips For Ensuring Security For Your API Keys

You have everything you need to keep your API keys and data secure. So, if you have existing projects in which you have accidentally exposed your API keys, don’t worry; I've put together some handy tips to help you identify and fix flaws in your React application codebase:

  1. Review your existing codebase and identify any hardcoded API key that needs to be hidden.
  2. Use environment variables with .gitignore to securely store your API keys. This will help to prevent accidental exposure of your keys and enable easier management across different environments.
  3. To add an extra layer of security, consider using a back-end proxy server to protect your API keys, and, for advanced security needs, a key management tool would do the job.

Conclusion

Awesome! You can now protect your API keys in React like a pro and be confident that your application data is safe and secure. Whether you use environment variables, a back-end proxy server, or a key management tool, they will keep your API keys safe from prying eyes.

Further Reading On SmashingMag

How ChatGPT Writes Code for Automation Tool Cypress

In its first week after launch, ChatGPT shattered Internet records by becoming extremely popular. As someone who works in QA automation, my first thought when I started looking into it was how to use this platform to make the jobs of testers doing Web and UI automation simpler.

ChatGPT may be used to write code in a variety of programming languages and technologies. After more investigation, I made the decision to create some scenarios using it. I have created some use cases around UI, API, and Cucumber feature file generation using ChatGPT.

Exploring The Potential Of Web Workers For Multithreading On The Web

Web Workers are a powerful feature of modern web development and were introduced as part of the HTML5 specification in 2009. They were designed to provide a way to execute JavaScript code in the background, separate from the main execution thread of a web page, in order to improve performance and responsiveness.

The main thread is the single execution context responsible for rendering the UI, executing JavaScript code, and handling user interactions. In other words, JavaScript is “single-threaded”: any time-consuming task executed on it, such as a complex calculation or heavy data processing, blocks the main thread and causes the UI to freeze and become unresponsive.

This is where Web Workers come in.

Web Workers were implemented as a way to address this problem by allowing time-consuming tasks to be executed in a separate thread, called a worker thread. This enabled JavaScript code to be executed in the background without blocking the main thread and causing the page to become unresponsive.

Creating a Web Worker in JavaScript is not a complicated task. The following steps provide a starting point for integrating a Web Worker into your application:

  1. Create a new JavaScript file that contains the code you want to run in the worker thread. This file should not contain any references to the DOM, as it will not have access to it.
  2. In your main JavaScript file, create a new worker object using the Worker constructor. This constructor takes a single argument, which is the URL of the JavaScript file you created in step 1.
    const worker = new Worker('worker.js');
    
  3. Add event listeners to the worker object to handle messages sent between the main thread and the worker thread. The onmessage event handler is used to handle messages sent from the worker thread, while the postMessage method is used to send messages to the worker thread.
    worker.onmessage = function(event) {
      console.log('Worker said: ' + event.data);
    };
    worker.postMessage('Hello, worker!');
    
  4. In your worker JavaScript file, add an event listener to handle messages sent from the main thread using the onmessage property of the self object. You can access the data sent with the message using the event.data property.
    self.onmessage = function(event) {
      console.log('Main thread said: ' + event.data);
      self.postMessage('Hello, main thread!');
    };
    

Now let’s run the web application and test the worker. We should see messages printed to the console indicating that messages were sent and received between the main thread and the worker thread.

One key difference between Web Workers and the main thread is that Web Workers have no access to the DOM or the UI. This means that they cannot directly manipulate the HTML elements on the page or interact with the user.

Web Workers are designed to perform tasks that do not require direct access to the UI, such as data processing, image manipulation, or calculations.

Another important difference is that Web Workers are designed to run in a sandboxed environment, separate from the main thread, which means that they have limited access to system resources and cannot access certain APIs, such as the localStorage or sessionStorage APIs. However, they can communicate with the main thread through a messaging system, allowing data to be exchanged between the two threads.

Importance And Benefits Of Web Workers For Multithreading On The Web

Web Workers provide a way for web developers to achieve multithreading on the web, which is crucial for building high-performance web applications. By enabling time-consuming tasks to be executed in the background, separate from the main thread, Web Workers improve the overall responsiveness of web pages and allow for a more seamless user experience. The following are some of the key benefits of Web Workers for multithreading on the web.

Improved Resource Utilization

By allowing time-consuming tasks to be executed in the background, Web Workers make more efficient use of system resources, enabling faster and more efficient processing of data and improving overall performance. This is especially important for web applications that involve large amounts of data processing or image manipulation, as Web Workers can perform these tasks without impacting the user interface.

Increased Stability And Reliability

By isolating time-consuming tasks in separate worker threads, Web Workers help to prevent crashes and errors that can occur when executing large amounts of code on the main thread. This makes it easier for developers to write stable and reliable web applications, reducing the likelihood of user frustration or loss of data.

Enhanced Security

Web Workers run in a sandboxed environment that is separate from the main thread, which helps to enhance the security of web applications. This isolation prevents malicious code from accessing or modifying data in the main thread or other Web Workers, reducing the risk of data breaches or other security vulnerabilities.

Better Responsiveness And Scalability

Web Workers can help to improve resource utilization by freeing up the main thread to handle user input and other tasks while the Web Workers handle time-consuming computations in the background. This can help to improve overall system performance and reduce the likelihood of crashes or errors. Additionally, by leveraging multiple CPU cores, Web Workers can make more efficient use of system resources, enabling faster and more efficient processing of data.

Web Workers also enable better load balancing and scaling of web applications. By allowing tasks to be executed in parallel across multiple worker threads, Web Workers can help distribute the workload evenly across multiple cores or processors, enabling faster and more efficient processing of data. This is particularly important for web applications that experience high traffic or demand, as Web Workers can help to ensure that the application can handle an increased load without impacting performance.

Practical Applications Of Web Workers

Let us explore some of the most common and useful applications of Web Workers. Whether you’re building a complex web application or a simple website, understanding how to leverage Web Workers can help you improve performance and provide a better user experience.

Offloading CPU-Intensive Work

Suppose we have a web application that needs to perform a large, CPU-intensive computation. If we perform this computation in the main thread, the user interface will become unresponsive, and the user experience will suffer. To avoid this, we can use a Web Worker to perform the computation in the background.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const result = event.data;
  console.log(result);
};

// Send a message to the worker to start the computation.
worker.postMessage({ num: 1000000 });

// In worker.js:

// Define a function to perform the computation.
function compute(num) {
  let sum = 0;
  for (let i = 0; i < num; i++) {
    sum += i;
  }
  return sum;
}

// Define a function to handle messages from the main thread.
onmessage = function(event) {
  const num = event.data.num;
  const result = compute(num);
  postMessage(result);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with a parameter (num) that specifies the number of iterations to perform in the computation. The worker receives this message and performs the computation in the background. When the computation is complete, the worker sends a message back to the main thread with the result. The main thread receives this message and logs the result to the console.

This task involves adding up all the numbers from 0 to a given number. While this task is relatively simple and straightforward for small numbers, it can become computationally intensive for very large numbers.

In the example code we used above, we passed the number 1000000 to the compute() function in the Web Worker. This means that the compute function will need to add up all the numbers from 0 to one million. This involves a very large number of addition operations and can take a significant amount of time to complete, especially if the code is running on a slower computer or in a browser tab that is already busy with other tasks.

By offloading this task to a Web Worker, the main thread of the application can continue to run smoothly without being blocked by the computationally intensive task. This allows the user interface to remain responsive and ensures that other tasks, such as user input or animations, can be handled without delay.

Handling Network Requests

Let us consider a scenario where a web application needs to initiate a significant number of network requests. Performing these requests within the main thread could cause the user interface to become unresponsive and result in a poor user experience. In order to prevent this issue, we can utilize Web Workers to handle these requests in the background. By doing so, the main thread remains free to execute other tasks while the Web Worker handles the network requests simultaneously, resulting in improved performance and a better user experience.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const response = event.data;
  console.log(response);
};

// Send a message to the worker to start the requests.
worker.postMessage({ urls: ['https://api.example.com/foo', 'https://api.example.com/bar'] });

// In worker.js:

// Define a function to handle network requests.
function request(url) {
  return fetch(url).then(response => response.json());
}

// Define a function to handle messages from the main thread.
onmessage = async function(event) {
  const urls = event.data.urls;
  const results = await Promise.all(urls.map(request));
  postMessage(results);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with an array of URLs to request. The worker receives this message and performs the requests in the background using the fetch API. When all requests are complete, the worker sends a message back to the main thread with the results. The main thread receives this message and logs the results to the console.

Parallel Processing

Suppose we have a web application that needs to perform a large number of independent computations. If we perform these computations in sequence in the main thread, the user interface will become unresponsive, and the user experience will suffer. To avoid this, we can use a Web Worker to run the computations off the main thread.

// Create a new Web Worker.
const worker = new Worker('worker.js');

// Define a function to handle messages from the worker.
worker.onmessage = function(event) {
  const result = event.data;
  console.log(result);
};

// Send a message to the worker to start the computations.
worker.postMessage({ nums: [1000000, 2000000, 3000000] });

// In worker.js:

// Define a function to perform a single computation.
function compute(num) {
  let sum = 0;
  for (let i = 0; i < num; i++) {
    sum += i;
  }
  return sum;
}

// Define a function to handle messages from the main thread.
onmessage = function(event) {
  const nums = event.data.nums;
  const results = nums.map(compute);
  postMessage(results);
};

In this example, we create a new Web Worker and define a function to handle messages from the worker. We then send a message to the worker with an array of numbers to compute. The worker receives this message and runs the computations off the main thread using the map method; within the single worker they still run one after another, so true parallelism requires spreading the work across several workers, as sketched below. When all computations are complete, the worker sends a message back to the main thread with the results. The main thread receives this message and logs the results to the console.
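
If the computations are heavy enough to justify the extra threads, here is a sketch of spreading them across one worker per task; it reuses the same worker.js above, which already accepts a nums array:

// Fan the numbers out across several workers, one per task.
async function computeInParallel(nums) {
  const results = await Promise.all(
    nums.map(
      (num) =>
        new Promise((resolve) => {
          const w = new Worker('worker.js');
          w.onmessage = (event) => {
            resolve(event.data[0]); // worker.js replies with an array of results
            w.terminate();          // free the thread once we have the answer
          };
          w.postMessage({ nums: [num] });
        })
    )
  );
  console.log(results);
}

computeInParallel([1000000, 2000000, 3000000]);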

Limitations And Considerations

Web workers are a powerful tool for improving the performance and responsiveness of web applications, but they also have some limitations and considerations that you should keep in mind when using them. Here are some of the most important ones:

Browser Support

Web Workers are supported in all major browsers, including Chrome, Firefox, Safari, and Edge. However, some older or niche browsers do not support Web Workers or offer only limited support.

For a more extensive look at browser support, see Can I Use.

It is important to check browser support for any feature before using it in production code and to test your application thoroughly to ensure compatibility.

Limited Access To The DOM

Web Workers run in a separate thread and do not have access to the DOM or other global objects of the main thread. This means you cannot directly manipulate the DOM from a Web Worker or access global objects like window or document.

To work around this limitation, you can use the postMessage method to communicate with the main thread and update the DOM or access global objects indirectly. For example, you can send data to the main thread using postMessage and then update the DOM or global objects in response to the message.
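
A minimal sketch of that indirection (the #status element is hypothetical): the worker computes, and the main thread performs the DOM update on its behalf.

// In the main thread: apply the worker's result to the DOM.
const worker = new Worker('worker.js');

worker.onmessage = (event) => {
  document.getElementById('status').textContent = event.data;
};

worker.postMessage('start');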

Alternatively, there are some libraries that help solve this issue. For example, the WorkerDOM library enables you to run the DOM in a web worker, allowing for faster page rendering and improved performance.

Communication Overhead

Web Workers communicate with the main thread using the postMessage method, which introduces communication overhead: the time and resources required to establish and maintain communication between two computing contexts, such as a Web Worker and the main thread in a web application. This can delay the processing of messages and potentially slow down the application. To minimize this overhead, you should only send essential data between threads and avoid sending large amounts of data or frequent messages.

Limited Debugging Tools

Debugging Web Workers can be more challenging than debugging code in the main thread, as there are fewer debugging tools available. To make debugging easier, you can use the console API to log messages from the worker thread and use browser developer tools to inspect messages sent between threads.

Code Complexity

Using Web Workers can increase the complexity of your code, as you need to manage communication between threads and ensure that data is passed correctly. This can make it more difficult to write, debug, and maintain your code, so you should carefully consider whether using web workers is necessary for your application.

Strategies For Mitigating Potential Issues With Web Workers

Web Workers are a powerful tool for improving the performance and responsiveness of web applications. However, when using Web Workers, there are several potential issues that can arise. Here are some strategies for mitigating these issues:

Minimize Communication Overhead With Message Batching

Message batching involves grouping multiple messages into a single batch message, which can be more efficient than sending individual messages separately. This approach reduces the number of round-trips between the main thread and Web Workers. It can help to minimize communication overhead and improve the overall performance of your web application.

To implement message batching, you can use a queue to accumulate messages and send them together as a batch when the queue reaches a certain threshold or after a set period of time. Here’s an example of how you can implement message batching in your Web Worker:

// Create a message queue to accumulate messages.
const messageQueue = [];

// Create a function to add messages to the queue.
function addToQueue(message) {
  messageQueue.push(message);

  // Check if the queue has reached the threshold size.
  if (messageQueue.length >= 10) {
    // If so, send the batched messages to the main thread.
    postMessage(messageQueue);

    // Clear the message queue.
    messageQueue.length = 0;
  }
}

// Add a message to the queue.
addToQueue({type: 'log', message: 'Hello, world!'});

// Add another message to the queue.
addToQueue({type: 'error', message: 'An error occurred.'});

In this example, we create a message queue to accumulate messages that need to be sent to the main thread. Whenever a message is added to the queue using the addToQueue function, we check if the queue has reached the threshold size (in this case, ten messages). If so, we send the batched messages to the main thread using the postMessage method. Finally, we clear the message queue to prepare it for the next batch.

By batching messages in this way, we can reduce the overall number of messages sent between the main thread and Web Workers, cutting the communication overhead described earlier.
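
On the receiving side, the main thread then unpacks each batch. A sketch, assuming the worker object from earlier, where handleMessage is a hypothetical per-message handler:

// In the main thread: each message from the worker is an array of queued items.
worker.onmessage = (event) => {
  for (const message of event.data) {
    handleMessage(message); // hypothetical per-message handler
  }
};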

Avoid Synchronous Methods

These are JavaScript functions or operations that block the execution of other code until they are complete. Synchronous methods can block the main thread and cause your application to become unresponsive. To avoid this, you should avoid using synchronous methods in your Web Worker code. Instead, use asynchronous patterns such as setTimeout() or setInterval() to break up long-running computations.

Here is a little demonstration:

// In the worker
self.addEventListener('message', (event) => {
  if (event.data.action === 'start') {
    // Use a setTimeout to perform some computation asynchronously.
    setTimeout(() => {
      const result = doSomeComputation(event.data.data);

      // Send the result back to the main thread.
      self.postMessage({ action: 'result', data: result });
    }, 0);
  }
});

Be Mindful Of Memory Usage

Web Workers have their own memory space, which can be limited depending on the user’s device and browser settings. To avoid memory issues, you should be mindful of the amount of memory your Web Worker code is using and avoid creating large objects unnecessarily. For example:

// In the worker
self.addEventListener('message', (event) => {
  if (event.data.action === 'start') {
    // Use a for loop to process an array of data.
    const data = event.data.data;
    const result = [];

    for (let i = 0; i < data.length; i++) {
      // Process each item in the array and add the result to the result array.
      const itemResult = processItem(data[i]);
      result.push(itemResult);
    }

    // Send the result back to the main thread.
    self.postMessage({ action: 'result', data: result });
  }
});

In this code, the Web Worker processes an array of data and returns the result to the main thread using the postMessage method. However, processing the entire array in a single pass can be problematic.

The reason is that all of the input data, plus the full result array, must be held in memory at the same time. If the data set is very large, this can cause the Web Worker to consume a significant amount of memory, potentially exceeding the memory limit allocated to the Web Worker by the browser.

To mitigate this issue, you can process the data in smaller chunks and post intermediate results back to the main thread as each chunk completes, so the worker never has to hold the entire result in memory at once.
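
Here is a sketch of that chunked approach, reworking the example above. The CHUNK_SIZE value and the 'partialResult' and 'done' action names are illustrative, and processItem stands in for whatever per-item work your worker does:

// In the worker.
const CHUNK_SIZE = 1000; // Tune this to your workload.

self.addEventListener('message', (event) => {
  if (event.data.action === 'start') {
    const data = event.data.data;
    let index = 0;

    function processChunk() {
      const end = Math.min(index + CHUNK_SIZE, data.length);
      const chunkResult = [];

      // Process one chunk of items.
      for (; index < end; index++) {
        chunkResult.push(processItem(data[index]));
      }

      // Post partial results immediately so their memory can be freed.
      self.postMessage({ action: 'partialResult', data: chunkResult });

      if (index < data.length) {
        // Yield to the event loop before processing the next chunk.
        setTimeout(processChunk, 0);
      } else {
        self.postMessage({ action: 'done' });
      }
    }

    processChunk();
  }
});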

Browser Compatibility

Web Workers are supported in most modern browsers, but some older browsers may not support them. To ensure compatibility with a wide range of browsers, you should test your Web Worker code in different browsers and versions. You can also use feature detection to check if Web Workers are supported before using them in your code, like this:

if (typeof Worker !== 'undefined') {
  // Web Workers are supported.
  const worker = new Worker('worker.js');
} else {
  // Web Workers are not supported.
  console.log('Web Workers are not supported in this browser.');
}

This code checks if Web Workers are supported in the current browser and creates a new Web Worker if they are supported. If Web Workers are not supported, the code logs a message to the console indicating that Web Workers are not supported in the browser.

By following these strategies, you can ensure that your Web Worker code is efficient, responsive, and compatible with a wide range of browsers.

Conclusion

As web applications become increasingly complex and demanding, the importance of efficient multithreading techniques — such as Web Workers — is likely to increase. Web Workers are an essential feature of modern web development that allows developers to offload CPU-intensive tasks to separate threads, improving application performance and responsiveness. However, there are significant limitations and considerations to keep in mind when working with Web Workers, such as the lack of access to the DOM and limitations on the types of data that can be passed between threads.

To mitigate these potential issues, developers can follow the strategies mentioned earlier, such as batching messages, avoiding synchronous methods, being mindful of memory usage, and testing across browsers.

Multithreading with Web Workers is likely to remain an important technique for improving web application performance and responsiveness in the future. While there are related techniques for parallelism in JavaScript, such as SharedArrayBuffer with Atomics, Web Workers have several advantages that make them a powerful tool for developers.

Adopting more recent technology such as WebAssembly may open up new opportunities for using Web Workers to offload even more complex and computationally intensive tasks. Overall, Web Workers are likely to continue to evolve and improve in the coming years, helping developers create more efficient and responsive web applications.

Additionally, many libraries and tools exist to help developers work with Web Workers. For example, Comlink and Workerize provide a simplified API for communicating with Web Workers. These libraries abstract away some of the complexity of managing Web Workers, making it easier to leverage their benefits.

Hopefully, this article has given you a good understanding of the potential of web workers for multithreading and how to use them in your own code.

Engagement Tips to Woo Clients and Extend the Honeymoon Phase

Having a brilliant website isn’t enough. Even if your business came with a substantial, pre-existing customer base (and let’s face it, most don’t), continuing communication is essential for retention and growth.

Running a successful business means building and maintaining a connection with your clients – existing and potential.

There are, of course, many ways to reach your business prospects in today’s digital world: paid ads, social media, real-time messaging platforms, chat, and more.

But would you be shocked to learn that email – approaching its 45th birthday! – remains the most used, most successful platform for customer engagement?

According to Tom Wozniak of OPTIZMO Technologies: “As audience tracking and targeting become more challenging, the email address will continue to be the most valuable piece of audience identification data.” [Forbes]

In this article, we’re going to look at why email is the most effective way to promote, proffer, and position your business for prime growth. Plus, we’ve hand-picked a selection of WordPress plugins that handle the various outreach tasks quite effectively.

Okay, off we go to the electronic post office…

Which (Customer Acquisition) Channel is Best to Watch

The ways in which you reach your audience are your customer acquisition channels. They are also the avenues for increasing your customer base.

Though some might rule it out because it’s the oldest, email remains one of the best acquisition channels available. [Lesson: Don’t throw the granddaddy out with the bathwater.]

Email is simultaneously simple yet powerful in terms of content that can be delivered. And it’s separate from third-party elements (eg, social media, search, etc); meaning, there’s no algorithm to work around. It’s a straight shot into the hands (ie, inboxes) of your customers.

Here are some telling stats:

  • By 2025, the number of global email users is expected to reach a total of 4.6 billion [Statista]
  • When it comes to online advertising, email has seen higher click-through rates than on social media [Statista]
  • 59% of marketers say email is their biggest source of ROI [Sopro]
  • 59% of survey respondents say marketing emails influence their purchase decisions [Sopro]

Email is also extremely cost effective, allowing for a minimal investment in a tool/platform that will likely have most of the important features that mirror its high-end counterparts. It’s also easy to scale as your business grows.

With consumers averaging an online time of 397 minutes daily – giving you a golden opportunity of 6+ hours for engagement – there is simply no better way to speak directly with your customers than email. [Oberlo]

email vs social media marketing stats
Email topples social media in marketing stats. [Source]
Three more stats on email’s impressive reach: [OptinMonster]

  • 44% of users check their email for a deal from a company they know, whereas only 4% will go to Facebook
  • 60% of consumers state that they have made a purchase as the result of a marketing message they received by email
  • There are 400 million more email users than social media users

Repeat: Four hundred million MORE. If those numbers don’t convince you, I don’t know what will.

Now that we understand the value of email as a customer acquisition channel, let’s look at the different types of emails you can employ to build relationships and drive sales. Plus, one very important bit of housekeeping that needs attention first.

How to Deliver Successful Results Easily and Reliably

Whether you run a simple website or a large eCommerce store, reliable email-sending is a necessity. As fantastic a CMS as WordPress is, it has multiple limitations when it comes to sending emails.

Because WordPress uses PHP mail functionality to send emails:

  • you can’t easily build HTML templates, embed images, or add attachments; and
  • it lacks proper email headers, which often results in delivery impediments, causing emails to land in spam folders (or not get delivered at all)

Luckily, there’s a simple solution.

smtp illustration (sendinblue)
SMTP server infrastructure. [Source]

Simple Mail Transfer Protocol (ie, SMTP) provides an easy way to improve successful sending of WordPress emails, increasing email deliverability by using authentication and assuring that your intended audience receives what you send.

SMTP can be set up manually using the PHPMailer library (more difficult), or connected through the use of a plugin (easier).

SMTP Plugins

There are a number of plugins for setting up SMTP in your WordPress email. After looking at the most popular, here are the five we like best.

wp mail smtp banner

1. WP Mail SMTP

This plugin sits at the top of this list, allowing over three million WordPress users to send their emails reliably.

When using one of WP Mail SMTP’s built-in SMTP mail provider integrations (see below), emails are sent using the provider’s direct API. This means even if your web host is blocking SMTP ports, your emails still send successfully, helping you fix the not-sending-email issues that are prevalent in WordPress.

An easy-to-use setup wizard and detailed documentation will guide you through the process, and for most options, you can specify the “from name” and “email address” for outgoing emails.

You can send emails using your own or third-party SMTP email server, or by using integrations with popular email providers, such as:

  • SendLayer
  • SMTP.com
  • Sendinblue SMTP
  • Gmail SMTP (Gmail, Google Workspace, G Suite)
  • All Other SMTP

Instead of having to use different SMTP plugins and workflows for different SMTP providers, WP Mail SMTP brings it all into one, providing the ideal SMTP solution for WordPress.

They also offer paid plans, which include additional features (like one-on-one support, white glove setup, and native integrations for Microsoft, Amazon SES, Zoho Mail, etc).

easy wp smtp banner

2. Easy WP SMTP

With 700,000 active installs, Easy WP SMTP resolves email deliverability issues using transactional mailers or an SMTP server.

The plugin offers configuration from a number of popular mailers, including SendLayer, Mailgun, Sendinblue, and more.

Easy WP SMTP also includes debug events that log any failed email sending attempts and the errors that caused them, along with the ability to specify a Reply-To or BCC email address.

Premium, paid versions are also available, and add more features (like shopping cart plugins, priority support, and reports).

post smtp banner

3. Post SMTP Mailer

With active installs at 300,000 and climbing, Post SMTP Mailer is a next-generation WP Mail SMTP plugin that improves email deliverability for your WordPress websites, sending emails to millions of users worldwide.

Post SMTP has a smart setup wizard that covers everything from getting started to sending test emails. It uses a commercial-grade connectivity tester to better diagnose server issues, has a built-in email log that can help with any failed email problems, and uses OAuth 2.0 security to increase the protection of email passwords.

Post SMTP also offers premium upgraded integrations, through a number of pro extensions. These are: Zoho Mail Pro, Mail Control, Twilio, Office 365, and Amazon SES.

branda banner

4. Branda

WPMU DEV’s Branda plugin, known as the White Labeling wunderkind, also has an easy SMTP tool built right in, and is completely free. Setup is a cinch with our easy-to-understand documentation.

Branda allows you to customize every aspect of WordPress to fit your brand. Transform your dashboard, customize system (default) emails, quickly toggle maintenance mode and “coming soon” landing pages, change every aspect of your login screen, remove or replace logos, create color schemes, and much more. Branda has everything to rebrand WordPress for free without touching any code or hacking modifications.

There is also a pro version of Branda, if you’d like to get the full collection of 30+ modules, along with a membership that includes an entire suite of plugins, premium 24/7 live chat support, and more.

wp offload ses lite banner

5. WP Offload SES Lite

WP Offload SES Lite is trusted by more than 20,000 sites to send their email, with good reason – it works exceedingly well.

This plugin is different in that email isn’t sent over SMTP. The developers believe that going the SMTP route makes you prone to hitting rate limits, and that it’s also missing some key features (like an email queue).

WP Offload SES Lite gives you the high deliverability, powerful managed infrastructure, and low cost of Amazon SES, with the support of a quality WordPress plugin that’s easy to set up and notifies you of sending failures.

Some of WP Offload SES Lite’s top features include:

  • Effortless configuration with an easy step-by-step setup wizard
  • Configure the default email address and name that WordPress uses for notifications
  • Set up a custom “Reply To” and “Return Path” address
  • View statistics on your Amazon SES send rate

There is also a pro version, which gives you additional features like premium support, open and click reporting, engagement analysis per specific emails, filter/search functionality, and more.

The Marketing Tools and Strategies You Need to Know

With your WordPress email primed and ready for most effective delivery, let’s turn our attention to the best ways to engage with your audience using email.

marketing illustration (freepik)

First, you need to collect that all-important contact information (email addresses and names, at the very least), so you have a concrete way to reach interested parties.

Second, you’ll want to offer something of value, to establish a sense of fairness/generosity and drive interest in what you do. This free offering to potential customers, in exchange for a piece of their personal information (e.g. an email address or social media follow), is a tried-and-true marketing technique.

Common incentives – like a discount coupon, downloadable, or other item of interest – can be offered as compensation for providing an email address, in order to attract potential customers. Hence the name: lead magnets.

Finally, you’ll want to measure and track which campaigns or giveaways get the best results. That will give you a sense of how your site is performing; the number of visitors to your different pages, and where you’re getting conversions from. All of which help you understand which content performs the best.

This is where lead generation tools come in. They are specifically designed to identify, capture, store, and analyze leads – with the goal of turning visitors into paying customers, and paying customers into repeat business.

Lead Capturing Plugins

Employing tools and services specific to lead generation is a great way to collect the desired information, without requiring any manual work on your part. The tools automate the process, allowing you to focus on other areas of your business.

Various methods for lead capture include:

  • On-page, opt-in forms and sign-up campaigns
  • Email address finders
  • Customer Relationship Management (CRM); sales funnels
  • Communication channels (chat)
  • Advertising (social media or paid ads)

It’s not a bad idea to experiment with different options from the methods above. But for the purposes of this article, our focus will be on-page, opt-in forms, as they are the simplest to employ and incredibly successful.

Why? Well, you’re collecting contact details from people who already have an interest in your business, even if it’s at the most basic level. They’re on your site, and therefore the most likely to provide you with their contact information. After all, they came looking for you, not the other way around.

Additionally, studies have shown that most people are receptive to emails that come from companies they’ve already shown an interest in.

Here are our top 5 picks for lead capturing plugins.

forminator banner

1. Forminator

Of course we’re partial to our very own contact form, payment form, and custom form builder, Forminator, but we’re not the only ones who think so: more than 400,000 happy users agree.

Forms, polls, quizzes… nothing’s off limits with Forminator. Create new campaigns in minutes with the easy-to-use, drag-and-drop form builder, using pre-fab templates or starting from scratch – with the ability to customize settings, style, and behavior.

Forminator is the easiest way to create any form, such as a contact form, order form, payment form, email form, feedback widgets, interactive polls with real-time results, Buzzfeed-style “no wrong answer” quizzes, service estimators, and registration forms with payment options.

Speaking of payments… take donations, down payments, full payments, sell merch, and more with the included SCA-compliant Stripe and PayPal integrations. (No Pro upgrade required!) Just enter your publishable keys to activate the Forminator payment module for both fixed and variable payments.

Forminator comes stacked with crowd-favorite third-party integrations – email services, CRM, storage, and project managers such as HubSpot, Google Sheets, Trello, MailChimp, AWeber, Slack, and any generic webhooks (such as Zapier).

But wait – there’s more! Forminator also has these amazing features:

  • Gutenberg Block – say goodbye to shortcodes and quickly add forms to posts with the Forminator block for Gutenberg
  • Email Routing and Pre-Populate – make your site more efficient, from visitor input to email response times; use query strings to pre-fill visitor information and deliver forms direct to specific teams with email routing, auto-response and conditions
  • User Front End Post Submissions – visitors can submit post ideas from the front end of your site so you can easily curate and publish their thoughts
  • Captchas – stop the crazy bots without making it hard on your visitors (ie, no more hard-to-read random phrases)
  • Collect, Track and GDPR ready – store and organize submissions to sort, analyze, and manage responses; all while complying with the GDPR and other legal privacy policies

There is a pro version as well, which contains all the same features as the free version, plus the additional “E-Signature” and “Stripe Subscriptions” features.

formidable banner

2. Formidable Forms

Formidable is a solutions-focused WordPress form plugin. Use drag and drop to create a contact form, survey, quiz, registration form, payment form, lead form, or calculator form.

Formidable is 100% mobile responsive, so your forms look great on all devices (desktop, laptop, tablets, and smartphones). It’s also optimized for speed and maximum server performance.

This free version of Formidable comes with a variety of features, like advanced email subscription forms, multi-page forms, a smart form with conditional logic, stack on repeater fields, payment integrations, form templates, relationships, and cascading dropdown fields.

Submissions are stored in your WordPress database so you won’t lose any leads, and quiz and survey entries can be viewed right from your WordPress dashboard. Also, the form generator is GDPR-friendly (even though entries are saved), and you can turn off IP tracking or stop saving submissions entirely.

Create a payment form and accept credit card payments right from your website, with seamless integration with PayPal, Stripe, and Authorize.net. You can even create a WooCommerce form with custom fields.

There is a pro version as well, that includes many more features and functionalities that help you build more powerful and larger applications.

ninja forms banner

3. Ninja Forms

Design beautiful, complex forms with a dedicated support team at your back.

Easy drag-and-drop fields, row and column layouts, multi-page forms, conditional forms… you don’t have to be a designer to create forms that will blend beautifully with your WordPress website.

You can accept PayPal and credit card payments securely and easily from any of your WordPress forms: single payments, subscriptions, fixed, variable, or user entered amounts. Give your customers or donors all the options, or just one with a PayPal form, Stripe form, and more.

Grow your mailing lists and bring in new leads using integration with MailChimp, Constant Contact, Campaign Monitor, Salesforce CRM, Zoho CRM, Insightly CRM, and more.

Ninja Forms is also GDPR compliant, as it doesn’t collect or store personally identifiable information, or any information, from your forms. Easy-to-use templates are included for Right to Be Forgotten and Data Export requests, and tie into native WordPress GDPR features for automated compliance.

Includes pre-built templates for a contact form, registration form, application form, MailChimp form, quote request form, PayPal form, Stripe form, and more. Also mobile responsive and design-adaptable to fit in with any theme or brand.

Ninja Forms offers additional features and upgrades in two ways: per add-on, or through a membership (different tiers vary in the number of allowable sites and add-ons).

contact form 7 banner

4. Contact Form 7

At five million+ active installs, Contact Form 7 is the OG WordPress plugin for contact forms.

It has stood the test of time, still able to create simple and multiple contact forms, while allowing for the customisation of the form and the mail contents flexibly with simple markup.

Forms support Ajax-powered submitting, CAPTCHA, and Akismet spam filtering, and do not track user data or use cookies. However, activating certain features may involve sending personal data to service providers (eg: reCAPTCHA, Stripe).

While the plugin boasts massive numbers and is still popular, it is becoming less so as more advanced, feature-rich competitors have become available, especially since most of them are also free to use.

Additionally, unlike newer plugins, an additional plugin is needed (they make one called Flamingo) just to save submitted messages from contact forms in the database.

They have also started requesting contributions from users, citing the difficulty in continuing support and development of a free plugin.

hustle banner

5. Hustle

Hustle is the ultimate marketing plugin for building a mailing list and converting site traffic.

Incredibly versatile and engaging, Hustle has a myriad of options and customizations.

Easily grow your mailing list or display targeted ads across your site with popups, opt-ins, slide-ins, widgets, and shortcodes.

Build a social following with Hustle’s diverse social sharing capabilities.

Choose colors, animations, layouts, drop shadows, and display conditions for all your marketing modules from easy-to-use design settings. (There’s even CSS if you really want to go all out.) All a cinch with Hustle’s flexible appearance settings.

Default layouts and templates are fully mobile responsive, while allowing for granular adjustments (margins, padding, borders, container sizing) so you can make every module your own.

Target visitors with email opt-ins or ads using Hustle’s behavior and condition settings, and set up intelligent conditions if desired as well (e.g. specific pages/posts, visitor device/browser, country, browser cookie, etc).

Smart triggers allow you to set up a range of options for popups and slide-ins, including time on page, scroll, exit intent, and more.

Schedule when you want your marketing modules to deploy by selecting start/end dates, or show them on specific days of the week or times of day, with custom time zones.

Easily follow up on user engagement with manual or automated email messages, and build your following on social networks with floating widgets and shortcodes to add followers.

Hustle smoothly integrates with popular form builders (like Forminator), to embed those forms/polls/quizzes into popups and slide-ins for interactive lead generation.

While you’re at it, integrate Hustle with an email service; 19 of the most popular are offered (including AWeber, MailChimp, Sendinblue, MailPoet, Zapier, and more).

Rounding out the additional features you get with Hustle:

  • Gutenberg WP editor block
  • ReCAPTCHA spam warrior
  • Ability to override Ad Blockers that try to prevent popups and slide-ins
  • Intelligent tracking on each module (including conversion stats, insightful charts, custom dashboard widget)

There is also a pro version of Hustle, which gives you all the same settings and options as the free version does. The difference is, Hustle Pro offers unlimited opt-ins, custom content, and social sharing, whereas the free version allows three of each type (popups, slide-ins, social share bars, and embeds).

Email Campaigns and Analytics Plugins

Once you’ve got those all-important email addresses collected, it’s time to set your sights on the various campaigns you can use to turn casual interest into a revenue stream.

There are a number of ways to use email campaigns to engage your audience and increase sales.

Blog posts can help by specifically targeting your audience, catering your content to them. It’s a proven way to align with your audience by providing (free) information of value to them, while keeping the connection warm. Plus, they can be kept in a devoted section of your website, making it easy for both old and new customers to partake in the historical canon unique to your business.

Newsletters and announcements are great for keeping your audience up to date on any site changes, or to promote particular products or discounts. Announcements could include down-time notices (for example, planned changes where your site will be offline briefly), or information about sales, coupons, special offers, etc – anything that is a change in your ‘norm’ that falls within a specific timeframe.

We looked at some email plugins early on, where the purpose was to improve email delivery by directing through SMTP servers or transactional mailers.

There is another category of full-featured email plugins/services that provide additional marketing, campaigning, and analysis features on top of trustworthy delivery. These can be quite handy if you want to avail yourself of pre-designed email templates, auto sending schedules, and tracking customer journeys.

Here are our favorites in this category.

mailpoet banner

1. MailPoet

More than 600,000 websites use MailPoet to keep in touch with their subscribers, delivering 30 million+ emails each month to inboxes, not spam boxes. Enjoy high open rates with their unmatched deliverability rate and rock solid infrastructure.

MailPoet works seamlessly with your favorite CMS so you can start sending emails right away. Quickly add content and images directly from your media library. No need to upload files to third-party services when it’s all right there, ready to use in your WordPress dashboard.

From first hello to loyal customer appreciation, send emails to the right people at the right time. Welcome new subscribers with an automated series of onboarding emails and enjoy open rates of 40% and higher.

Automatically send email updates to let subscribers know about your latest blog posts, in your choice of sending schedule (daily, weekly, monthly) and bring them back to your website.

Create email updates and newsletters your subscribers can’t wait to open with beautifully designed templates you can customize to match your personality (and brand). With plenty of design options and advanced features, you can choose from a template, customize whatever you need, then send it out. Quick previews allow you to always see how your emails look before hitting send.

MailPoet is available via paid plans as well, which add features and functionality like more subscribers, unlimited emails, advanced analytics, the ability to remove branding, priority support, and more.

hubspot banner

2. HubSpot

HubSpot is an extremely popular, all-in-one CRM platform with tools and integrations for marketing, sales, and customer service.

The CRM in HubSpot’s WordPress plugin is your site’s command center, with 360-degree views of your customers, where you can easily:

  • Manage contacts (CRM)
  • Engage visitors with live chat and chatbots
  • Add beautiful forms to pages; create engaging email marketing campaigns
  • Capture leads with custom or existing forms; send newsletters and automated marketing campaigns
  • Track site health with easy-to-understand analytics, directly from the dashboard
  • See a unified timeline of a contact’s page views, past conversations, and more in a WordPress CRM

You’ve also got full-service email, newsletter, and marketing automation software, from which you can build professional emails in minutes, then send them to your entire contact database.

Features here include:

  • 20+ pre-designed email templates to match your campaign goals
    (Choose from templates such as newsletters, ebooks, welcome emails, and more)
  • Drag and drop email builder; adjust typography, designs, colors, and more to create campaigns your subscribers will love
  • Email automation, tracking, and A/B testing
  • Send emails anytime someone fills out a form or engages with your live chat/chatbots
  • Send messages immediately or use email scheduling to send messages later
  • Email tracking assures all of your emails are logged in your database; measure engagement of each with reports for open rate and click rate

Forms and popups are included, with a variety of templates (contact us, newsletter signup, ebook download, etc) and display options (embed, standalone page, pop-up box, dropdown banner, etc). Choose from a variety of settings, color schemes, and fonts — or start from scratch.

HubSpot also allows for integrations with other WP form builders and lead generators (like Forminator and Hustle 🙂).

And there’s much more, such as:

  • Live chat and chatbots (with custom styling, real-time messaging, Slack integration, 24/7 live support on autopilot)
  • Analytics (email, traffic, WP; marketing, sales follow-ups; time-onsite)
  • Reports (blog posts, landing pages, email campaigns)
  • Seamless use of more than 1030 integrations – including social media, ads (Facebook, Google, LinkedIn), Hotjar, YouTube, Zoom, Gmail, Hustle, MailChimp, Sendinblue, Shopify, WooCommerce, Forminator, LiveChat… and the list goes on

In addition to their free version, which offers a taste of limited features, HubSpot offers a number of paid version packages – Starter, Starter CRM Suite, Business, Professional, and Enterprise – with many different combinations of features and services to suit all needs.

optinmonster banner

3. OptinMonster

OptinMonster is a customer acquisition and lead generation plugin. As a SaaS, its use requires an OptinMonster account, but that’s quick to set up.

OptinMonster’s popup maker allows you to create popup campaigns, email subscription forms, sticky announcement bars like hello bar, gamified spin-a-wheel opt-in forms, and other types of interactive popups for your site. Use the drag-and-drop editor to customize the look and feel of your campaigns, or choose from hundreds of templates.

OptinMonster also offers mobile popups so your marketing messages look great on all devices (mobile, tablet, laptop, and desktop). It’s also optimized for both web and server performance.

Popup options include:

  • Lightbox
  • Floating Bar
  • Slide-ins
  • Fullscreen Welcome Mats
  • Gamified Spin a Wheel Popup
  • Countdown Timers

OptinMonster also has targeting behaviors, like page level targeting, geolocation targeting, popup behavior automation, and WooCommerce. Plus trigger behaviors, like exit intent, scroll trigger, and time-on-site popups.

There are also quite a few email and CRM integrations available, such as Constant Contact, MailChimp, AWeber, and more.

A free account includes three campaigns and up to 500 campaign impressions, which never expire.

OptinMonster also offers premium, paid versions which include more features and remove the limits imposed in the free version.

sendinblue banner

4. Sendinblue

Sendinblue is a powerful all-in-one marketing platform, trusted by more than 165,000 companies around the world to deliver their emails and SMS messages.

Sendinblue optimizes deliverability using a proprietary infrastructure over SMTP, with options that include email, SMS, Facebook, chat, CRM, and marketing automation.

The Sendinblue WordPress plugin uses their own API to synchronize contacts, send emails and get statistics. Synchronization is automatic, so it doesn’t matter whether your lists were uploaded on your WordPress interface or on your Sendinblue account: they will always remain up-to-date on both sides.

Sendinblue’s free account takes less than two minutes to set up, and allows you to send up to 300 emails per day on their free (forever) plan.

Sendinblue integrates with most lead capture and advanced form builder plugins, but also contains their own native subscription forms, with the following features:

  • Form designer with WYSIWYG and direct HTML, and CSS editing (if desired)
  • Integration as widget or shortcode
  • Send a confirmation email – you choose the template and the sender
  • Use a double opt-in confirmation – you choose the template and the sender
  • URL redirection
  • Confirmation / error message customization

The following additional options are included as well:

  • Contact lists (unlimited custom fields; CSV and TXT import; advanced segmentation)
  • Marketing campaigns (drag-and-drop tools; template library; advanced scheduling)
  • Transactional emails (with auto replacement of default SMTP)
  • Statistics (real-time and exhaustive)

Sendinblue offers a free (forever) plan that includes 9000 emails per month and unlimited contacts, with no hidden costs.

They offer premium, paid plans as well, which remove the limits, and include additional features (like removing the Sendinblue logo, A/B testing, marketing automation, priority support, and more).

Follow the Leader to the Very Top

Lead generation is often the difference between the smashing success or abject failure of a business.

And while there are many components that go into lucrative marketing, you can tap into WordPress’s generous supply of free plugins to eliminate the heavy lifting.

As shown in this post, email still holds the #1 spot when it comes to customer acquisition channels, with significant reach and conversion rates.

Determine the plugins that best meet your needs, and get going on cultivating your contact lists, securing good delivery with SMTP sends, and setting up email campaigns that drive customer engagement, traffic, retention, and loyalty.

If you want to really ensure best results, make sure you have trusted, dedicated hosting (we’re a top pick for web developers), speed and SEO optimization (our memberships come with a suite of premium plugins, including performance and security), and world-class, always-on support.

Spring Cloud: How To Deal With Microservice Configuration (Part 1)

Configuring a software system in a monolithic approach does not pose particular problems. To make the configuration properties available to the system, we can store them in a file inside an application folder, somewhere in the filesystem, or as OS environment variables. Microservice configuration is a more complex subject. We have to deal with what can be a huge number of totally independent services, each with its own configuration. We could even face a scenario in which several instances of the same service need different configuration values.

In such a situation, a way to centralize and simplify configuration management would be of great importance. Spring Cloud has its own module to solve these problems, named Spring Cloud Config. This module provides an implementation of a server that exposes an API to retrieve the configuration information, usually stored in some remote repository like Git, and, at the same time, it gives us the means to implement the client side meant to consume the services of that API.

The Key To Good Component Design Is Selfishness

When developing a new feature, what determines whether an existing component will work or not? And when a component doesn’t work, what exactly does that mean?

Does the component functionally not do what it’s expected to do, like a tab system that doesn’t switch to the correct panel? Or is it too rigid to support the designed content, such as a button with an icon after the content instead of before it? Or perhaps it’s too pre-defined and structured to support a slight variant, like a modal that always had a header section, now requiring a variant without one?

Such is the life of a component. All too often, they’re built for a narrow objective, then hastily extended for minor one-off variations again and again until it no longer works. At that point, a new component is created, the technical debt grows, the onboarding learning curve becomes steeper, and the maintainability of the codebase is more challenging.

Is this simply the inevitable lifecycle of a component? Or can this situation be averted? And, most importantly, if it can be averted, how?

Selfishness. Or perhaps, self-interest. Better yet, maybe a little bit of both.

Far too often, components are far too considerate. Too considerate of one another and, especially, too considerate of their own content. In order to create components that scale with a product, the name of the game is self-interest bordering on selfishness — cold-hearted, narcissistic, the-world-revolves-around-me selfishness.

This article isn’t going to settle the centuries-old debate about the line between self-interest and selfishness. Frankly, I’m not qualified to take part in any philosophical debate. However, what this article is going to do is demonstrate that building selfish components is in the best interest of every other component, designer, developer, and person consuming your content. In fact, selfish components create so much good around them that you could almost say they’re selfless.

I don’t know 🤷‍♀️ Let’s look at some components and decide for ourselves.

Note: All code examples and demos in this article will be based on React and TypeScript. However, the concepts and patterns are framework agnostic.

The Consideration Iterations

Perhaps, the best way to demonstrate a considerate component is by walking through the lifecycle of one. We’ll be able to see how they start small and functional but become unwieldy once the design evolves. Each iteration backs the component further into a corner until the design and needs of the product outgrow the capabilities of the component itself.

Let’s consider the modest Button component. It’s deceptively complex and quite often trapped in the consideration pattern, and therefore, a great example of working through.

Iteration 1

While these sample designs are quite barebones, like not showing various :hover, :focus, and disabled states, they do showcase a simple button with two color themes.

At first glance, it’s possible the resulting Button component could be as barebones as the design.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  text: string;
  theme: 'primary' | 'secondary';
}
<Button
  onClick={someFunction}
  text="Add to cart"
  theme="primary"
/>

It’s possible, and perhaps even likely, that we’ve all seen a Button component like this. Maybe we’ve even made one like it ourselves. Some of the namings may be different, but the props, or the API of the Button, are roughly the same.

In order to meet the requirements of the design, the Button defines props for the theme and text. This first iteration works and meets the current needs of both the design and the product.

However, the current needs of the design and product are rarely the final needs. When the next design iterations are created, the Add to cart button now requires an icon.

Iteration 2

After validating the UI of the product, it was decided that adding an icon to the Add to cart button would be beneficial. The designs explain, though, that not every button will include an icon.

Returning to our Button component, its props can be extended with an optional icon prop which maps to the name of an icon to conditionally render.

type ButtonProps = {
  theme: 'primary' | 'secondary';
  text: string;
  icon?: 'cart' | '...all-other-potential-icon-names';
}
<Button
  theme="primary"
  onClick={someFunction}
  text="Add to cart"
  icon="cart"
/>

Whew! Crisis averted.

With the new icon prop, the Button can now support variants with or without an icon. Of course, this assumes the icon will always be shown at the end of the text, which, to the surprise of nobody, is not the case when the next iteration is designed.

Iteration 3

The previous Button component implementation included the icon at the text’s end, but the new designs require an icon to optionally be placed at the start of the text. The single icon prop will no longer fit the needs of the latest design requirements.

There are a few different directions that can be taken to meet this new product requirement. Maybe an iconPosition prop can be added to the Button. But what if there comes a need to have an icon on both sides? Maybe our Button component can get ahead of this assumed requirement and make a few changes to the props.

The single icon prop will no longer fit the needs of the product, so it’s removed. In its place, two new props are introduced, iconAtStart and iconAtEnd.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
}
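
Usage with the new props, mirroring the earlier examples, might look like this:

<Button
  theme="primary"
  onClick={someFunction}
  text="Add to cart"
  iconAtStart="cart"
/>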

After refactoring the existing uses of Button in the codebase to use the new props, another crisis is averted. Now, the Button has some flexibility. It’s all hardcoded and wrapped in conditionals within the component itself, but surely, what the UI doesn’t know can’t hurt it.

Up until this point, the Button icons have always been the same color as the text. It seems reasonable and like a reliable default, but let’s throw a wrench into this well-oiled component by defining a variation with a contrasting color icon.

Iteration 4

In order to provide a sense of feedback, this confirmation UI stage was designed to be shown temporarily when an item has been added to the cart successfully.

Maybe this is a time when the development team chooses to push back against the product requirements. But despite the push, the decision is made to move forward with providing color flexibility to Button icons.

Again, multiple approaches can be taken for this. Maybe an iconClassName prop is passed into the Button to have greater control over the icon’s appearance. But there are other product development priorities, and instead, a quick fix is done.

As a result, an iconColor prop is added to the Button.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  text: string;
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
}

With the quick fix in place, the Button icons can now be styled differently than the text. The UI can provide the designed confirmation, and the product can, once again, move forward.

Of course, as product requirements continue to grow and expand, so do their designs.

Iteration 5

With the latest designs, the Button must now be used with only an icon. This can be done in a few different approaches, yet again, but all of them require some amount of refactoring.

Perhaps a new IconButton component is created, duplicating all other button logic and styles into two places. Or maybe that logic and styles are centralized and shared across both components. However, in this example, the development team decides to keep all the variants in the same Button component.

Instead, the text prop is marked as optional. This could be as quick as marking it as optional in the props but could require additional refactoring if there’s any logic expecting the text to exist.

But then comes the question, if the Button is to have only an icon, which icon prop should be used? Neither iconAtStart nor iconAtEnd appropriately describes the Button. Ultimately, it’s decided to bring the original icon prop back and use it for the icon-only variant.

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  iconAtStart?: 'cart' | '...all-other-potential-icon-names';
  iconAtEnd?: 'cart' | '...all-other-potential-icon-names';
  iconColor?: 'green' | '...other-theme-color-names';
  icon?: 'cart' | '...all-other-potential-icon-names';
  text?: string;
}

Now, the Button API is getting confusing. Maybe a few comments are left in the component to explain when and when not to use specific props, but the learning curve is growing steeper, and the potential for error is increasing.

For example, without adding great complexity to the ButtonProps type, there is no stopping a person from using the icon and text props at the same time (a sketch of what that type-level guard could look like follows below). This could either break the UI or be resolved with greater conditional complexity within the Button component itself. Additionally, the icon prop can be used with either or both of the iconAtStart and iconAtEnd props as well. Again, this could either break the UI or be resolved with even more layers of conditionals within the component.
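
For illustration, here is what that type-level guard might look like: a discriminated union that forbids mixing the icon-only variant with the text variant. It works, but it shows how much type machinery the content-aware API demands, and every new variant multiplies the union further:

type IconName = 'cart' | '...all-other-potential-icon-names';

// The icon-only variant forbids text and the positioned icon props.
type IconOnlyProps = {
  icon: IconName;
  text?: never;
  iconAtStart?: never;
  iconAtEnd?: never;
};

// The text variant forbids the standalone icon prop.
type TextProps = {
  icon?: never;
  text: string;
  iconAtStart?: IconName;
  iconAtEnd?: IconName;
};

type ButtonProps = {
  theme: 'primary' | 'secondary' | 'tertiary';
  iconColor?: 'green' | '...other-theme-color-names';
} & (IconOnlyProps | TextProps);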

Our beloved Button has become quite unmanageable at this point. Hopefully, the product has reached a point of stability where no new changes or requirements will ever happen again. Ever.

Iteration 6

So much for never having any more changes. 🤦

This next and final iteration of the Button is the proverbial straw that breaks the camel’s back. In the Add to cart button, if the current item is already in the cart, we want to show its quantity on the button. On the surface, this is a straightforward change of dynamically building the text prop string. But the component breaks down because the current item count requires a different font weight and an underline. Because the Button accepts only a plain text string and no other child elements, the component no longer works.

Would this design have broken the Button if this was the second iteration? Maybe not. The component and codebase were both much younger then. But the codebase has grown so much by this point that refactoring for this requirement is a mountain to climb.

This is when one of the following things will likely happen:

  1. A much larger refactor is done to move the Button away from a text prop and toward accepting children, or a component or markup, as the text value.
  2. The Button is split into a separate AddToCart button with an even more rigid API specific to this one use case. This either duplicates any button logic and styles into multiple places or extracts them into a centralized file to share everywhere.
  3. The Button is deprecated, and a ButtonNew component is created, splitting the codebase, introducing technical debt, and increasing the onboarding learning curve.

None of these outcomes is ideal.

So, where did the Button component go wrong?

Sharing Is Impairing

What is the responsibility of an HTML button element exactly? Narrowing down this answer will shine light onto the issues facing the previous Button component.

The responsibilities of the native HTML button element go no further than:

  1. Display, without opinion, whatever content is passed into it.
  2. Handle native functionality and attributes such as onClick and disabled.

Yes, each browser has its own version of how a button element may look and display content, but CSS resets are often used to strip those opinions away. As a result, the button element boils down to little more than a functional container for triggering events.

The onus of formatting any content within the button isn’t the responsibility of the button but of the content itself. The button shouldn’t care. The button should not share the responsibility for its content.

The core issue with the considerate component design is that component props define the content and not the component itself.

In the previous Button component, the first major limitation was the text prop. From the first iteration, a limitation was placed on the content of the Button. While the text prop fit with the designs at that stage, it immediately deviated from the two core responsibilities of the native HTML button. It immediately forced the Button to be aware of and responsible for its content.

In the following iterations, the icon was introduced. While it seemed reasonable to bake a conditional icon into the Button, doing so also deviated from the core button responsibilities and limited the use cases of the component. In later iterations, the icon needed to be available in different positions, and the Button props were forced to expand to style the icon.

When the component is responsible for the content it displays, it needs an API that can accommodate all content variations. Eventually, that API will break down because the content will forever and always change.

Introducing The Me In Team

There’s an adage used in all team sports, “There’s no ‘I’ in a team.” While this mindset is noble, some of the greatest individual athletes have embodied other ideas.

Michael Jordan famously responded with his own perspective, “There’s an ‘I’ in win.” The late Kobe Bryant had a similar idea, “There’s an ‘M-E’ in [team].”

Our original Button component was a team player. It shared the responsibility of its content until it reached the point of deprecation. How could the Button have avoided such constraints by embodying a “M-E in team” attitude?

Me, Myself, And UI
When the component is responsible for the content it displays, it will break down because the content will forever and always change.

How would a selfish component design approach have changed our original Button?

Keeping the two core responsibilities of the HTML button element in mind, the structure of our Button component would have immediately been different.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  theme: 'primary' | 'secondary' | 'tertiary';
}
<Button
  onClick={someFunction}
  theme="primary"
>
  <span>Add to cart</span>
</Button>

By removing the original text prop in lieu of limitless children, the Button is able to align with its core responsibilities. The Button can now act as little more than a container for triggering events.

By moving the Button to its native approach of supporting child content, the various icon-related props are no longer required. An icon can now be rendered anywhere within the Button regardless of size and color. Perhaps the various icon-related props could be extracted into their own selfish Icon component.
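
Such an Icon component might look something like the following minimal sketch, which assumes an SVG sprite is available; the prop names are illustrative:

type IconProps = {
  name: 'cart' | '...all-other-potential-icon-names';
  size?: 'sm' | 'md' | 'lg';
  color?: 'green' | '...other-theme-color-names';
};

// The Icon is responsible only for itself: which glyph it renders
// and how it looks. It knows nothing about buttons or their content.
export function Icon({ name, size = 'md', color }: IconProps) {
  return (
    <svg className={`icon icon--${size}`} fill={color} aria-hidden="true">
      <use href={`#icon-${name}`} />
    </svg>
  );
}

With that in place, the Button and Icon compose naturally: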

<Button
  onClick={someFunction}
  theme="primary"
>
  <Icon name="cart" />
  <span>Add to cart</span>
</Button>

With the content-specific props removed from the Button, it can now do what all selfish characters do best, think about itself.

// First, extend native HTML button attributes like onClick and disabled from React.
type ButtonProps = React.ComponentPropsWithoutRef<"button"> & {
  size: 'sm' | 'md' | 'lg';
  theme: 'primary' | 'secondary' | 'tertiary';
  variant: 'ghost' | 'solid' | 'outline' | 'link'
}

With an API specific to itself and independent content, the Button is now a maintainable component. The self-interest props keep the learning curve minimal and intuitive while retaining great flexibility for various Button use cases.

Button icons can now be placed at either end of the content.

<Button
  onClick={someFunction}
  size="md"
  theme="primary"
  variant="solid"
>
  <Box display="flex" gap="2" alignItems="center">
    <span>Add to cart</span>
    <Icon name="cart" />
  </Box>
</Button>

Or, a Button could have only an icon.

<Button
  onClick={someFunction}
  size="sm"
  theme="secondary"
  variant="solid"
>
  <Icon name="cart" />
</Button>

However, a product may evolve over time, and selfish component design improves the ability to evolve along with it. Let’s go beyond the Button and into the cornerstones of selfish component design.

The Keys to Selfish Design

Much like when creating a fictional character, it’s best to show, not tell, the reader that they’re selfish. By reading about the character’s thoughts and actions, their personality and traits can be understood. Component design can take the same approach.

But how exactly do we show in a component’s design and use that it is selfish?

HTML Drives The Component Design

Many times, components are built as direct abstractions of native HTML elements like a button or img. When this is the case, let the native HTML element drive the design of the component.

Specifically, if the native HTML element accepts children, the abstracted component should as well. Every aspect of a component that deviates from its native element is something that must be learned anew.

When our original Button component deviated from the native behavior of the button element by not supporting child content, it not only became rigid but it required a mental model shift just to use the component.

There has been a lot of time and thought put into the structure and definitions of HTML elements. The wheel doesn’t need to be reinvented every time.

Children Fend For Themselves

If you’ve ever read Lord of the Flies, you know just how dangerous it can be when a group of children is forced to fend for themselves. However, in the case of selfish component design, we’ll be doing exactly that.

As shown in our original Button component, the more it tried to style its content, the more rigid and complicated it became. When we removed that responsibility, the component was able to do a lot more but with a lot less.

Many elements are little more than semantic containers. It’s not often we expect a section element to style its content. A button element is just a very specific type of semantic container. The same approach can apply when abstracting it to a component.

Components Are Singularly Focused

Think of selfish component design as arranging a bunch of terrible first dates. A component’s props are like the conversation that is entirely focused on them and their immediate responsibilities:

  • How do I look?
    Props need to feed the ego of the component. In our refactored Button example, we did this with props like size, theme, and variant.
  • What am I doing?
    A component should only be interested in what it, and it alone, is doing. Again, in our refactored Button component, we do this with the onClick prop. As far as the Button is concerned, if there’s another click event somewhere within its content, that’s the content’s problem. The Button does. not. care.
  • When and where am I going next?
    Any jet-setting traveler is quick to talk about their next destination. For components like modals, drawers, and tooltips, when and where they’re going is just as gravely important. Components like these are not always rendered in the DOM. This means that in addition to knowing how they look and what they do, they need to know when and where to be. In other words, this can be described with props like isShown and position.

Composition Is King

Some components, such as modals and drawers, can often contain different layout variations. For example, some modals will show a header bar while others do not. Some drawers may have a footer with a call to action. Others may have no footer at all.

Instead of defining each layout in a single Modal or Drawer component with conditional props like hasHeader or showFooter, break the single component into multiple composable child components.

<Modal>
  <Modal.CloseButton />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
</Modal>
<Drawer>
  <Drawer.Main> ... </Drawer.Main>
  <Drawer.Footer> ... </Drawer.Footer>
</Drawer>

By using component composition, each individual component can be as selfish as it wants to be and used only when and where it’s needed. This keeps the root component’s API clean and can move many props to their specific child component.

Let’s explore this and the other keys to selfish component design a bit more.

You’re So Vain, You Probably Think This Code Is About You

Perhaps the keys of selfish design make sense when looking back at the evolution of our Button component. Nevertheless, let’s apply them again to another commonly problematic component — the modal.

For this example, we have the benefit of foresight in the three different modal layouts. This will help steer the direction of our Modal while applying each key of selfish design along the way.

First, let’s go over our mental model and break down the layouts of each design.

In the Edit Profile modal, there are defined header, main and footer sections. There’s also a close button. In the Upload Successful modal, there’s a modified header with no close button and a hero-like image. The buttons in the footer are also stretched. Lastly, in the Friends modal, the close button returns, but now the content area is scrollable, and there’s no footer.

So, what did we learn?

We learned that the header, main and footer sections are interchangeable. They may or may not exist in any given view. We also learned that the close button functions independently and is not tied to any specific layout or section.

Because our Modal can be composed of interchangeable layouts and arrangements, that’s our sign to take a composable child component approach. This will allow us to plug and play pieces into the Modal as needed.

This approach allows us to very narrowly define the responsibilities of our root Modal component.

Conditionally render with any combination of content layouts.

That’s it. So long as our Modal is just a conditionally-rendered container, it will never need to care about or be responsible for its content.

With the core responsibility of our Modal defined, and the composable child component approach decided, let’s break down each composable piece and its role.

  • <Modal>: The entry point of the entire Modal component. This container is responsible for when and where to render, how the modal looks, and what it does, like handling accessibility considerations.
  • <Modal.CloseButton />: An interchangeable Modal child component that can be included only when needed. This component will work similarly to our refactored Button component. It will be responsible for how it looks, where it’s shown, and what it does.
  • <Modal.Header>: The header section will be an abstraction of the native HTML header element. It will be little more than a semantic container for any content, like headings or images, to be shown.
  • <Modal.Main>: The main section will be an abstraction of the native HTML main element. It will be little more than a semantic container for any content.
  • <Modal.Footer>: The footer section will be an abstraction of the native HTML footer element. It will be little more than a semantic container for any content.

With each component and its role defined, we can start creating props to support those roles and responsibilities.

Modal

Earlier, we defined the barebones responsibility of the Modal: knowing when to conditionally render. This can be achieved with a single prop, isShown; whenever it’s true, the Modal and its content will render.

type ModalProps = {
  isShown: boolean;
}
<Modal isShown={showModal}>
  ...
</Modal>

Any styling and positioning can be done with CSS in the Modal component directly. There’s no need to create specific props at this time.
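Putting that together, a minimal sketch of the container could look like this, extending the ModalProps above with children (the class name and dialog attributes are assumptions about the styling and accessibility handling):

import { ReactNode } from 'react';

type ModalProps = {
  isShown: boolean;
  children?: ReactNode;
};

export function Modal({ isShown, children }: ModalProps) {
  // When and where: render nothing unless the modal should be shown
  if (!isShown) return null;

  return (
    <div className="modal" role="dialog" aria-modal="true">
      {children}
    </div>
  );
}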

Modal.CloseButton

Given our previously refactored Button component, we know how the CloseButton should work. Heck, we can even use our Button to build our CloseButton component.

import { Button, ButtonProps } from 'components/Button';

export function CloseButton({ onClick, ...props }: ButtonProps) {
  return (
    <Button {...props} onClick={onClick} variant="ghost" theme="primary" />
  )
}
<Modal>
  <Modal.CloseButton onClick={closeModal} />
</Modal>

Modal.Header, Modal.Main, Modal.Footer

Each of the individual layout sections, Modal.Header, Modal.Main, and Modal.Footer, can take direction from their HTML equivalents, header, main, and footer. Each of these elements supports any variation of child content, and therefore, our components will do the same.

There are no special props needed. They serve only as semantic containers.

<Modal>
  <Modal.CloseButton onClick={closeModal} />
  <Modal.Header> ... </Modal.Header>
  <Modal.Main> ... </Modal.Main>
  <Modal.Footer> ... </Modal.Footer>
</Modal>
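Under the hood, each section can be a thin wrapper around its native element, attached to the root component to enable the dot notation. Here is one hedged way to wire that up, assuming the Modal and CloseButton sketched earlier:

import { ReactNode } from 'react';
import { Modal as ModalRoot } from './Modal'; // the container sketched earlier (assumed path)
import { CloseButton } from './CloseButton'; // from the snippet above (assumed path)

type SectionProps = { children?: ReactNode };

function ModalHeader({ children }: SectionProps) {
  return <header className="modal-header">{children}</header>;
}

function ModalMain({ children }: SectionProps) {
  return <main className="modal-main">{children}</main>;
}

function ModalFooter({ children }: SectionProps) {
  return <footer className="modal-footer">{children}</footer>;
}

// Object.assign attaches the pieces as static properties, enabling
// <Modal.CloseButton>, <Modal.Header>, <Modal.Main>, and <Modal.Footer>
export const Modal = Object.assign(ModalRoot, {
  CloseButton,
  Header: ModalHeader,
  Main: ModalMain,
  Footer: ModalFooter,
});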

With our Modal component and its child components defined, let’s see how they can be used interchangeably to create each of the three designs.

Note: The full markup and styles are not shown so as not to take away from the core takeaways.

Edit Profile Modal

In the Edit Profile modal, we use each of the Modal components. However, each is used only as a container that styles and positions itself. This is why we haven’t included a className prop for them. Any content styling should be handled by the content itself, not our container components.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>Edit Profile</h1>
  </Modal.Header>

  <Modal.Main>
    <div className="modal-avatar-selection-wrapper"> ... </div>
    <form className="modal-profile-form"> ... </form>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Cancel</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>

Upload Successful Modal

Like in the previous example, the Upload Successful modal uses its components as opinionless containers. The styling for the content is handled by the content itself. Perhaps this means the buttons could be stretched by the modal-button-wrapper class, or we could add a “how do I look?” prop, like isFullWidth, to the Button component for a wider or full-width size.

<Modal>
  <Modal.Header>
    <img src="..." alt="..." />
    <h1>Upload Successful</h1>
  </Modal.Header>

  <Modal.Main>
    <p> ... </p>
    <div className="modal-copy-upload-link-wrapper"> ... </div>
  </Modal.Main>

  <Modal.Footer>
    <div className="modal-button-wrapper">
      <Button onClick={closeModal} theme="tertiary">Skip</Button>
      <Button onClick={saveProfile} theme="secondary">Save</Button>
    </div>
  </Modal.Footer>
</Modal>

Friends Modal

Lastly, our Friends modal does away with the Modal.Footer section. Here, it may be enticing to define the overflow styles on Modal.Main, but that is extending the container’s responsibilities to its content. Instead, handling those styles is better suited in the modal-friends-wrapper class.

<Modal>
  <Modal.CloseButton onClick={closeModal} />

  <Modal.Header>
    <h1>AngusMcSix's Friends</h1>
  </Modal.Header>

  <Modal.Main>
      <div className="modal-friends-wrapper">
        <div className="modal-friends-friend-wrapper"> ... </div>
        <div className="modal-friends-friend-wrapper"> ... </div>
        <div className="modal-friends-friend-wrapper"> ... </div>
      </div>
  </Modal.Main>
</Modal>

With a selfishly designed Modal component, we can accommodate evolving and changing designs with flexible and tightly scoped components.

Next Modal Evolutions

Given all that we’ve covered, let’s throw around some hypotheticals regarding our Modal and how it may evolve. How would you approach these design variations?

A design requires a fullscreen modal. How would you adjust the Modal to accommodate a fullscreen variation?

Another design is for a 2-step registration process. How could the Modal accommodate this type of design and functionality?

Recap

Components are the workhorses of modern web development. Greater importance continues to be placed on component libraries, either standalone or as part of a design system. With how fast the web moves, having components that are accessible, stable, and resilient is absolutely critical.

Unfortunately, components are often built to do too much. They are built to inherit the responsibilities and concerns of their content and surroundings. Patterns that bake in this level of consideration break down a little further with each iteration until a component no longer works. At this point, the codebase splits, more technical debt is introduced, and inconsistencies creep into the UI.

If we break a component down to its core responsibilities and build an API of props that only define those responsibilities, without consideration of content inside or around the component, we build components that can be resilient to change. This selfish approach to component design ensures a component is only responsible for itself and not its content. Treating components as little more than semantic containers means content can change or even move between containers without effect. The less considerate a component is about its content and its surroundings, the better for everybody — better for the content that will forever change, better for the consistency of the design and UI, which in turn is better for the people consuming that changing content, and lastly, better for the developers using the components.

The key to good component design is selfishness. Being a considerate team player is the responsibility of the developer.

How To Build a Node.js API Proxy Using http-proxy-middleware

A proxy is something that acts on behalf of something else. Your best friend marking you present in the boring lecture you skipped during college is a real-life example of proxying.

When it comes to API development, a proxy is an interface between the client and the API server. The job of the interface is to proxy incoming requests to the real server.
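As a preview of where this is going, a minimal proxy built with Express and http-proxy-middleware can be just a few lines; the port and target URL below are placeholders:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Forward everything under /api to the real API server
app.use('/api', createProxyMiddleware({
  target: 'https://example.org', // placeholder upstream API
  changeOrigin: true, // rewrite the Host header to match the target
}));

app.listen(3000);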

What’s the Best Way to Determine the Price of an API?

As an API provider, once you’ve decided to bring in revenue from your APIs, the next step is to figure out how you will price the usage. As with any business decision, there are plenty of ways to go about pricing an API. There are many short-term strategies to establish initial pricing and then iterate to find which pricing model and price points work best for your customers. Long term, there are also steps that can be taken to ensure your pricing stays relevant and balances retention and revenue.

In this article, we will take a deep dive into all the relevant questions when it comes to API pricing. I will aim to give you some points to think on as well as some best practices. Hopefully, by the end of the article, you will have a better understanding of API pricing and how you can apply the techniques below to your organization’s monetized APIs. Lastly, we will take a look at how an API analytics platform can help you to figure out what APIs to monetize, how to price them, and how to actually charge users for usage. Let’s jump in!

Complex Navigation in SwiftUI

Navigation in SwiftUI has been a major focus of the framework from day one; however, when we tried to create an app that had a bit more navigation and view complexity, we ran into the first problems.

Given the importance of navigation within an app, it has been revised and improved. At WWDC 22, Apple published a new API for building complex navigation flows, making development easier.

Conway’s Game Of Life – Cellular Automata and Renderbuffers in Three.js

Simple rules can produce structured, complex systems. And beautiful images often follow. This is the core idea behind the Game of Life, a cellular automaton devised by British mathematician John Horton Conway in 1970. Often called just ‘Life’, it’s probably one of the most popular and well-known examples of cellular automata. There are many examples and tutorials on the web that go over implementing it, like this one by Daniel Shiffman.

But in many of these examples this computation runs on the CPU, limiting the possible complexity and number of cells in the system. So this article will go over implementing the Game of Life in WebGL, which allows GPU-accelerated computations (= way more complex and detailed images). Writing WebGL on its own can be very painful, so it’s going to be implemented using Three.js, a WebGL graphics library. This is going to require some advanced rendering techniques, so some basic familiarity with Three.js and GLSL would be helpful in order to follow along.

Cellular Automata

Conway’s game of life is what’s called a cellular automaton and it makes sense to consider a more abstract view of what that means. This relates to automata theory in theoretical computer science, but really it’s just about creating some simple rules. A cellular automaton is a model of a system that consists of automata, called cells, that are interlinked via some simple logic which allows modelling complex behaviour. A cellular automaton has the following characteristics:

  • Cells live on a grid which can be 1D or higher-dimensional (in our Game of Life it’s a 2D grid of pixels)
  • Each cell has only one current state. Our example only has two possibilities: 0 or 1 / dead or alive
  • Each cell has a neighbourhood, a list of adjacent cells

The basic working principle of a cellular automaton usually involves the following steps:

  • An initial (global) state is selected by assigning a state for each cell.
  • A new generation is created, according to some fixed rule that determines the new state of each cell in terms of:
    • The current state of the cell
    • The states of cells in its neighbourhood

The state of a cell together with its neighbourhood determines the state in the next generation.

As already mentioned, the Game of Life is based on a 2D grid. In its initial state there are cells which are either alive or dead. We generate the next generation of cells according to only four rules:

  • Any live cell with fewer than two live neighbours dies, as if by underpopulation.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overpopulation.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Conway’s Game of Life uses a Moore neighbourhood, which is composed of the current cell and the eight cells that surround it, so those are the ones we’ll be looking at in this example. There are many variations and possibilities to this, and Life is actually Turing complete, but this post is about implementing it in WebGL with Three.js so we will stick to a rather basic version but feel free to research more.

Three.js

Now with most of the theory out of the way, we can finally start implementing the Game of Life.

Three.js is a pretty high-level WebGL library, but it lets you decide how deep you want to go. So it provides a lot of options to control the way scenes are structured and rendered and allows users to get close to the WebGL API by writing custom shaders in GLSL and passing Buffer Attributes.

In the Game of Life each cell needs information about its neighbourhood. But in WebGL all fragments are processed simultaneously by the GPU, so when a fragment shader is in the midst of processing one pixel, there’s no way it can directly access information about any other fragments. But there’s a workaround. In a fragment shader, if we pass a texture, we can easily query the neighbouring pixels in the texture as long as we know its width and height. This idea allows all kinds of post-processing effects to be applied to scenes.

We’ll start with the initial state of the system. In order to get any interesting results, we need non-uniform starting conditions. In this example we’ll place cells randomly on the screen, so we’ll render a regular noise texture for the first frame. Of course we could initialise with another type of noise, but this is the easiest way to get started.

/**
 * Sizes
 */
const sizes = {
	width: window.innerWidth,
	height: window.innerHeight
};

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();

/**
 * Textures
 */
//The generated noise texture
const dataTexture = createDataTexture();

/**
 * Meshes
 */
// Geometry
const geometry = new THREE.PlaneGeometry(2, 2);

//Screen resolution
const resolution = new THREE.Vector3(sizes.width, sizes.height, window.devicePixelRatio);

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

// Meshes
const mesh = new THREE.Mesh(geometry, quadMaterial);
scene.add(mesh);

/**
 * Animate
 */

const tick = () => {
    //The texture will get rendered to the default framebuffer
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

This code simply initialises a Three.js scene and adds a 2D plane to fill the screen (the snippet doesn’t show all the basic boilerplate code). The plane is supplied with a Shader Material, that for now does nothing but display a texture in its fragment shader. In this code we generate a texture using a DataTexture. It would be possible to load an image as a texture too, in that case we would need to keep track of the exact texture size. Since the scene will take up the entire screen, creating a texture with the viewport dimensions seems like the simpler solution for this tutorial. Currently the scene will be rendered to the default framebuffer (the device screen).
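The createDataTexture() helper isn’t shown in the snippet above. A minimal version, sketched here as one possible implementation rather than the author’s exact code, fills a THREE.DataTexture with random on/off pixels:

function createDataTexture() {
	// One RGBA quadruple for every pixel of the viewport
	const size = sizes.width * sizes.height;
	const data = new Float32Array(4 * size);

	for (let i = 0; i < size; i++) {
		const stride = i * 4;
		// Randomly seed each cell as alive (1) or dead (0)
		const value = Math.random() < 0.5 ? 1 : 0;
		data[stride] = value;   // R: the channel the shader samples
		data[stride + 1] = 0;   // G
		data[stride + 2] = 0;   // B
		data[stride + 3] = 1;   // A
	}

	const texture = new THREE.DataTexture(data, sizes.width, sizes.height, THREE.RGBAFormat, THREE.FloatType);
	texture.needsUpdate = true;
	return texture;
}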

See the demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Framebuffers

When writing a WebGL application, whether using the vanilla API or a higher level library like Three.js, after setting up the scene the results are rendered to the default WebGL framebuffer, which is the device screen (as done above).

But there’s also the option to create framebuffers that render off-screen, to image buffers on the GPU’s memory. Those can then be used just like a regular texture for whatever purpose. This idea is used in WebGL when it comes to creating advanced post-processing effects such as depth-of-field, bloom, etc. by applying different effects on the scene once rendered. In Three.js we can do that by using THREE.WebGLRenderTarget. We’ll call our framebuffer renderBufferA.

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();
//Create a second scene that will be rendered to the off-screen buffer
const bufferScene = new THREE.Scene();

/**
 * Render Buffers
 */
// Create a new framebuffer we will use to render to
// the GPU memory
let renderBufferA = new THREE.WebGLRenderTarget(sizes.width, sizes.height, {
	// Below settings hold the uv coordinates and retain precision.
	minFilter: THREE.NearestFilter,
	magFilter: THREE.NearestFilter,
	format: THREE.RGBAFormat,
	type: THREE.FloatType,
	stencilBuffer: false
});

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
        //Now the screen material won't get a texture initially
        //The idea is that this texture will be rendered off-screen
		uTexture: { value: null },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

//off-screen Framebuffer will receive a new ShaderMaterial
// Buffer Material
const bufferMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	//For now this fragment shader does the same as the one used above
	fragmentShader: document.getElementById('fragmentShaderBuffer').textContent
});

/**
 * Animate
 */

const tick = () => {
	// Explicitly set renderBufferA as the framebuffer to render to
	//the output of this rendering pass will be stored in the texture associated with renderBufferA
	renderer.setRenderTarget(renderBufferA);
	// This will render the scene to the off-screen texture
	renderer.render(bufferScene, camera);

	mesh.material.uniforms.uTexture.value = renderBufferA.texture;
	//This will set the default framebuffer (i.e. the screen) back to being the output
	renderer.setRenderTarget(null);
	//Render to screen
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

Now there’s nothing to be seen because, while the scene is rendered, it’s rendered to an off-screen buffer.

See the demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

We’ll need to access it as a texture in the animation loop to render the generated texture from the previous step to the fullscreen plane on our screen.

//In the animation loop before rendering to the screen
mesh.material.uniforms.uTexture.value = renderBufferA.texture;

And that’s all it takes to get back the noise, except now it’s rendered off-screen and the output of that render is used as a texture in the framebuffer that renders on to the screen.

See the demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Ping-Pong 🏓

Now that there’s data rendered to a texture, the shaders can be used to perform general computation using the texture data. Within GLSL, textures are read-only: we can’t write directly to our input textures, we can only “sample” them. Using the off-screen framebuffer, however, we can use the output of the shader itself to write to a texture. Then, if we can chain together multiple rendering passes, the output of one rendering pass becomes the input for the next pass. So we create two off-screen buffers. This technique is called ping-pong buffering. We create a kind of simple ring buffer, where after every frame we swap the off-screen buffer that is being read from with the off-screen buffer that is being written to. We can then use the off-screen buffer that was just written to, and display that to the screen. This lets us perform iterative computation on the GPU, which is useful for all kinds of effects.

To achieve it in THREE.js, first we need to create a second framebuffer. We will call it renderBufferB. Then the ping-pong technique is actually performed in the animation loop.

//Add another framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
    sizes.width,
    sizes.height,
    {
        minFilter: THREE.NearestFilter,
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType,
        stencilBuffer: false
    }
);

//At the end of each animation loop

// Ping-pong the framebuffers by swapping them
// at the end of each frame render
// Now prepare for the next cycle by swapping renderBufferA and renderBufferB
// so that the previous frame's *output* becomes the next frame's *input*
const temp = renderBufferA;
renderBufferA = renderBufferB;
renderBufferB = temp;
//output becomes input
bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

Now that the render buffers are swapped every frame, the result will look the same, but it’s possible to verify by logging the textures that get passed to the on-screen plane each frame, for example. Here’s a more in-depth look at ping-pong buffers in WebGL.

See the demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

Game Of Life

From here it’s about implementing the actual Game of Life. Since the rules are so simple, the resulting code isn’t very complicated either, and there are many good resources that go through coding it up, so I’ll only go over the key ideas. All the logic for this will happen in the fragment shader that gets rendered off-screen, which will provide the texture for the next frame.

As described earlier, we want to access neighbouring fragments (or pixels) via the texture that’s passed in. This is achieved in a nested for loop in the GetNeighbours function. We skip our current cell and check the 8 surrounding pixels by sampling the texture at an offset. Then we check whether the pixel’s r value is above 0.5, which means it’s alive, and increment the count to represent the alive neighbours.

//GLSL in fragment shader
precision mediump float;
//The input texture
uniform sampler2D uTexture;
//Screen resolution
uniform vec3 uResolution;

// uv coordinates passed from vertex shader
varying vec2 vUvs;

float GetNeighbours(vec2 p) {
    float count = 0.0;

    for(float y = -1.0; y <= 1.0; y++) {
        for(float x = -1.0; x <= 1.0; x++) {

            if(x == 0.0 && y == 0.0)
                continue;

            // Scale the offset down
            vec2 offset = vec2(x, y) / uResolution.xy;
            // Apply offset and sample texture
            vec4 lookup = texture2D(uTexture, p + offset);
             // Accumulate the result
            count += lookup.r > 0.5 ? 1.0 : 0.0;
        }
    }

    return count;
}

Based on this count we can set the rules. (Note how we can use the standard UV coordinates here because the Texture we created in the beginning fills the screen. If we had initialised with an image texture of arbitrary dimensions, we’d need to scale coordinates according to its exact pixel size to get a value between 0.0 and 1.0)

//In the main function
    vec3 color = vec3(0.0);

    float neighbors = 0.0;

    neighbors += GetNeighbours(vUvs);

    bool alive = texture2D(uTexture, vUvs).x > 0.5;

    //cell is alive
    if(alive && (neighbors == 2.0 || neighbors == 3.0)) {

      //Any live cell with two or three live neighbours lives on to the next generation.
      color = vec3(1.0, 0.0, 0.0);

      //cell is dead
      } else if (!alive && (neighbors == 3.0)) {
      //Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
        color = vec3(1.0, 0.0, 0.0);

      }

    //In all other cases cell remains dead or dies so color stays at 0
    gl_FragColor = vec4(color, 1.0);

And that’s basically it, a working Game of Life using only GPU shaders, written in Three.js. The texture will get sampled every frame via the ping pong buffers, which creates the next generation in our cellular automaton, so no additional variable tracking the time or frames needs to be passed for it to animate.

See the demo for this step by Jason Andrew (@jasonandrewth) on CodePen.

In summary, we first went over the basic ideas behind cellular automata, which is a very powerful model of computation used to generate complex behaviour. Then we were able to implement it in Three.js using ping pong buffering and framebuffers. Now there’s near endless possibilities for taking it further, try adding different rules or mouse interaction for example.

A Poor Man’s API

Creating a full-fledged API requires resources, both time and money. You need to think about the model, the design, the REST principles, etc., without writing a single line of code. Most of the time, you don't know whether it's worth it: you'd like to offer a Minimum Viable Product and iterate from there. I want to show how you can achieve it without writing a single line of code.

The Solution

The main requirement of the solution is to use the PostgreSQL database. It's a well-established Open Source SQL database.

Optimizing A Vue App

Single Page Applications (SPAs) can provide a rich, interactive user experience when dealing with real-time, dynamic data. But they can also be heavy, bloated, and perform poorly. In this article, we’ll walk through some of the front-end optimization tips to keep our Vue apps relatively lean and only ship the JS we need when it’s needed.

Note: Some familiarity with Vue and the Composition API is assumed, but there will hopefully be some useful takeaways regardless of your framework choice.

As a front-end developer at Ada Mode, my job involves building Windscope, a web app for wind farm operators to manage and maintain their fleet of turbines. Due to the need to receive data in real time and the high level of interactivity required, an SPA architecture was chosen for the project. Our web app is dependent on some heavy JS libraries, but we want to provide the best experience for the end user by fetching data and rendering as quickly and efficiently as possible.

Choosing A Framework

Our JS framework of choice is Vue, partly chosen as it’s the framework I’m most familiar with. Previously Vue had a smaller overall bundle size compared to React. However, since recent React updates, the balance appears to have shifted in React’s favor. That doesn’t necessarily matter, as we’ll look at how to only import what we need in the course of this article. Both frameworks have excellent documentation and a large developer ecosystem, which was another consideration. Svelte is another possible choice, but it would have required a steeper learning curve due to unfamiliarity, and being newer, it has a less developed ecosystem.

As an example to demonstrate the various optimizations, I’ve built a simple Vue app that fetches data from an API and renders some charts using D3.js.

Note: Please refer to the example GitHub repository for the full code.

We’re using Parcel, a minimal-config build tool, to bundle our app, but all of the optimizations we’ll cover here are applicable to whichever bundler you choose.

Tree Shaking, Compression, And Minification With Build Tools

It’s good practice to only ship the code you need, and right out of the box, Parcel removes unused JavaScript code during the build process (tree shaking). It also minifies the result and can be configured to compress the output with Gzip or Brotli.

As well as minification, Parcel also employs scope hoisting as part of its production process, which can help make minification even more efficient. An in-depth guide to scope hoisting is outside of the scope (see what I did there?) of this article. Still, if we run Parcel’s build process on our example app with the --no-optimize and --no-scope-hoist flags, we can see the resulting bundle is 510kB — around 5 times higher than the optimized and minified version. So, whichever bundler you’re using, it’s fair to say you’ll probably want to make sure it’s carrying out as many optimizations as possible.
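For reference, that unoptimized build can be produced with a command like the following (the entry file name is an assumption):

npx parcel build src/index.html --no-optimize --no-scope-hoist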

But the work doesn’t end here. Even if we’re shipping a smaller bundle overall, it still takes time for the browser to parse and compile our JS, which can contribute to a slower user experience. This article on Bundle Size Optimization by Calibre explains how large JS bundles affect performance metrics.

Let’s look at what else we can do to reduce the amount of work the browser has to do.

Vue Composition API

Vue 3 introduced the Composition API, a new set of APIs for authoring components as an alternative to the Options API. By exclusively using the Composition API, we can import only the Vue functions that we need instead of the whole package. It also enables us to write more reusable code using composables. Code written using the Composition API lends itself better to minification, and the whole app is more susceptible to tree-shaking.

Note: You can still use the Composition API if you’re using an older version of Vue: it was backported to Vue 2.7, and there is an official plugin for older versions.
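As a small sketch, a composable written with the Composition API imports only the functions it actually uses, which is exactly what tree shaking can take advantage of:

import { ref, computed } from 'vue';

// A tiny composable: only ref() and computed() are pulled in from Vue
export function useCounter() {
  const count = ref(0);
  const double = computed(() => count.value * 2);
  const increment = () => count.value++;

  return { count, double, increment };
}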

Importing Dependencies

A key goal was to reduce the size of the initial JS bundle downloaded by the client. Windscope makes extensive use of D3 for data visualization, a large library and wide-ranging in scope. However, Windscope only needs part of it (there are entire modules in the D3 library that we don’t need at all). If we examine the entire D3 package on Bundlephobia, we can see that our app uses less than half of the available modules and perhaps not even all of the functions within those modules.

One of the easiest ways to keep our bundle size as small as possible is only to import the modules we need.

Let’s take D3’s selectAll function. Instead of using a default import, we can just import the function we need from the d3-selection module:

// Previous:
import * as d3 from 'd3'

// Instead:
import { selectAll } from 'd3-selection'

Code Splitting With Dynamic Imports

There are certain packages that are used in a bunch of places throughout Windscope, such as the AWS Amplify authentication library, specifically the Auth method. This is a large dependency that contributes heavily to our JS bundle size. Rather than import the module statically at the top of the file, dynamic imports allow us to import the module exactly where we need it in our code.

Instead of:

import { Auth } from '@aws-amplify/auth'

const user = Auth.currentAuthenticatedUser()

We can import the module when we want to use it:

import('@aws-amplify/auth').then(({ Auth }) => {
    const user = Auth.currentAuthenticatedUser()
})

This means that the module will be split out into a separate JS bundle (or “chunk”), which will only be downloaded by the browser if and when it is needed. Additionally, the browser can cache these dependencies, which may change less frequently than the code for the rest of our app.

Lazy Loading Routes With Vue Router

Our app uses Vue Router for navigation. Similarly to dynamic imports, we can lazyload our route components, so they will only be imported (along with their associated dependencies) when a user navigates to that route.

In our index/router.js file:

// Previously:
import Home from "../routes/Home.vue";
import About from "../routes/About.vue";

// Lazyload the route components instead:
const Home = () => import("../routes/Home.vue");
const About = () => import("../routes/About.vue");

const routes = [
  {
    name: "home",
    path: "/",
    component: Home,
  },
  {
    name: "about",
    path: "/about",
    component: About,
  },
];

The code for the ‘About’ route will only be loaded when the user clicks the ‘About’ link and navigates to the route.

Async Components

In addition to lazyloading each route, we can also lazyload individual components using Vue’s defineAsyncComponent method.

const KPIComponent = defineAsyncComponent(() => import('../components/KPI.vue'))

This means the code for the KPI component will be dynamically imported, as we saw in the router example. We can also provide some components to display while it’s in a loading or error state (useful if we’re loading a particularly large file).

const KPIComponent = defineAsyncComponent({
  loader: () => import('../components/KPI.vue'),
  loadingComponent: Loader,
  errorComponent: Error,
  delay: 200,
  timeout: 5000,
});

Splitting API Requests

Our application is primarily concerned with data visualization and relies heavily on fetching large amounts of data from the server. Some of these requests can be quite slow, as the server has to perform a number of computations on the data. In our initial prototype, we made a single request to the REST API per route. Unfortunately, we found this resulted in users having to wait a long time — sometimes up to 10 seconds, watching a loading spinner before the app successfully received the data and could begin rendering the visualizations.

We made the decision to split the API into several endpoints and make a request for each widget. While this could increase the response time overall, it means the app should become usable much quicker, as users will see parts of the page rendered while they’re still waiting for others. Additionally, any error that might occur will be localized while the rest of the page remains usable.

You can see the difference illustrated here:

Conditionally Load Components

Now we can combine this with async components to only load a component when we’ve received a successful response from the server. Here we’re fetching the data, then importing the component when our fetch function returns successfully:

<template>
  <div>
    <component :is="KPIComponent" :data="data"></component>
  </div>
</template>

<script>
import {
  defineComponent,
  ref,
  defineAsyncComponent,
} from "vue";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
    components: { Loader, Error },

    setup() {
        const data = ref(null);

        const loadComponent = () => {
          return fetch('https://api.npoint.io/ec46e59905dc0011b7f4')
            .then((response) => response.json())
            .then((response) => (data.value = response))
            .then(() => import("../components/KPI.vue")) // Import the component
            .catch((e) => console.error(e));
        };

        const KPIComponent = defineAsyncComponent({
          loader: loadComponent,
          loadingComponent: Loader,
          errorComponent: Error,
          delay: 200,
          timeout: 5000,
        });

        return { data, KPIComponent };
    }
});
</script>

To handle this process for every component, we created a higher order component called WidgetLoader, which you can see in the repository.

This pattern can be extended to any place in the app where a component is rendered upon user interaction. For example, in Windscope, we load a map component (and its dependencies) only when the user clicks on the ‘Map’ tab. This is known as Import on interaction.
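A hedged sketch of that pattern, with the tab reduced to a simple button and the file path assumed:

<template>
  <div>
    <button @click="showMap = true">Map</button>
    <component v-if="showMap" :is="MapView"></component>
  </div>
</template>

<script>
import { defineComponent, ref, defineAsyncComponent } from "vue";

export default defineComponent({
  setup() {
    const showMap = ref(false);
    // The map component (and its dependencies) is only downloaded
    // the first time it actually renders
    const MapView = defineAsyncComponent(() => import("./MapView.vue"));

    return { showMap, MapView };
  },
});
</script>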

CSS

If you run the example code, you will see that clicking the ‘Locations’ navigation link loads the map component. As well as dynamically importing the JS module, importing the dependency within the component’s <style> block will lazyload the CSS too:

// In MapView.vue
<style>
@import "../../node_modules/leaflet/dist/leaflet.css";

.map-wrapper {
  aspect-ratio: 16 / 9;
}
</style>

Refining The Loading State

At this point, we have our API requests running in parallel, with components being rendered at different times. One thing we might notice is the page appears janky, as the layout will be shifting around quite a bit.

A quick way to make things feel a bit smoother for users is to set an aspect ratio on the widget that roughly corresponds to the rendered component so the user doesn’t see quite as big a layout shift. We could pass in a prop for this to account for different components, with a default value to fall back to.

// WidgetLoader.vue
<template>
  <div class="widget" :style="{ 'aspect-ratio': loading ? aspectRatio : '' }">
    <component :is="AsyncComponent" :data="data"></component>
  </div>
</template>

<script>
import { defineComponent, ref, onBeforeMount, onBeforeUnmount } from "vue";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
  components: { Loader, Error },

  props: {
    aspectRatio: {
      type: String,
      default: "5 / 3", // define a default value
    },
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { aspectRatio, url, importFunction } = props;
    const data = ref(null);
    const loading = ref(true);

    const loadComponent = () => {
      return fetch(url)
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .then(importFunction) // Import the component once the data has loaded
        .catch((e) => console.error(e))
        .finally(() => (loading.value = false)); // Set the loading state to false
    };

    /* ...Rest of the component code */

    return { data, aspectRatio, loading };
  },
});
</script>

Aborting API Requests

On a page with a large number of API requests, what should happen if the user navigates away before all the requests have been completed? We probably don’t want those requests to continue running in the background, slowing down the user experience.

We can use the AbortController interface, which enables us to abort API requests as desired.

In our setup function, we create a new controller and pass its signal into our fetch request parameters:

setup(props) {
    const controller = new AbortController();

    const loadComponent = () => {
      return fetch(url, { signal: controller.signal })
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .then(importFunction)
        .catch((e) => console.error(e))
        .finally(() => (loading.value = false));
    };
}

Then we abort the request before the component is unmounted, using Vue’s onBeforeUnmount function:

onBeforeUnmount(() => controller.abort());

If you run the project and navigate to another page before the requests have been completed, you should see errors logged in the console stating that the requests have been aborted.

Stale While Revalidate

So far, we’ve done a pretty good job of optimizing our app. But when a user navigates to the second view and then back to the previous one, all the components remount and are returned to their loading state, and we have to wait for the request responses all over again.

Stale-while-revalidate is an HTTP cache invalidation strategy where the browser determines whether to serve a response from the cache if that content is still fresh or “revalidate” and serve from the network if the response is stale.

In addition to applying cache-control headers to our HTTP response (out of the scope of this article, but read this article from Web.dev for more detail), we can apply a similar strategy to our Vue component state, using the SWRV library.

First, we must import the composable from the SWRV library:

import useSWRV from "swrv";

Then we can use it in our setup function. We’ll rename our loadComponent function to fetchData, as it will only deal with data fetching. We’ll no longer import our component in this function, as we’ll take care of that separately.

We’ll pass this into the useSWRV function call as the second argument. We only need to do this if we need a custom function for fetching data (maybe we need to update some other pieces of state). As we’re using an Abort Controller, we’ll do this; otherwise, the second argument can be omitted, and SWRV will use the Fetch API:

// In setup()
const { url, importFunction } = props;

const controller = new AbortController();

const fetchData = () => {
  return fetch(url, { signal: controller.signal })
    .then((response) => response.json())
    .then((response) => (data.value = response))
    .catch((e) => (error.value = e));
};

const { data, isValidating, error } = useSWRV(url, fetchData);

Then we’ll remove the loadingComponent and errorComponent options from our async component definition, as we’ll use SWRV to handle the error and loading states.

// In setup()
const AsyncComponent = defineAsyncComponent({
  loader: importFunction,
  delay: 200,
  timeout: 5000,
});

This means we’ll need to include the Loader and Error components in our template and show and hide them depending on the state. The isValidating return value tells us whether there is a request or revalidation happening.

<template>
  <div>
    <Loader v-if="isValidating && !data"></Loader>
    <Error v-else-if="error" :errorMessage="error.message"></Error>
    <component :is="AsyncComponent" :data="data" v-else></component>
  </div>
</template>

<script>
import {
  defineComponent,
  defineAsyncComponent,
  onBeforeUnmount,
} from "vue";
import useSWRV from "swrv";
import Loader from "./Loader";
import Error from "./Error";

export default defineComponent({
  components: {
    Error,
    Loader,
  },

  props: {
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { url, importFunction } = props;

    const controller = new AbortController();

    const fetchData = () => {
      return fetch(url, { signal: controller.signal })
        .then((response) => response.json())
        .then((response) => (data.value = response))
        .catch((e) => (error.value = e));
    };

    const { data, isValidating, error } = useSWRV(url, fetchData);

    const AsyncComponent = defineAsyncComponent({
      loader: importFunction,
      delay: 200,
      timeout: 5000,
    });

    onBeforeUnmount(() => controller.abort());

    return {
      AsyncComponent,
      isValidating,
      data,
      error,
    };
  },
});
</script>

We could refactor this into its own composable, making our code a bit cleaner and enabling us to use it anywhere.

// composables/lazyFetch.js
import { onBeforeUnmount } from "vue";
import useSWRV from "swrv";

export function useLazyFetch(url) {
  const controller = new AbortController();

  const fetchData = () => {
    return fetch(url, { signal: controller.signal })
      .then((response) => response.json())
      .then((response) => (data.value = response))
      .catch((e) => (error.value = e));
  };

  const { data, isValidating, error } = useSWRV(url, fetchData);

  onBeforeUnmount(() => controller.abort());

  return {
    isValidating,
    data,
    error,
  };
}

// WidgetLoader.vue
<script>
import { defineComponent, defineAsyncComponent, computed } from "vue";
import Loader from "./Loader";
import Error from "./Error";
import { useLazyFetch } from "../composables/lazyFetch";

export default defineComponent({
  components: {
    Error,
    Loader,
  },

  props: {
    aspectRatio: {
      type: String,
      default: "5 / 3",
    },
    url: String,
    importFunction: Function,
  },

  setup(props) {
    const { aspectRatio, url, importFunction } = props;
    const { data, isValidating, error } = useLazyFetch(url);

    const AsyncComponent = defineAsyncComponent({
      loader: importFunction,
      delay: 200,
      timeout: 5000,
    });

    return {
      aspectRatio,
      AsyncComponent,
      isValidating,
      data,
      error,
    };
  },
});
</script>

Updating Indicator

It might be useful if we could show an indicator to the user while our request is revalidating so that they know the app is checking for new data. In the example, I’ve added a small loading indicator in the corner of the component, which will only be shown if there is already data, but the component is checking for updates. I’ve also added a simple fade-in transition on the component (using Vue’s built-in Transition component), so there is not such an abrupt jump when the component is rendered.

<template>
  <div
    class="widget"
    :style="{ 'aspect-ratio': isValidating && !data ? aspectRatio : '' }"
  >
    <Loader v-if="isValidating && !data"></Loader>
    <Error v-else-if="error" :errorMessage="error.message"></Error>
    <Transition>
        <component :is="AsyncComponent" :data="data" v-else></component>
    </Transition>

    <!--Indicator if data is updating-->
    <Loader
      v-if="isValidating && data"
      text=""
    ></Loader>
  </div>
</template>

Conclusion

Prioritizing performance when building our web apps improves the user experience and helps ensure they can be used by as many people as possible. We’ve successfully used the above techniques at Ada Mode to make our applications faster. I hope this article has provided some pointers on how to make your app as efficient as possible — whether you choose to implement them in full or in part.

SPAs can work well, but they can also be a performance bottleneck. So, let’s try to build them better.

3D Typing Effects with Three.js

In this tutorial we’ll explore various animated WebGL text typing effects. We will mostly be using Three.js, but the tutorial doesn’t rely entirely on the specific features of this library.
But who doesn’t love Three.js though ❤

This tutorial is aimed at developers who are familiar with the basic concepts of WebGL.

The main idea is to create a JavaScript template that takes a keyboard input and draws the text on the screen in some fancy way. The effects we will build today are all about composing a text shape with a big number of repeating objects. We will cover the following steps:

  • Sampling text on Canvas (generating 2D coordinates)
  • Setting up the scene and placing the Canvas element
  • Generating particles in 3D space
  • Turning particles to an instanced mesh
  • Replacing a static string with some user input
  • Basic animation
  • Typing-related animation
  • Generating the visuals: clouds, bubbles, flowers, eyeballs

Text sampling

In the following we will fill a text shape with some particles.

First, let’s think about what a 3D text shape is. In general, a text mesh is nothing but a 2D shape being extruded. So we don’t need to sample the 3rd coordinate – we can just use X/Y coordinates with Z being randomly generated within the text depth (although we’re not about to use the Z coordinate much today).

One of the ways to generate 2D coordinates inside the shape is with Canvas sampling. So let’s create a <canvas> element, apply some font-related styles to it and make sure the size of <canvas> is big enough for the text to fit (extra space is okay).

// Settings
const fontName = 'Verdana';
const textureFontSize = 100;

// String to show
let string = 'Some text' + '\n' + 'to sample' + '\n' + 'with Canvas';

// Create canvas to sample the text
const textCanvas = document.createElement('canvas');
const textCtx = textCanvas.getContext('2d');
document.body.appendChild(textCanvas);

// ---------------------------------------------------------------

sampleCoordinates();

// ---------------------------------------------------------------

function sampleCoordinates() {

    // Parse text
    const lines = string.split(`\n`);
    const linesMaxLength = [...lines].sort((a, b) => b.length - a.length)[0].length;
    const wTexture = textureFontSize * .7 * linesMaxLength;
    const hTexture = lines.length * textureFontSize;

    // ...
}

With the Canvas API you can set all the font styling pretty much like in CSS. Custom fonts can be used as well, but I’m using good old Verdana today.

Once the style is set, we draw the text (or any other graphics!) on the <canvas>…

function sampleCoordinates() {

    // Parse text
    // ...

    // Draw text
    const linesNumber = lines.length;
    textCanvas.width = wTexture;
    textCanvas.height = hTexture;
    textCtx.font = '100 ' + textureFontSize + 'px ' + fontName;
    textCtx.fillStyle = '#2a9d8f';
    textCtx.clearRect(0, 0, textCanvas.width, textCanvas.height);
    for (let i = 0; i < linesNumber; i++) {
        textCtx.fillText(lines[i], 0, (i + .8) * hTexture / linesNumber);
    }

    // ...
}

… to be able to get imageData from it.

The ImageData object contains a one-dimensional array with RGBA data for every pixel. Knowing the size of the canvas, we can go through the array and check if the given X/Y coordinate matches the color of text or the color of the background.

Since our canvas doesn’t have anything but colored text on the unset (transparent black) background, we can check any of the four RGBA bytes against a condition as simple as “bigger than zero”.

function sampleCoordinates() {
    // Parse text
    // ...
    // Draw text
    // ...
    // Sample coordinates
    textureCoordinates = [];
    const samplingStep = 4;
    if (wTexture > 0) {
        const imageData = textCtx.getImageData(0, 0, textCanvas.width, textCanvas.height);
        for (let i = 0; i < textCanvas.height; i += samplingStep) {
            for (let j = 0; j < textCanvas.width; j += samplingStep) {
                // Checking if R-channel is not zero since the background RGBA is (0,0,0,0)
                if (imageData.data[(j + i * textCanvas.width) * 4] > 0) {
                    textureCoordinates.push({
                        x: j,
                        y: i
                    })
                }
            }
        }
    }
}

There are lots of things you can do with the sampling function: change the sampling step, add some randomness, apply an outline stroke to the text, and more. Below we’ll keep using only the simplest sampling. To check the result, we can add a second <canvas> and draw a dot for each of the sampled textureCoordinates.
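One way to run that check, reusing the coordinates gathered above (the test canvas here is purely for debugging and is an assumption, not part of the final setup):

// Second canvas for visualising the sampled coordinates
const testCanvas = document.createElement('canvas');
testCanvas.width = textCanvas.width;
testCanvas.height = textCanvas.height;
const testCtx = testCanvas.getContext('2d');
document.body.appendChild(testCanvas);

// One small dot per sampled coordinate
testCtx.fillStyle = '#2a9d8f';
textureCoordinates.forEach(({ x, y }) => {
    testCtx.fillRect(x, y, 2, 2);
});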

It works 🙂

The Three.js scene

Let’s set up a basic Three.js scene and place a Plane object on it. We can use the text sampling Canvas from the previous step as a color map for the Plane.
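A minimal sketch of that setup, assuming the scene and camera boilerplate from before, could use THREE.CanvasTexture to turn the text canvas into a color map:

// Use the text canvas from the previous step as a color map
const canvasTexture = new THREE.CanvasTexture(textCanvas);
const planeGeometry = new THREE.PlaneGeometry(10, 10);
const planeMaterial = new THREE.MeshBasicMaterial({ map: canvasTexture });
scene.add(new THREE.Mesh(planeGeometry, planeMaterial));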

Generating the particles

We can generate 3D coordinates with the very same sampling function. X/Y are gathered from the Canvas and for the Z coordinate we can take a random number.

The easiest way to visualize this set of coordinates would be a particle system known as THREE.Points.

function createParticles() {
    const geometry = new THREE.BufferGeometry();
    const material = new THREE.PointsMaterial({
        color: 0xff0000,
        size: 2
    });
    const vertices = [];
    for (let i = 0; i < textureCoordinates.length; i ++) {
        vertices.push(textureCoordinates[i].x, textureCoordinates[i].y, 5 * Math.random());
    }
    geometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 3));
    const particles = new THREE.Points(geometry, material);
    scene.add(particles);
}

Somehow it works ¯\_(ツ)_/¯

Obviously, we need to flip the Y coordinate for each particle and center the whole text.

To do both, we need to know the bounding box of our text. There are various ways to measure the box using the canvas API or Three.js functions. But as a temporary solution, we just take max X and Y coordinates as width and height of the text.

function refreshText() {
    sampleCoordinates();
    
    // Gather width and height of the bounding box
    const maxX = textureCoordinates.map(v => v.x).sort((a, b) => (b - a))[0];
    const maxY = textureCoordinates.map(v => v.y).sort((a, b) => (b - a))[0];
    stringBox.wScene = maxX;
    stringBox.hScene = maxY;

    createParticles();
}

For each point, the Y coordinate becomes boxTotalHeight - Y.

Shifting the whole particles system by half-width and half-height of the box solves the centering issue.

function createParticles() {
    
    // ...
    for (let i = 0; i < textureCoordinates.length; i ++) {
       // Turning Y coordinate to stringBox.hScene - Y
       vertices.push(textureCoordinates[i].x, stringBox.hScene - textureCoordinates[i].y, 5 * Math.random());
    }
    // ...
    
    // Centralizing the text
    particles.position.x = -.5 * stringBox.wScene;
    particles.position.y = -.5 * stringBox.hScene;
}

Until now, we were using pixel coordinates gathered from text canvas directly on the 3D scene. But let’s say we need the 3D text to have the height equal to 10 units. If we set 10 as a font size, the canvas resolution would be too low to make a proper sampling. To avoid it (and to be more flexible with the particles density), we can add an additional scaling factor: the value we’d multiply the canvas coordinates with before using them in 3D space.

// Settings
// ...
const textureFontSize = 30;
const fontScaleFactor = .3;

// ...

function refreshText() {

    // ...

    textureCoordinates = textureCoordinates.map(c => {
        return { x: c.x * fontScaleFactor, y: c.y * fontScaleFactor }
    });
    
    // ...
}

At this point, we can also remove the Plane object. We keep using the canvas to draw the text and sample coordinates but we don’t need to turn it to a texture and put it on the scene.

Switching to instanced mesh

Of course there are many cool things we can do with THREE.Points but our next step is turning the particles into THREE.InstancedMesh.

The main limitation of THREE.Points is the particle size. THREE.PointsMaterial is based on WebGL gl_PointSize, which can be rendered with a maximum pixel size of around 50 to 100, depending on your video card. So even if we need our particles to be as simple as planes, we sometimes can’t use THREE.Points due to this limitation. You may think about THREE.Sprite as an alternative, but (surprisingly) instanced mesh gives us much better performance with a big (10k+) number of particles.

Plus, if we want to use 3D shapes as particles, THREE.InstancedMesh is the only choice.

There is a well-known approach to working with THREE.InstancedMesh:

  1. Create an instanced mesh with a known number of instances. In our case, the number of instances is the length of our coordinates array.
function createInstancedMesh() {
    instancedMesh = new THREE.InstancedMesh(particleGeometry, particleMaterial, textureCoordinates.length);
    scene.add(instancedMesh);

    // centralize it in the same way as before
    instancedMesh.position.x = -.5 * stringBox.wScene;
    instancedMesh.position.y = -.5 * stringBox.hScene;
}
  2. Add the geometry and material to be used for each instance. I use a doughnut shape known as THREE.TorusGeometry and THREE.MeshNormalMaterial.
function init() {
    // Create scene and text canvas
    // ...

    // Instanced geometry and material
    particleGeometry = new THREE.TorusGeometry(.1, .05, 16, 50);
    particleMaterial = new THREE.MeshNormalMaterial({ });

    // ...
}
  3. Create a dummy object that helps us generate a 4×4 transform matrix for each particle. It doesn’t need to be a part of the scene.
function init() {
    // Create scene, text canvas, instanced geometry and material
    // ...

    dummy = new THREE.Object3D();
}
  4. Apply the transform matrix to each instance with the .setMatrixAt method
function updateParticlesMatrices() {
    let idx = 0;
    textureCoordinates.forEach(p => {

        // we apply samples coordinates like before + some random rotation
        dummy.rotation.set(2 * Math.random(), 2 * Math.random(), 2 * Math.random());
        dummy.position.set(p.x, stringBox.hScene - p.y, Math.random());

        dummy.updateMatrix();
        instancedMesh.setMatrixAt(idx, dummy.matrix);

        idx ++;
    })
    instancedMesh.instanceMatrix.needsUpdate = true;
}

Listening to the keyboard

So far, the string value was hard-coded. We want it to be dynamic and contain the user input.

There are many ways to listen to the keyboard: working directly with keyup/keydown events, using the HTML input element as a proxy, etc. I ended up with a <div> element that has a contenteditable attribute set. Compared to an <input> or a <textarea>, it’s more painful to parse the multi-line string from an editable <div>. But it’s much easier to get accurate pixel values for the cursor position and the text bounding box.

I won’t go too much into details here. The main idea is to keep the editable <div> focused all the time so that we keep track of whatever the user types there.

<div id="text-input" contenteditable="true" onblur="this.focus()" autofocus></div>

Using the keyup event we parse the string and get the width and height of stringBox from the contenteditable <div>, and then refresh the instanced mesh.

document.addEventListener('keyup', () => {
    handleInput();
    refreshText();
});

While parsing, we replace the inner tags with new lines (this part is specific to <div contenteditable>), and do a few things for usability, like disabling empty new lines above and below the text.

Please note that <div contenteditable> and text canvas should have the same CSS properties (font, font size, etc). With the same styles applied, the text is rendered in the very same way on both elements. With that in place, we can take the pixel values from <div contenteditable> (text width, height, cursor position) and use them for the canvas.

const textInputEl = document.querySelector('#text-input');
textInputEl.style.fontSize = textureFontSize + 'px';
textInputEl.style.font = '100 ' + textureFontSize + 'px ' + fontName;
textInputEl.style.lineHeight = 1.1 * textureFontSize + 'px'; 
// ...
function handleInput() {
    if (isNewLine(textInputEl.firstChild)) {
        textInputEl.firstChild.remove();
    }
    if (isNewLine(textInputEl.lastChild)) {
        if (isNewLine(textInputEl.lastChild.previousSibling)) {
            textInputEl.lastChild.remove();
        }
    }
    string = textInputEl.innerHTML
        .replaceAll("<p>", "\n")
        .replaceAll("</p>", "")
        .replaceAll("<div>", "\n")
        .replaceAll("</div>", "")
        .replaceAll("<br>", "")
        .replaceAll("<br/>", "")
        .replaceAll(" ", " ");
    stringBox.wTexture = textInputEl.clientWidth;
    stringBox.wScene = stringBox.wTexture * fontScaleFactor;
    stringBox.hTexture = textInputEl.clientHeight;
    stringBox.hScene = stringBox.hTexture * fontScaleFactor;
    function isNewLine(el) {
        if (el) {
            if (el.tagName) {
                if (el.tagName.toUpperCase() === 'DIV' || el.tagName.toUpperCase() === 'P') {
                    if (el.innerHTML === '<br>' || el.innerHTML === '</br>') {
                        return true;
                    }
                }
            }
        }
        return false
    }
}

Once we have the string and the stringBox, we update the instanced mesh.

function refreshText() {
    sampleCoordinates();
    textureCoordinates = textureCoordinates.map(c => {
        return { x: c.x * fontScaleFactor, y: c.y * fontScaleFactor }
    });
    // This part can be removed as we take text size from editable <div>
    // const sortedX = textureCoordinates.map(v => v.x).sort((a, b) => (b - a))[0];
    // const sortedY = textureCoordinates.map(v => v.y).sort((a, b) => (b - a))[0];
    // stringBox.wScene = sortedX;
    // stringBox.hScene = sortedY;
    recreateInstancedMesh();
    updateParticlesMatrices();
}

Coordinate sampling is the same as before, with one difference: we can now create a canvas with the exact text size, so there is no extra space to sample.

function sampleCoordinates() {
    const lines = string.split(`\n`);
    // This part can be removed as we take text size from editable <div>
    // const linesMaxLength = [...lines].sort((a, b) => b.length - a.length)[0].length;
    // stringBox.wTexture = textureFontSize * .7 * linesMaxLength;
    // stringBox.hTexture = lines.length * textureFontSize;
    textCanvas.width = stringBox.wTexture;
    textCanvas.height = stringBox.hTexture;
    // ...
}

We can’t increase the number of instances of an existing mesh, so the mesh has to be recreated every time the text is updated. The text centering and instance transforms, though, are done exactly as before.

// function createInstancedMesh() {
function recreateInstancedMesh() {

    // Now we need to remove the old Mesh and create a new one every refreshText() call
    scene.remove(instancedMesh);
    instancedMesh = new THREE.InstancedMesh(particleGeometry, particleMaterial, textureCoordinates.length);

    // ...
}

function updateParticlesMatrices() {

    // same as before
    //...

}

Since our text is dynamic and it can get pretty long, let’s make sure the instanced mesh fits the screen:

function refreshText() {

    // ...

    makeTextFitScreen();
}

function makeTextFitScreen() {
    const fov = camera.fov * (Math.PI / 180);
    const fovH = 2 * Math.atan(Math.tan(fov / 2) * camera.aspect);
    const dx = Math.abs(.55 * stringBox.wScene / Math.tan(.5 * fovH));
    const dy = Math.abs(.55 * stringBox.hScene / Math.tan(.5 * fov));
    const factor = Math.max(dx, dy) / camera.position.length();
    if (factor > 1) {
        camera.position.x *= factor;
        camera.position.y *= factor;
        camera.position.z *= factor;
    }
}

One more thing to add is a caret (text cursor). It can be a simple 3D box with a size matching the font size.

function init() {
    // ...
    const cursorGeometry = new THREE.BoxGeometry(.3, 4.5, .03);
    cursorGeometry.translate(.5, -2.7, 0)
    const cursorMaterial = new THREE.MeshNormalMaterial({
        transparent: true,
    });
    cursorMesh = new THREE.Mesh(cursorGeometry, cursorMaterial);
    scene.add(cursorMesh);
}

We gather the position of the caret from our editable <div> in pixels and multiply it by fontScaleFactor, like we do with the bounding box width and height.

function handleInput() {

    // ...
    
    stringBox.caretPosScene = getCaretCoordinates().map(c => c * fontScaleFactor);

    function getCaretCoordinates() {
        const range = window.getSelection().getRangeAt(0);
        const needsToWorkAroundNewlineBug = (range.startContainer.nodeName.toLowerCase() === 'div' && range.startOffset === 0);
        if (needsToWorkAroundNewlineBug) {
            return [
                range.startContainer.offsetLeft,
                range.startContainer.offsetTop
            ]
        } else {
            const rects = range.getClientRects();
            if (rects[0]) {
                return [rects[0].left, rects[0].top]
            } else {
                // since getClientRects() gets buggy in FF
                document.execCommand('selectAll', false, null);
                return [
                    0, 0
                ]
            }
        }
    }
}

The cursor just needs the same centering as our instanced mesh has, and voilà, the 3D caret position matches the caret position in the input div.

function refreshText() {
    // ...
    
    updateCursorPosition();
}

function updateCursorPosition() {
    cursorMesh.position.x = -.5 * stringBox.wScene + stringBox.caretPosScene[0];
    cursorMesh.position.y = .5 * stringBox.hScene - stringBox.caretPosScene[1];
}

The only thing left is to make the cursor blink when the page (and hence the input element) is focused. The roundPulse function generates the rounded pulse between 0 and 1 from THREE.Clock.getElapsedTime(). We need to update the cursor opacity all the time, so the updateCursorOpacity call goes to the main render loop.

function render() {
    // ...

    updateCursorOpacity();
    
    // ...
}

let roundPulse = (t) => Math.sign(Math.sin(t * Math.PI)) * Math.pow(Math.sin((t % 1) * 3.14), .2);

function updateCursorOpacity() {
    if (document.hasFocus() && document.activeElement === textInputEl) {
        cursorMesh.material.opacity = roundPulse(2 * clock.getElapsedTime());
    } else {
        cursorMesh.material.opacity = 0;
    }
}

Basic animation

Instead of setting the instance transforms only on text updates, we can also animate them.

To do this, we add an additional array of Particle objects to store the parameters for each instance. We still need the textureCoordinates array to store the 2D coordinates in pixels, but now we remap them to the particles array. And obviously, the particle transform updates should now happen in the main render loop.

// ...
let textureCoordinates = [];
let particles = [];

function refreshText() {
    
    // ...

    // textureCoordinates holds only pixel coordinates, particles is an array of data objects
    particles = textureCoordinates.map(c => 
        new Particle([c.x * fontScaleFactor, c.y * fontScaleFactor])
    );

    // We call it in the render() loop now
    // updateParticlesMatrices();

    // ...
}

Each Particle object contains a list of properties and a grow() function that updates some of those properties.

For starters, we define position, rotation and scale. Position would be static for each particle, scale would increase from zero to one when the particle is created, and rotation would be animated all the time.

function Particle([x, y]) {
    this.x = x;
    this.y = y;
    this.z = 0;
    this.rotationX = Math.random() * 2 * Math.PI;
    this.rotationY = Math.random() * 2 * Math.PI;
    this.rotationZ = Math.random() * 2 * Math.PI;
    this.scale = 0;
    this.deltaRotation = .2 * (Math.random() - .5);
    this.deltaScale = .01 + .2 * Math.random();
    this.grow = function () {
        this.rotationX += this.deltaRotation;
        this.rotationY += this.deltaRotation;
        this.rotationZ += this.deltaRotation;
        if (this.scale < 1) {
            this.scale += this.deltaScale;
        }
    }
}
// ...
function updateParticlesMatrices() {
    let idx = 0;
    // textureCoordinates.forEach(p => {
    particles.forEach(p => {
        // update the particles data
        p.grow();
        // dummy.rotation.set(2 * Math.random(), 2 * Math.random(), 2 * Math.random());
        dummy.rotation.set(p.rotationX, p.rotationY, p.rotationZ);
        dummy.scale.set(p.scale, p.scale, p.scale);
        dummy.position.set(p.x, stringBox.hScene - p.y, p.z);
        dummy.updateMatrix();
        instancedMesh.setMatrixAt(idx, dummy.matrix);
        idx ++;
    })
    instancedMesh.instanceMatrix.needsUpdate = true;
}

Typing animation

We already have a nice template by now. But every time the text is updated, we recreate all the instances for all the symbols, which resets the properties and animations of every particle.

Instead, we need to keep the properties and animations for “old” particles. To do so, we need to know if each particle should be recreated or not.

In other words, for each sampled coordinate we need to check if a Particle already exists or not. If we find a Particle object with the same X/Y coordinates, we keep it along with all its properties. If there is no existing Particle for the sampled coordinate, we call new Particle() like we did before.

We evolve the sampling function so it doesn’t only gather the X/Y values and refill the textureCoordinates array, but also does the following:

  1. Turn the one-dimensional imageData array into a two-dimensional imageMask array
  2. Go through the existing textureCoordinates array and compare its elements to the imageMask. If a coordinate still exists, set its old property; otherwise, set its toDelete property.
  3. Handle all the sampled coordinates that were not found in textureCoordinates as new coordinates that have X and Y values and both the old and toDelete properties set to false

It would make sense to simply delete old coordinates that were not found in the new imageMask. But we use a special toDelete property instead: we first play a fade-out animation for the deleted particles, and actually delete the Particle data only in the next step.

function sampleCoordinates() {
    // Draw text
    // ...
    // Sample coordinates
    if (stringBox.wTexture > 0) {
        // Image data to 2d array
        const imageData = textCtx.getImageData(0, 0, textCanvas.width, textCanvas.height);
        const imageMask = Array.from(Array(textCanvas.height), () => new Array(textCanvas.width));
        for (let i = 0; i < textCanvas.height; i++) {
            for (let j = 0; j < textCanvas.width; j++) {
                imageMask[i][j] = imageData.data[(j + i * textCanvas.width) * 4] > 0;
            }
        }
        if (textureCoordinates.length !== 0) {
            // Clean up: delete coordinates and particles which disappeared on the prev step
            // We need to keep same indexes for coordinates and particles to reuse old particles properly
            textureCoordinates = textureCoordinates.filter(c => !c.toDelete);
            particles = particles.filter(c => !c.toDelete);
            // Go through existing coordinates (old to keep, toDelete for fade-out animation)
            textureCoordinates.forEach(c => {
                if (imageMask[c.y]) {
                    if (imageMask[c.y][c.x]) {
                        c.old = true;
                        if (!c.toDelete) {
                            imageMask[c.y][c.x] = false;
                        }
                    } else {
                        c.toDelete = true;
                    }
                } else {
                    c.toDelete = true;
                }
            });
        }
        // Add new coordinates
        for (let i = 0; i < textCanvas.height; i++) {
            for (let j = 0; j < textCanvas.width; j++) {
                if (imageMask[i][j]) {
                    textureCoordinates.push({
                        x: j,
                        y: i,
                        old: false,
                        toDelete: false
                    })
                }
            }
        }
    } else {
        textureCoordinates = [];
    }
}

With old and toDelete properties, mapping texture coordinates to the particles becomes conditional:

function refreshText() {
    
    // ...

    // particles = textureCoordinates.map(c => 
    //     new Particle([c.x * fontScaleFactor, c.y * fontScaleFactor])
    // );
    particles = textureCoordinates.map((c, cIdx) => {
        const x = c.x * fontScaleFactor;
        const y = c.y * fontScaleFactor;
        let p = (c.old && particles[cIdx]) ? particles[cIdx] : new Particle([x, y]);
        if (c.toDelete) {
            p.toDelete = true;
            p.scale = 1;
        }
        return p;
    });

    // ...

}

The grow() call would not only increase the size of the particle when it’s created. We would also decrease it if the particle is meant to be deleted.

function Particle([x, y]) {
    // ...
    
    this.toDelete = false;
    
    this.grow = function () {
        // ...
        if (this.scale < 1) {
            this.scale += this.deltaScale;
        }
        if (this.toDelete) {
            this.scale -= this.deltaScale;
            if (this.scale <= 0) {
                this.scale = 0;
            }
        }
    }
}

The template is now ready, and we can use it to create various effects with only small changes.

Bubbles effect 🫧

See the Pen Bubble Typer Three.js – Demo #2 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Here is the full list of changes I made to make these bubbles based on the template:

  1. Change TorusGeometry to IcosahedronGeometry so each instance is a sphere
  2. Replace MeshNormalMaterial with ShaderMaterial. You can check out the GLSL code in the sandbox above, but the shader essentially does this:
    • mixes a white color and a randomized gradient (taken from the normal vector), and uses the result as the sphere color
    • applies transparency so that the outline of the sphere is less transparent and its middle more transparent when viewed from the camera position
  3. Adjust textureFontSize and fontScaleFactor values to change the density of the particles
  4. Evolve the Particle object (a sketch follows this list) so that
    • the bubble position is a bit randomized compared to the sampled coordinates
    • the maximum size of the bubble is defined by a randomized maxScale property
    • there is no rotation
    • the bubble size is randomized, as the scale limit is the maxScale property, not 1
    • the bubble grows all the time, bursts, and then grows again, so the scale increase happens not only when the Particle is created but all the time. Once the scale reaches the maxScale value, we reset the scale to zero
    • some bubbles get an isFlying property so they move up from the initial position
  5. Change color of page background and cursor
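Here is a minimal sketch of what such a bubble Particle could look like. It illustrates the behavior described above rather than the exact demo code, and all the numeric constants are made up:

function Particle([x, y]) {
    // Randomize the position slightly around the sampled coordinate
    this.x = x + .2 * (Math.random() - .5);
    this.y = y + .2 * (Math.random() - .5);
    this.z = 0;
    this.isFlying = Math.random() < .1;      // some bubbles float away
    this.maxScale = .1 + .9 * Math.random(); // randomized size limit
    this.scale = 0;
    this.deltaScale = .01 + .05 * Math.random();
    this.grow = function () {
        this.scale += this.deltaScale;
        // Decreasing the sampled y moves the bubble up on the scene
        if (this.isFlying) {
            this.y -= 3 * this.deltaScale;
        }
        // Once the limit is reached, the bubble "bursts" and grows again
        if (this.scale >= this.maxScale) {
            this.scale = 0;
        }
    };
}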

Clouds effect ☁

You don’t need to do much to get clouds, either:

  1. Use PlaneGeometry for instance shape
  2. Use MeshBasicMaterial and apply the following image as an alpha map
  3. Adjust textureFontSize and fontScaleFactor to change the density of the particles
  4. Evolve the Particle object so that
    • the particle position is a bit randomized compared to the sampled coordinates
    • the size of the particle is defined by a randomized maxScale property
    • only rotation around the Z axis is needed
    • the particle size (scale) is pulsating all the time
  5. An additional transform, dummy.quaternion.copy(camera.quaternion), should be applied for each instance (see the sketch after this list). This way, the particle is always facing towards the camera; rotate the cloudy text to see the result 🙂
  6. Change color of page background and cursor
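A sketch of where that quaternion copy could land, inside the per-instance loop of updateParticlesMatrices() shown earlier; keeping the per-particle Z rotation via rotateZ is my assumption:

// Billboard each instance: align it with the camera first,
// then apply the particle's own Z rotation on top
dummy.quaternion.copy(camera.quaternion);
dummy.rotateZ(p.rotationZ);
dummy.position.set(p.x, stringBox.hScene - p.y, p.z);
dummy.scale.set(p.scale, p.scale, p.scale);
dummy.updateMatrix();
instancedMesh.setMatrixAt(idx, dummy.matrix);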

See the Pen Clouds Typer Three.js – Demo #1 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Flowers effect 🌸

Flowers are actually quite similar to clouds. The main difference is having two instanced meshes and two materials: one mapped with a flower texture, the other with a leaf.

Also, all the particles must have a new color property. We apply the colors to the instanced meshes with the setColorAt method every time we recreate the meshes.
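As a minimal sketch, assuming each Particle stores a THREE.Color in a color property (the property name is illustrative), the recoloring step could look like this:

// After recreating the instanced mesh, assign a per-instance color
particles.forEach((p, idx) => {
    instancedMesh.setColorAt(idx, p.color);
});
instancedMesh.instanceColor.needsUpdate = true;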

With a few small changes like particle density, scaling speed, rotation speed, and the color of the background and cursor, we have this:

See the Pen Flower Typer Three.js – Demo #3 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Eyes effect 👀

We can go further, load a glb model, and use it as an instance! I took this nice-looking eye from turbosquid.com.

Instead of applying a random rotation, we can make the eyeballs follow the mouse position! To do so, we need an additional transparent plane in front of the instanced mesh, a THREE.Raycaster(), and a mouse position tracker. We listen to the mousemove event, cast a ray from the mouse to the plane, and make the dummy object look at the intersection point.
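Here is a rough sketch of that mouse tracking, assuming the invisible plane lives in a raycastPlane variable; all the names are illustrative:

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
let lookAtPoint = new THREE.Vector3();

window.addEventListener('mousemove', (e) => {
    // Normalize the mouse coordinates to the [-1, 1] range
    pointer.x = (e.clientX / window.innerWidth) * 2 - 1;
    pointer.y = -(e.clientY / window.innerHeight) * 2 + 1;
    // Cast a ray from the camera through the pointer to the plane
    raycaster.setFromCamera(pointer, camera);
    const intersects = raycaster.intersectObject(raycastPlane);
    if (intersects.length) {
        lookAtPoint = intersects[0].point;
    }
});

// Then, in updateParticlesMatrices(), instead of a random rotation:
// dummy.lookAt(lookAtPoint);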

Don’t forget to add some lights to see the imported model. And since we have lights, let’s make the instanced mesh cast a shadow onto the plane behind the text.

Together with some other small changes like sampling density, grow() function parameters, cursor and background style, we get this:

See the Pen Eyes Typer Three.js – Demo #4 by Ksenia Kondrashova (@ksenia-k) on CodePen.

And that’s it! I hope this tutorial was interesting and that it gave you some inspiration. Feel free to use this template to create more fun things!

Rendering External API Data in WordPress Blocks on the Back End

This is a continuation of my last article about “Rendering External API Data in WordPress Blocks on the Front End”. In that last one, we learned how to take an external API and integrate it with a block that renders the fetched data on the front end of a WordPress site.

The thing is, we accomplished this in a way that prevents us from seeing the data in the WordPress Block Editor. In other words, we can insert the block on a page but we get no preview of it. We only get to see the block when it’s published.

Let’s revisit the example block plugin we made in the last article. Only this time, we’re going to make use of the JavaScript and React ecosystem of WordPress to fetch and render that data in the back-end Block Editor as well.

Where we left off

As we kick this off, here’s a demo of where we landed in the last article that you can reference. You may have noticed that I used a render_callback method in the last article so that I could make use of the attributes in the PHP file and render the content.

Well, that may be useful in situations where you might have to use some native WordPress or PHP function to create dynamic blocks. But if you want to make use of just the JavaScript and React (JSX, specifically) ecosystem of WordPress to render the static HTML along with the attributes stored in the database, you only need to focus on the Edit and Save functions of the block plugin.

  • The Edit function renders the content based on what you want to see in the Block Editor. You can have interactive React components here.
  • The Save function renders the content based on what you want to see on the front end. You cannot have regular React components or hooks here. It is used to return the static HTML that is saved into your database along with the attributes.

The Save function is where we’re hanging out today. We can create interactive components on the front end, but for that, we need to manually include and access them outside the Save function in a separate file, like we did in the last article.

So, I am going to cover the same ground we did in the last article, but this time you can see the preview in the Block Editor before you publish it to the front end.

The block props

I intentionally left out any explanation of the edit function’s props in the last article because that would have taken the focus off the main point: the rendering.

If you are coming from a React background, you will likely understand what it is that I am talking about, but if you are new to this, I would recommend checking out components and props in the React documentation.

If we log the props object to the console, it returns a list of WordPress functions and variables related to our block:

Console log of the block properties.

We only need the attributes object and the setAttributes function, which I am going to destructure from the props object in my code. In the last article, I modified RapidAPI’s code so that I could store the API data through setAttributes(). Props are read-only, so we are unable to modify them directly.

Block props are similar to state variables and setState in React, but React works on the client side, while setAttributes() is used to store the attributes permanently in the WordPress database after saving the post. So, what we need to do is save them to attributes.data and then use that as the initial value for the useState() variable.

The edit function

I am going to copy-paste the HTML code that we used in football-rankings.php in the last article and edit it a little to fit the JSX syntax. Remember how we created two additional files in the last article for the front-end styling and scripts? With the way we’re approaching things today, there’s no need to create those files. Instead, we can move all of it to the Edit function.

Full code
import { useState } from "@wordpress/element";
import { useBlockProps } from "@wordpress/block-editor";

export default function Edit(props) {
  const { attributes, setAttributes } = props;
  const [apiData, setApiData] = useState(null);
    function fetchData() {
      const options = {
        method: "GET",
        headers: {
          "X-RapidAPI-Key": "Your Rapid API key",
          "X-RapidAPI-Host": "api-football-v1.p.rapidapi.com",
        },
      };
      fetch(
        "https://api-football-v1.p.rapidapi.com/v3/standings?season=2021&league=39",
          options
      )
      .then((response) => response.json())
      .then((response) => {
        let newData = { ...response }; // Clone the response data (shallow copy)
        setAttributes({ data: newData }); // Store the data in WordPress attributes
        setApiData(newData); // Modify the state with the new data
      })
      .catch((err) => console.error(err));
    }
    return (
      <div {...useBlockProps()}>
        <button className="fetch-data" onClick={() => fetchData()}>Fetch data</button>
        {apiData && (
          <>
          <div id="league-standings">
            <div
              className="header"
              style={{
                backgroundImage: `url(${apiData.response[0].league.logo})`,
              }}
            >
              <div className="position">Rank</div>
              <div className="team-logo">Logo</div>
              <div className="team-name">Team name</div>
              <div className="stats">
                <div className="games-played">GP</div>
                <div className="games-won">GW</div>
                <div className="games-drawn">GD</div>
                <div className="games-lost">GL</div>
                <div className="goals-for">GF</div>
                <div className="goals-against">GA</div>
                <div className="points">Pts</div>
              </div>
              <div className="form-history">Form history</div>
            </div>
            <div className="league-table">
              {/* Usage of [0] might be weird but that is how the API structure is. */}
              {apiData.response[0].league.standings[0].map((el) => {
                
                {/* Destructure the required data from all */}
                const { played, win, draw, lose, goals } = el.all;
                  return (
                    <>
                    <div className="team">
                      <div class="position">{el.rank}</div>
                      <div className="team-logo">
                        <img src={el.team.logo} />
                      </div>
                      <div className="team-name">{el.team.name}</div>
                      <div className="stats">
                        <div className="games-played">{played}</div>
                        <div className="games-won">{win}</div>
                        <div className="games-drawn">{draw}</div>
                        <div className="games-lost">{lose}</div>
                        <div className="goals-for">{goals.for}</div>
                        <div className="goals-against">{goals.against}</div>
                        <div className="points">{el.points}</div>
                      </div>
                      <div className="form-history">
                        {el.form.split("").map((result) => {
                          return (
                            <div className={`result-${result}`}>{result}</div>
                          );
                        })}
                      </div>
                    </div>
                    </>
                  );
                }
              )}
            </div>
          </div>
        </>
      )}
    </div>
  );
}

I have included the React hook useState() from @wordpress/element rather than importing it from the React library. That is because if I were to load it the regular way, it would download React for every block that I am using. But if I am using @wordpress/element, it loads from a single source, i.e., the WordPress layer on top of React.

This time, I have also not wrapped the code inside useEffect() but inside a function that is called only when clicking a button, so that we have a live preview of the fetched data. I have used a state variable called apiData to render the league table conditionally. So, once the button is clicked and the data is fetched, I set apiData to the new data inside fetchData(), and a re-render makes the HTML of the football rankings table available.

You will notice that once the post is saved and the page is refreshed, the league table is gone. That is because we are using an empty state (null) for apiData’s initial value. When the post saves, the attributes are saved to the attributes.data object, and we use it as the initial value for the useState() variable like this:

const [apiData, setApiData] = useState(attributes.data);

The save function

We are going to do almost the same exact thing with the save function, but modify it a little bit. For example, there’s no need for the “Fetch data” button on the front end, and the apiData state variable is also unnecessary because we are already checking it in the edit function. But we do need a local apiData variable that checks for attributes.data to conditionally render the JSX, or else it will throw undefined errors and the Block Editor UI will go blank.

Full code
import { useBlockProps } from "@wordpress/block-editor";

export default function save(props) {
  const { attributes } = props;
  let apiData = attributes.data;
  return (
    <>
      {/* Only render if apiData is available */}
      {apiData && (
        <div {...useBlockProps.save()}>
        <div id="league-standings">
          <div
            className="header"
            style={{
              backgroundImage: `url(${apiData.response[0].league.logo})`,
            }}
          >
            <div className="position">Rank</div>
            <div className="team-logo">Logo</div>
            <div className="team-name">Team name</div>
            <div className="stats">
              <div className="games-played">GP</div>
              <div className="games-won">GW</div>
              <div className="games-drawn">GD</div>
              <div className="games-lost">GL</div>
              <div className="goals-for">GF</div>
              <div className="goals-against">GA</div>
              <div className="points">Pts</div>
            </div>
            <div className="form-history">Form history</div>
          </div>
          <div className="league-table">
            {/* Usage of [0] might be weird but that is how the API structure is. */}
            {apiData.response[0].league.standings[0].map((el) => {
              const { played, win, draw, lose, goals } = el.all;
                return (
                  <>
                  <div className="team">
                    <div className="position">{el.rank}</div>
                      <div className="team-logo">
                        <img src={el.team.logo} />
                      </div>
                      <div className="team-name">{el.team.name}</div>
                      <div className="stats">
                        <div className="games-played">{played}</div>
                        <div className="games-won">{win}</div>
                        <div className="games-drawn">{draw}</div>
                        <div className="games-lost">{lose}</div>
                        <div className="goals-for">{goals.for}</div>
                        <div className="goals-against">{goals.against}</div>
                        <div className="points">{el.points}</div>
                      </div>
                      <div className="form-history">
                        {el.form.split("").map((result) => {
                          return (
                            <div className={`result-${result}`}>{result}</div>
                          );
                        })}
                      </div>
                    </div>
                  </>
                );
              })}
            </div>
          </div>
        </div>
      )}
    </>
  );
}

If you are modifying the save function after a block is already present in the Block Editor, it would show an error like this:

The football rankings block in the WordPress block Editor with an error message that the block contains an unexpected error.

That is because the markup in the saved content is different from the markup in our new save function. Since we are in development mode, it is easier to remove the block from the current page and re-insert it as a new block; that way, the updated code is used instead and things are back in sync.

This situation of removing and re-adding the block can be avoided by using the render_callback method, since the output is dynamic and controlled by PHP instead of the save function. So each method has its own advantages and disadvantages.

Tom Nowell provides a thorough explanation on what not to do in a save function in this Stack Overflow answer.

Styling the block in the editor and the front end

Regarding the styling, it is going to be almost the same thing we looked at in the last article, but with some minor changes which I have explained in the comments. I’m merely providing the full styles here since this is only a proof of concept rather than something you want to copy-paste (unless you really do need a block for showing football rankings styled just like this). And note that I’m still using SCSS that compiles to CSS on build.

Editor styles
/* Target all the blocks with the data-title="Football Rankings" */
.block-editor-block-list__layout 
.block-editor-block-list__block.wp-block[data-title="Football Rankings"] {
  /* By default, the blocks are constrained within 650px max-width plus other design specific code */
  max-width: unset;
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  display: grid;
  place-items: center;
  padding: 60px 0;

  /* Button CSS - From: https://getcssscan.com/css-buttons-examples - Some properties really not needed :) */
  button.fetch-data {
    align-items: center;
    background-color: #ffffff;
    border: 1px solid rgb(0 0 0 / 0.1);
    border-radius: 0.25rem;
    box-shadow: rgb(0 0 0 / 0.02) 0 1px 3px 0;
    box-sizing: border-box;
    color: rgb(0 0 0 / 0.85);
    cursor: pointer;
    display: inline-flex;
    font-family: system-ui, -apple-system, system-ui, "Helvetica Neue", Helvetica, Arial, sans-serif;
    font-size: 16px;
    font-weight: 600;
    justify-content: center;
    line-height: 1.25;
    margin: 0;
    min-height: 3rem;
    padding: calc(0.875rem - 1px) calc(1.5rem - 1px);
    position: relative;
    text-decoration: none;
    transition: all 250ms;
    user-select: none;
    -webkit-user-select: none;
    touch-action: manipulation;
    vertical-align: baseline;
    width: auto;
    &:hover,
    &:focus {
      border-color: rgb(0, 0, 0, 0.15);
      box-shadow: rgb(0 0 0 / 0.1) 0 4px 12px;
      color: rgb(0, 0, 0, 0.65);
    }
    &:hover {
      transform: translateY(-1px);
    }
    &:active {
      background-color: #f0f0f1;
      border-color: rgb(0 0 0 / 0.15);
      box-shadow: rgb(0 0 0 / 0.06) 0 2px 4px;
      color: rgb(0 0 0 / 0.65);
      transform: translateY(0);
    }
  }
}
Front-end styles
/* Front-end block styles */
.wp-block-post-content .wp-block-football-rankings-league-table {
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  max-width: unset;
  display: grid;
  place-items: center;
}

#league-standings {
  width: 900px;
  margin: 60px 0;
  max-width: unset;
  font-size: 16px;
  .header {
    display: grid;
    gap: 1em;
    padding: 10px;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
    color: white;
    font-size: 16px;
    font-weight: 600;
    background-color: transparent;
    background-repeat: no-repeat;
    background-size: contain;
    background-position: right;

    .stats {
      display: flex;
      gap: 15px;
      & > div {
        width: 30px;
      }
    }
  }
}
.league-table {
  background: white;
  box-shadow:
    rgba(50, 50, 93, 0.25) 0px 2px 5px -1px,
    rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
  padding: 1em;
  .position {
    width: 20px;
  }
  .team {
    display: grid;
    gap: 1em;
    padding: 10px 0;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
  }
  .team:not(:last-child) {
    border-bottom: 1px solid lightgray;
  }
  .team-logo img {
    width: 30px;
    top: 3px;
    position: relative;
  }
  .stats {
    display: flex;
    gap: 15px;
    & > div {
      width: 30px;
      text-align: center;
    }
  }
  .form-history {
    display: flex;
    gap: 5px;
    & > div {
      width: 25px;
      height: 25px;
      text-align: center;
      border-radius: 3px;
      font-size: 15px;
    }
    & .result-W {
      background: #347d39;
      color: white;
    }
    & .result-D {
      background: gray;
      color: white;
    }
    & .result-L {
      background: lightcoral;
      color: white;
    }
  }
}

We add this to src/style.scss, which takes care of the styling in both the editor and the front end. I won’t be able to share the demo URL since it would require editor access, but I have recorded a video for you to see the demo:


Pretty neat, right? Now we have a fully functioning block that not only renders on the front end, but also fetches API data and renders right there in the Block Editor — with a refresh button to boot!

But if we want to take full advantage of the WordPress Block Editor, we ought to consider mapping some of the block’s UI elements to block controls for things like setting color, typography, and spacing. That’s a nice next step in the block development learning journey.
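As a hedged sketch of that next step, block supports can be opted into from the block.json metadata. The keys below come from the block supports API; which ones make sense for this block is a design choice:

"supports": {
  "color": { "background": true, "text": true },
  "typography": { "fontSize": true },
  "spacing": { "padding": true }
}

WordPress then renders the matching controls in the block sidebar and applies the chosen values for us.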


Rendering External API Data in WordPress Blocks on the Back End originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Rendering External API Data in WordPress Blocks on the Front End

There’ve been some new tutorials popping up here on CSS-Tricks for working with WordPress blocks. One of them is an introduction to WordPress block development, and it’s a good place to learn what blocks are and how to register them in WordPress for use in pages and posts.

While the block basics are nicely covered in that post, I want to take it another step forward. You see, in that article, we learned the difference between rendering blocks in the back-end WordPress Block Editor and rendering them on the front-end theme. The example was a simple Pullquote Block that rendered different content and styles on each end.

Let’s go further and look at using dynamic content in a WordPress block. More specifically, let’s fetch data from an external API and render it on the front end when a particular block is dropped into the Block Editor.

We’re going to build a block that outputs data that shows soccer (er, football) rankings pulled from Api-Football.

An ordered set of football team rankings showing team logos, names, and game results.
This is what we’re working toward together.

There’s more than one way to integrate an API with a WordPress block! Since the article on block basics has already walked through the process of making a block from scratch, we’re going to simplify things by using the @wordpress/create-block package to bootstrap our work and structure our project.

Initializing our block plugin

First things first: let’s spin up a new project from the command line:

npx @wordpress/create-block football-rankings

I normally would kick a project like this off by making the files from scratch, but kudos to the WordPress Core team for this handy utility!

Once the project folder has been created by the command, we technically have a fully-functional WordPress block registered as a plugin. So, let’s go ahead and drop the project folder into the wp-content/plugins directory where you have WordPress installed (probably best to be working in a local environment), then log into the WordPress admin and activate it from the Plugins screen.

Now that our block is initialized, installed, and activated, go ahead and open up the project folder at /wp-content/plugins/football-rankings. You’re going to want to cd there from the command line as well so we can continue development.

These are the only files we need to concentrate on at the moment:

  • edit.js
  • index.js
  • football-rankings.php

The other files in the project are important, of course, but are inessential at this point.

Reviewing the API source

We already know that we’re using Api-Football which comes to us courtesy of RapidAPI. Fortunately, RapidAPI has a dashboard that automatically generates the required scripts we need to fetch the API data for the 2021 Premier League Standings.

A dashboard interface with three columns showing code and data from an API source.
The RapidAPI dashboard

If you want to have a look at the JSON structure, you can generate a visual representation with JSONCrack.

Fetching data from the edit.js file

I am going to wrap the RapidAPI code inside a React useEffect() hook with an empty dependency array so that it runs only once when the page is loaded. This way, we prevent WordPress from calling the API each time the Block Editor re-renders. You can check that using wp.data.subscribe() if you care to.

Here’s the code where I am importing useEffect(), then wrapping it around the fetch() code that RapidAPI provided:

/**
* The edit function describes the structure of your block in the context of the
* editor. This represents what the editor will render when the block is used.
*
* @see https://developer.wordpress.org/block-editor/reference-guides/block-api/block-edit-save/#edit
*
* @return {WPElement} Element to render.
*/

import { useEffect } from "@wordpress/element";
import { useBlockProps } from "@wordpress/block-editor";
import { __ } from "@wordpress/i18n";

export default function Edit(props) {
  const { attributes, setAttributes } = props;

  useEffect(() => {
    const options = {
      method: "GET",
      headers: {
        "X-RapidAPI-Key": "Your Rapid API key",
        "X-RapidAPI-Host": "api-football-v1.p.rapidapi.com",
      },
    };

    fetch("https://api-football-v1.p.rapidapi.com/v3/standings?season=2021&league=39", options)
      .then( ( response ) => response.json() )
      .then( ( response ) => {
        let newData = { ...response };
        setAttributes( { data: newData } );
        console.log( "Attributes", attributes );
      })
      .catch((err) => console.error(err));
  }, []);

  return (
    <p { ...useBlockProps() }>
      { __( "Standings loaded on the front end", "external-api-gutenberg" ) }
    </p>
  );
}

Notice that I have left the return function pretty much intact, but have included a note that confirms the football standings are rendered on the front end. Again, we’re only going to focus on the front end in this article — we could render the data in the Block Editor as well, but we’ll leave that for another article to keep things focused.

Storing API data in WordPress

Now that we are fetching data, we need to store it somewhere in WordPress. This is where the attributes.data object comes in handy. We are defining the data.type as an object since the data is fetched and formatted as JSON. Make sure you don’t use any other type, or else WordPress won’t save the data, nor will it throw any error for you to debug.

We define all this in our index.js file:

registerBlockType( metadata.name, {
  edit: Edit,
  attributes: {
    data: {
      type: "object",
    },
  },
  save,
} );

OK, so WordPress now knows that the RapidAPI data we’re fetching is an object. If we open a new post draft in the WordPress Block Editor and save the post, the data is now stored in the database. In fact, we can see it in the wp_posts.post_content field if we open the site’s database in phpMyAdmin, Sequel Pro, Adminer, or whatever tool you use.

Showing a large string of JSON output in a database table.
API output stored in the WordPress database

Outputting JSON data in the front end

There are multiple ways to output the data on the front end. The way I’m going to show you takes the attributes that are stored in the database and passes them as a parameter through the render_callback function in our football-rankings.php file.

I like keeping a separation of concerns, so the way I do this is to add two new files to the block plugin’s build folder: frontend.js and frontend.css (you can create a frontend.scss file in the src directory which compiles to CSS in the build directory). This way, the back-end and front-end code are separate, and the football-rankings.php file is a little easier to read.

Referring back to the introduction to WordPress block development, there are editor.css and style.css files for back-end and shared styles between the front and back end, respectively. By adding frontend.scss (which compiles to frontend.css), I can isolate styles that are only intended for the front end.

Before we worry about those new files, here’s how we call them in football-rankings.php:

/**
* Registers the block using the metadata loaded from the `block.json` file.
* Behind the scenes, it registers also all assets so they can be enqueued
* through the block editor in the corresponding context.
*
* @see https://developer.wordpress.org/reference/functions/register_block_type/
*/
function create_block_football_rankings_block_init() {
  register_block_type( __DIR__ . '/build', array(
    'render_callback' => 'render_frontend'
  ));
}
add_action( 'init', 'create_block_football_rankings_block_init' );

function render_frontend($attributes) {
  if( !is_admin() ) {
    wp_enqueue_script( 'football_rankings', plugin_dir_url( __FILE__ ) . '/build/frontend.js');
    wp_enqueue_style( 'football_rankings', plugin_dir_url( __FILE__ ) . '/build/frontend.css' );
  }
  
  ob_start(); ?>

  <div class="football-rankings-frontend" id="league-standings">
    <div class="data">
      <pre>
        <?php echo wp_json_encode( $attributes ) ?>
      </pre>
    </div>
    <div class="header">
      <div class="position">Rank</div>
      <div class="team-logo">Logo</div>
      <div class="team-name">Team name</div>
      <div class="stats">
        <div class="games-played">GP</div>
        <div class="games-won">GW</div>
        <div class="games-drawn">GD</div>
        <div class="games-lost">GL</div>
        <div class="goals-for">GF</div>
        <div class="goals-against">GA</div>
        <div class="points">Pts</div>
      </div>
      <div class="form-history">Last 5 games</div>
    </div>
    <div class="league-table"></div>
  </div>

  <?php return ob_get_clean();
}

Since I am using the render_callback() method for the attributes, I am going to handle the enqueue manually, just like the Block Editor Handbook suggests. That’s contained in the !is_admin() condition, which enqueues the two files only outside the editor screen.

Now that we have two new files we’re calling, we’ve gotta make sure we are telling npm to compile them. So, do that in package.json, in the scripts section:

"scripts": {
  "build": "wp-scripts build src/index.js src/frontend.js",
  "format": "wp-scripts format",
  "lint:css": "wp-scripts lint-style",
  "lint:js": "wp-scripts lint-js",
  "packages-update": "wp-scripts packages-update",
  "plugin-zip": "wp-scripts plugin-zip",
  "start": "wp-scripts start src/index.js src/frontend.js"
},

Another way to include the files is to define them in the block metadata contained in our block.json file, as noted in the introduction to block development:

"viewScript": [ "file:./frontend.js", "example-shared-view-script" ],
"style": [ "file:./frontend.css", "example-shared-style" ],

The only reason I’m going with the package.json method is because I am already making use of the render_callback() method.

Rendering the JSON data

In the rendering part, I am concentrating only on a single block. Generally speaking, you would want to target multiple blocks on the front end. In that case, you need to make use of document.querySelectorAll() with a selector specific to the block.
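As a rough sketch, assuming each block instance renders the .football-rankings-frontend wrapper from the markup above:

// Render each block instance separately instead of relying on a single ID
document.querySelectorAll( ".football-rankings-frontend" ).forEach( ( blockEl ) => {
  // Scope all lookups to this particular block instance
  const dataEl = blockEl.querySelector( ".data pre" );
  const tableEl = blockEl.querySelector( ".league-table" );
  const dataJSON = JSON.parse( dataEl.innerHTML );
  // ...build the rankings table inside tableEl, as shown below
} );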

I’m basically going to wait for the window to load and grab data for a few key objects from JSON and apply them to some markup that renders them on the front end. I am also going to convert the attributes data to a JSON object so that it is easier to read through the JavaScript and set the details from JSON to HTML for things like the football league logo, team logos, and stats.

The “Last 5 games” column shows the result of a team’s last five matches. I have to manually alter the data for it since the API data is in a string format. Converting it to an array can help make use of it in HTML as a separate element for each of a team’s last five matches.

import "./frontend.scss";

// Wait for the window to load
window.addEventListener( "load", () => {
  // The code output
  const dataEl = document.querySelector( ".data pre" ).innerHTML;
  // The parent rankings element
  const tableEl = document.querySelector( ".league-table" );
  // The table headers
  const tableHeaderEl = document.querySelector( "#league-standings .header" );
  // Parse JSON for the code output
  const dataJSON = JSON.parse( dataEl );
  // Print a little note in the console
  console.log( "Data from the front end", dataJSON );
  
  // All the teams 
  let teams = dataJSON.data.response[ 0 ].league.standings[ 0 ];
  // The league logo
  let leagueLogoURL = dataJSON.data.response[ 0 ].league.logo;
  // Apply the league logo as a background image inline style
  tableHeaderEl.style.backgroundImage = `url( ${ leagueLogoURL } )`;
  
  // Loop through the teams
  teams.forEach( ( team, index ) => {
    // Make a div for each team
    const teamDiv = document.createElement( "div" );
    // Set up the columns for match results
    const { played, win, draw, lose, goals } = team.all;

    // Add a class to the parent rankings element
    teamDiv.classList.add( "team" );
    // Insert the following markup and data in the parent element
    teamDiv.innerHTML = `
      <div class="position">
        ${ index + 1 }
      </div>
      <div class="team-logo">
        <img src="${ team.team.logo }" />
      </div>
      <div class="team-name">${ team.team.name }</div>
      <div class="stats">
        <div class="games-played">${ played }</div>
        <div class="games-won">${ win }</div>
        <div class="games-drawn">${ draw }</div>
        <div class="games-lost">${ lose }</div>
        <div class="goals-for">${ goals.for }</div>
        <div class="goals-against">${ goals.against }</div>
        <div class="points">${ team.points }</div>
      </div>
      <div class="form-history"></div>
    `;
    
    // Split a team’s last five match results into an array
    const form = team.form.split( "" );
    
    // Loop through the match results
    form.forEach( ( result ) => {
      // Make a div for each result
      const resultEl = document.createElement( "div" );
      // Add a class to the div
      resultEl.classList.add( "result" );
      // Evaluate the results
      resultEl.innerText = result;
      // If the result is a win
      if ( result === "W" ) {
        resultEl.classList.add( "win" );
      // If the result is a draw
      } else if ( result === "D" ) {
        resultEl.classList.add( "draw" );
      // If the result is a loss
      } else {
        resultEl.classList.add( "lost" );
      }
      // Append the results to the column
      teamDiv.querySelector( ".form-history" ).append( resultEl );
    });

    tableEl.append( teamDiv );
  });
});

As far as styling goes, you’re free to do whatever you want! If you want something to work with, I have a full set of styles you can use as a starting point.

I styled things in SCSS since the @wordpress/create-block package supports it out of the box. Run npm run start in the command line to watch the SCSS files and compile them to CSS on save. Alternatively, you can use npm run build on each save to compile the SCSS and build the rest of the plugin bundle.

View SCSS
body {
  background: linear-gradient(to right, #8f94fb, #4e54c8);
}

.data pre {
  display: none;
}

.header {
  display: grid;
  gap: 1em;
  padding: 10px;
  grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
  align-items: center;
  color: white;
  font-size: 16px;
  font-weight: 600;
  background-repeat: no-repeat;
  background-size: contain;
  background-position: right;
}

.football-rankings-frontend#league-standings {
  width: 900px;
  margin: 60px 0;
  max-width: unset;
  font-size: 16px;

  .header {
    .stats {
      display: flex;
      gap: 15px;

      & > div {
        width: 30px;
      }
    }
  }
}

.league-table {
  background: white;
  box-shadow:
    rgba(50, 50, 93, 0.25) 0px 2px 5px -1px,
    rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
  padding: 1em;

  .position {
    width: 20px;
  }

  .team {
    display: grid;
    gap: 1em;
    padding: 10px 0;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
  }

  .team:not(:last-child) {
    border-bottom: 1px solid lightgray;
  }

  .team-logo img {
    width: 30px;
  }

  .stats {
    display: flex;
    gap: 15px;
  }

  .stats > div {
    width: 30px;
    text-align: center;
  }

  .form-history {
    display: flex;
    gap: 5px;
  }

  .form-history > div {
    width: 25px;
    height: 25px;
    text-align: center;
    border-radius: 3px;
    font-size: 15px;
  }

  .form-history .win {
    background: #347d39;
    color: white;
  }

  .form-history .draw {
    background: gray;
    color: white;
  }

  .form-history .lost {
    background: lightcoral;
    color: white;
  }
}

Here’s the demo!

Check that out — we just made a block plugin that fetches data and renders it on the front end of a WordPress site.

We found an API, fetch()-ed data from it, saved it to the WordPress database, parsed it, and applied it to some HTML markup to display on the front end. Not bad for a single tutorial, right?

Again, we can do the same sort of thing so that the rankings render in the Block Editor in addition to the theme’s front end. But hopefully keeping this focused on the front end shows you how fetching data works in a WordPress block, and how the data can be structured and rendered for display.


Rendering External API Data in WordPress Blocks on the Front End originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Delightful UI Animations With Shared Element Transitions API (Part 2)

In the first part of this article, we covered Shared Element Transitions API (SET API) and how we can use it to effortlessly create complex transitions for various UI elements, which would usually require a lot of JavaScript code or an animation library to achieve.

But what about smooth and delightful transition animations between individual pages? This is probably one of the most often requested features in the past few years because even with all the frameworks like React and Svelte and animation libraries like GSAP and Framer Motion, transitions between pages are still really difficult to do.

In this article, we’re going to showcase same-document page transitions commonly found in Single Page Applications and talk about the future of the Shared Element Transitions API for cross-document (Multi Page Application) transitions. I’ll also showcase some awesome React, Astro, and Svelte implementation examples from the dev community.

Note: Shared Element Transitions API is currently supported only in Chrome version 104+ and Canary with the document-transition flag enabled. Examples will be accompanied by a video, so you can easily follow along with the article if you don’t have the required browser installed.

In case you haven’t checked out my previous article on the topic, here is a quick rundown of this exciting new API so you can follow along with the article.

Shared Element Transitions API

With Shared Element Transitions API, the browser does a lot of heavy lifting when it comes to animations, allowing us to create complex UI animations in a more streamlined way. The main part of the API is a JavaScript function that takes screenshots of the UI state before and after the DOM update and applies a crossfade animation:

const moveTransition = document.createDocumentTransition();
await moveTransition.start(() => {
  /* Take screenshot of an outgoing state */
  /* Update the DOM - move item from one container to another */
  targetContainer.append(activeItem);
  /* Take screenshot of an incoming state and crossfade the states */
});

Just by calling the start function, we get a neat and simple crossfade animation between the outgoing and incoming states.

As you can see, we can still navigate between the pages; the DOM is updated with the new content, and the URL in the browser changes. We are intercepting the browser’s default navigation behavior and handling the page loading and DOM updates all by ourselves while we remain on the same page.
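As a rough sketch of that interception (the click delegation and the #content container are my assumptions, not the demo’s exact code):

document.addEventListener("click", async (event) => {
  const link = event.target.closest("a");
  if (!link) return;
  event.preventDefault();

  // Fetch and parse the target page ourselves
  const html = await (await fetch(link.href)).text();
  const newDocument = new DOMParser().parseFromString(html, "text/html");

  const transition = document.createDocumentTransition();
  await transition.start(() => {
    // Swap the page content inside the transition callback
    document.querySelector("#content").replaceChildren(
      ...newDocument.querySelector("#content").childNodes
    );
    // Update the URL without a full page load
    history.pushState(null, "", link.href);
  });
});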

By just passing the DOM update function as a callback to the SET API start function, we get a neat crossfade transition between pages right out of the box!

With just a few lines of CSS and JavaScript, we’ve created this beautiful transition animation. All we had to do was identify the shared element (the item image) on a clicked link using a page-transition-tag and signal the browser to keep track of its dimensions and position.

On backward navigation, we get only the default crossfade animation on the shared element, and that comes for free: the selector we used, document.querySelector(`a[href="${url.pathname}"] .card__image`), runs on the current page, so when we navigate back to the items list page the tag doesn’t get applied, and the browser cannot match the shared element.

If we want the same animation on the shared element when navigating back to the item list page, we have to apply the tag to the correct image element in the grid after we fetch the contents of the target page, roughly like the sketch below.
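The .card__image class and the product-image tag name are carried over from the snippets above, while the helper function itself is hypothetical:

/* Hypothetical helper: tag the grid image that corresponds to the
   item page we are navigating away from, so the browser can match
   the shared element on backward navigation */
function tagSharedElement(itemPathname) {
  const gridImage = document.querySelector(
    `a[href="${itemPathname}"] .card__image`
  );

  if (gridImage) {
    /* page-transition-tag signals the browser to track this
       element's position and size across the DOM update */
    gridImage.style.pageTransitionTag = "product-image";
  }
}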

Customizing Page-Transition Animation With CSS

Let’s use CSS animation properties to fine-tune the crossfade and item image animation. We want the crossfade animation to be quicker and more subtle, and the more elaborate image animation to be more noticeable, with a nice custom easing function:

/* Speed up crossfade animations */
::page-transition-outgoing-image(*),
::page-transition-incoming-image(*) {
    animation-timing-function: ease-in-out;
    animation-duration: 0.25s;
}

/* Fine-tune shared element position and dimension animation */
::page-transition-container(product-image) {
    animation-timing-function: cubic-bezier(0.22, 1, 0.36, 1);
    animation-duration: 0.5s;
}

We also need to keep in mind that some users might prefer browsing the site without complex animations involving a lot of movement, so we want to either turn the animations off or provide a more appropriate alternative:

@media (prefers-reduced-motion) {
  ::page-transition-container(*),
  ::page-transition-outgoing-image(*),
  ::page-transition-incoming-image(*) {
    /* Or add appropriate animation alternatives */
    animation: none !important; 
  }
}

Crossfade animations now run faster, and the sizing and position animation runs a bit slower and with a different timing function.

In this example, I’ve only showcased the code snippets relevant to creating the page transition with the SET API. If you are curious about the complete source code or want to check the demo in detail, feel free to check out the project repository and inspect the demo page.

Upcoming Cross-document Transitions

Proper Shared Element Transitions API support for MPAs is still a work in progress, but we can get a general idea of how it’s supposed to work from a rough draft by WICG.

For same-document transitions, we used the pageTransition.start(/* … */) function to let the browser keep track of the DOM updates. For cross-document transitions, we would need to run the transition request function on the outgoing page before it’s unloaded and then run the transition on the incoming page once it’s ready.

The following code snippets are copied from the WICG draft:

// In the outgoing page
document.addEventListener("pagehide", (event) => {
  if (!event.isSameOriginDocumentSwap) return;
  if (looksRight(event.nextPageURL)) {
    // This signals that the outgoing elements should be captured.
    event.pleaseLetTheNextPageDoATransitionPlease();
  }
});
// In the incoming page
document.addEventListener("beforepageshow", (event) => {
  if (
    event.previousPageWantsToDoATransition &&
    looksRight(event.previousPageURL)
  ) {
    const transitionReadyPromise = event.yeahLetsDoAPageTransition();
  }
});

Shared Element Transitions API for cross-document transitions would also need to be heavily restricted for security reasons.

Framework Implementation Examples

Over the past few weeks, I’ve seen some jaw-dropping examples of using Shared Element Transitions API for page navigation, added with progressive enhancement to various frameworks like React and Svelte.

Adding page transitions with the SET API in frameworks can be tricky. In our example, we had control over the DOM update functions, but this is not usually the case with front-end frameworks. Hopefully, as this API gets proper browser support and traction in the dev community, frameworks and router libraries will follow suit and provide better ways to integrate the Shared Element Transitions API into navigation.

So, I would like to highlight some awesome examples of framework implementations from the community, especially those that provide reusable functions and hooks.

React / Preact

Jake Archibald created a great video playlist example using Preact, TypeScript, and a custom page transition hook. This example uses a custom router implementation to apply class names to the html element to customize the animation and toggle different types of animation depending on the navigation direction.

Astro

Maxi Ferreira implemented page transitions with Astro, similarly to our example using the Navigation API, and explained the process in great detail while building a stunning movie database app.

He also worked with Ben Myers on this awesome guitar shop example with cool animations on both the guitar image and item background, which expands into a full description background container. This is also a good example of how to create elaborate but seamless and tasteful animations that add to the user experience.

Svelte

Moving on to Svelte, Geoff Rich built this neat fruit nutritional data app and explained the whole process in great detail in his article. SvelteKit has a built-in navigating store, and Geoff created a handy util function for intercepting page transitions and applying the Shared Element Transitions API depending on browser support.
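Geoff’s actual util is described in his article; as a rough idea of the pattern (this sketch is mine, not his code), a feature-detecting wrapper can be as small as this:

/* Run the DOM update inside a document transition when the API is
   available, and as a plain update otherwise */
async function withPageTransition(updateDOM) {
  if (!document.createDocumentTransition) {
    await updateDOM();
    return;
  }

  const transition = document.createDocumentTransition();
  await transition.start(updateDOM);
}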

Conclusion

Shared Element Transitions API allows us to implement complex UI animations not only at the component level but also at the page level. Same-document transitions in Single Page Applications can be implemented today with progressive enhancement, and we can achieve impressive app-like page transitions with just a few lines of JavaScript and CSS, all without a JavaScript animation library! The more complex cross-document transitions for Multi Page Applications are still a work in progress, and I can see them being a massive game-changer once released and supported in more browsers.

Judging from the impressive examples we’ve seen online, some of which are featured in this article, we can safely say that the community is more than excited about this API. If you’ve built something awesome using Shared Element Transitions API, feel free to reach out on Twitter or LinkedIn and share your work.

Many thanks to Nikola Vranesic for reviewing the article for technical accuracy.

References