How to Automate Project Versioning and Releases with Continuous Deployment

Semantically versioning your software makes it easier to maintain and to communicate changes to the people who use it. Doing this by hand, however, is tedious: even after manually merging the PR, tagging the commit, and pushing the release, you still have to write release notes. There are a lot of different steps, and many of them are repetitive and time-consuming.

Let’s look at how we can make a more efficient flow and completely automate our release process by plugging semantic versioning into a continuous deployment process.

Semantic versioning

A semantic version is a number that consists of three numbers separated by a period. For example, 1.4.10 is a semantic version. Each of the numbers has a specific meaning.

Major change

The first number is a Major change, meaning it has a breaking change.

Minor change

The second number is a Minor change, meaning it adds functionality.

Patch change

The third number is a Patch change, meaning it includes a bug fix.

It is easier to look at semantic versioning as Breaking . Feature . Fix. It is a more precise way of describing a version number that doesn’t leave any room for interpretation.

Commit format

To make sure that we are releasing the correct version — by correctly incrementing the semantic version number — we need to standardize our commit messages. By having a standardized format for commit messages, we can know when to increment which number and easily generate a release note. We are going to be using the Angular commit message convention, although we can change this later if you prefer something else.

It goes like this:

<header>
<optional body>
<optional footer>

Each commit message consists of a header, a body, and a footer.

The commit header

The header is mandatory. It has a special format that includes a type, an optional scope, and a subject.
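
In other words, the full header follows this pattern:

<type>(<scope>): <subject>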

The header’s type is a mandatory field that tells what impact the commit contents have on the next version. It has to be one of the following types:

  • feat: New feature
  • fix: Bug fix
  • docs: Change to the documentation
  • style: Changes that do not affect the meaning of the code (e.g. white-space, formatting, missing semi-colons, etc.)
  • refactor: Changes that neither fix a bug nor add a feature
  • perf: Change that improves performance
  • test: Add missing tests or corrections to existing ones
  • chore: Changes to the build process or auxiliary tools and libraries, such as generating documentation

The scope is a grouping property that specifies what subsystem the commit is related to, like an API, or the dashboard of an app, or user accounts, etc. If the commit modifies more than one subsystem, then we can use an asterisk (*) instead.

The header subject should hold a short description of what has been done. There are a few rules when writing one:

  • Use the imperative, present tense (e.g. “change” instead of “changed” or “changes”).
  • Lowercase the first letter of the first word.
  • Leave out a period (.) at the end.
  • Avoid writing subjects longer than 80 characters.

The commit body

Just like the header subject, use the imperative, present tense for the body. It should include the motivation for the change and contrast this with previous behavior.

The footer should contain any information about breaking changes and is also the place to reference issues that this commit closes.

Breaking change information should start with BREAKING CHANGE:, followed by a space or two newlines; the rest of the commit message then describes the breaking change.
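
Putting all of that together, a hypothetical commit following this convention could look like the following (the scope, numbers, and issue reference are made up for illustration):

feat(api): add pagination to the list endpoint

Add limit and offset query parameters so clients can request results in smaller pages instead of downloading the full collection.

BREAKING CHANGE: the endpoint now returns at most 100 items per request unless a higher limit is explicitly requested.

Closes #42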

Enforcing a commit format

Working on a team is always a challenge when you have to standardize anything that everyone has to conform to. To make sure that everybody uses the same commit standard, we are going to use Commitizen.

Commitizen is a command-line tool that makes it easier to use a consistent commit format. Making a repo Commitizen-friendly means that anyone on the team can run git cz and get a detailed prompt for filling out a commit message.
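
If the repo is not Commitizen-friendly yet, one common way to set it up (assuming the conventional-changelog adapter, which matches the Angular convention above) looks like this:

# Install Commitizen as a dev dependency
npm install --save-dev commitizen

# Initialize it with the conventional-changelog adapter
npx commitizen init cz-conventional-changelog --save-dev --save-exact

Team members can then run npx cz (or git cz, if Commitizen is installed globally) and fill out the prompt instead of writing the commit message by hand.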

Generating a release

Now that we know our commits follow a consistent standard, we can work on generating a release and release notes. For this, we will use a package called semantic-release. It is a well-maintained package with great support for multiple continuous integration (CI) platforms.

semantic-release is the key to our journey, as it will perform all the necessary steps to a release, including:

  1. Figuring out the last version you published
  2. Determining the type of release based on commits added since the last release
  3. Generating release notes for commits added since the last release
  4. Updating a package.json file and creating a Git tag that corresponds to the new release version
  5. Pushing the new release

Any CI will do. For this article we are using GitHub Actions, because I love using a platform’s existing features before reaching for a third-party solution.

There are multiple ways to install semantic-release, but we’ll use semantic-release-cli, as it takes things step-by-step. Let’s run npx semantic-release-cli setup in the terminal, then fill out the interactive wizard.

The script will do a couple of things:

  • It runs npm adduser with the NPM information provided to generate a .npmrc.
  • It creates a GitHub personal token.
  • It updates package.json.

After the CLI finishes, it will add semantic-release to the package.json, but it won’t actually install it. Run npm install to install it as well as the other project dependencies.
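
The workflow we are about to write calls npm run release, so make sure package.json ends up with a script for it. If the CLI did not add one, a minimal sketch (assuming semantic-release v16 or newer and releases from the main branch) looks like this:

"scripts": {
  "release": "semantic-release"
},
"release": {
  "branches": ["main"]
}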

The only thing left for us is to configure the CI via GitHub Actions. We need to manually add a workflow that will run semantic-release. Let’s create a release workflow in .github/workflows/release.yml.

name: Release
on:
  push:
    branches:
      - main
jobs:
  release:
    name: Release
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: Install dependencies
        run: npm ci
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # If you need an NPM release, you can add the NPM_TOKEN
        #   NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npm run release

Steffen Brewersdorff already does an excellent job covering CI with GitHub Actions, but let’s just briefly go over what’s happening here.

This waits for a push to the main branch to happen and only then runs the pipeline. Feel free to change this to work on one, two, or all branches.

on:
  push:
    branches:
      - main

Then, it pulls the repo with checkout and installs Node so that npm is available to install the project dependencies. A test step could go here, if that’s something you prefer.

- name: Checkout
  uses: actions/checkout@v2
- name: Setup Node.js
  uses: actions/setup-node@v1
  with:
    node-version: 12
- name: Install dependencies
  run: npm ci
# You can add a test step here
# - name: Run Tests
#   run: npm test

Finally, let semantic-release do all the magic:

- name: Release
  run: npm run release

Push the changes and look at the actions:

The GitHub Actions screen for a project, showing a file navigator on the left and the actions it includes on the right against a black background.

Now each time a commit is made (or merged) to a specified branch, the action will run and make a release, complete with release notes.

The GitHub releases screen for a project, showing versions 1.0.0 and 2.0.0, both with release notes outlining features and breaking changes.

Release party!

We have successfully created a CI/CD semantic release workflow! Not that painful, right? The setup is relatively simple and there are no downsides to having a semantic release workflow. It only makes tracking changes a lot easier.

semantic-release has a lot of plugins that enable even more advanced automations. For example, there’s even a Slack release bot that can post to a project channel once the project has been successfully deployed. No need to head over to GitHub to find updates!



Optimize Images with a GitHub Action

I was playing with GitHub Actions the other day. Such a nice tool! Short story: you can have it run code for you, like run your build processes, tests, and deployments. But it’s just configuration files that can run whatever you need. There is a whole marketplace of Actions wanting to do work for you.

What I wanted to do was run code to do image optimization. That way I never have to think about it. Any image in the repo has been optimized.

There is an action for this already, Calibre’s image-actions, which we’ll leverage here. You’ll also need to ensure Actions is enabled for the repo. I know in my main organization we only flip on Actions on a per-repo basis, which is one of the options.

Then you make a file at .github/workflows/optimize-images.yml. That’s where you can configure this action. All your actions can have separate files, if you want them to. I made this a separate file because (1) it only works with “pushes to pull requests,” so if you have other actions that run on different triggers, they won’t mix nicely, and (2) that’s what is in their docs and looks like the suggested usage.

name: Optimize images
on: pull_request
jobs:
  build:
    name: calibreapp/image-actions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master

      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}
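
The githubToken is the only required input here. The action also takes optional inputs for tuning the compression; the names below are my recollection of the image-actions README, so treat them as assumptions and verify them before relying on this sketch:

      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}
          # Optional settings; double-check the exact input names and
          # value ranges in the calibreapp/image-actions README.
          jpegQuality: '80'
          pngQuality: '80'
          webpQuality: '80'
          ignorePaths: 'node_modules/**,vendor/**'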

Now if you make a pull request, you’ll see it run:

That successful run then leaves a comment on the pull request saying what it was able to optimize:

It will literally re-commit those files back to the pull request as well, so if you’re going to stay on the pull request and keep working, you’ll need to pull again before you can push, so that you have the optimized images locally.

I can look at that automatic commit and see the difference:

The commit preview in Git Tower.

Now I can merge the PR knowing all is well:

Pretty cool. Is optimizing your images locally particularly hard? No. Is never having to think about it again better? Yeah. You’re taking on a smidge of technical debt here, but reducing it elsewhere, which is a very fair trade, at least in my book.



A Community-Driven Site with Eleventy: Preparing for Contributions

I’ve recently found myself reaching for Eleventy (aka 11ty) above all other tools when I want to develop a website. It’s hard to beat a static site generator that provides advanced templating opportunities while otherwise getting out of your way and allowing you to just create.

One of those sites is Style Stage, a modern CSS showcase styled by community contributions. Eleventy was perfect for this community-driven project in several ways:

  • It builds exceptionally fast, both locally and on a production host
  • It’s un-opinionated about how to construct templates
  • It can create any file type, with complete control over how and where files are rendered
  • It can intermix templating languages, such as HTML, Markdown, and Nunjucks
  • It’s highly performant because it compiles to static HTML with no required dependencies for production

The number one reason Eleventy is a great choice for creating a community-driven site is the ability to dynamically create site pages from data sources. We’ll review how to use this feature and more when we create our sample community site.

Article Series:

  1. Preparing for Contributions (You are here!)
  2. Building the Site (Coming tomorrow!)

What goes into creating a community-driven site?

In the not-so-distant past, creating a community-driven site could potentially be a painful process involving CMS nightmares trying to create contributor workflows. Armed with Eleventy and a few other modern tools, this is now nearly fully automatable with a minimum of oversight.

Before we get to inviting contributors, we’ve got some work to do ourselves.

1. Determine what content contributors will have access to modify

This will guide a lot of the other decisions. In the case of using Eleventy for Style Stage, I created a JSON file that contributors can use to create pull requests to modify and provide their own relevant metadata that’s used to create their pages.

An early version of the JSON file, which initially had an “Example” entry for contributors to reference. This screenshot also shows the first two contributors’ details.
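
To give a sense of the shape such a file can take, here is a hypothetical Eleventy global data file, saved somewhere like _data/contributors.json. Every field name below is invented for illustration rather than taken from the actual Style Stage schema:

[
  {
    "title": "Example Theme",
    "author": "Jane Doe",
    "github": "janedoe",
    "stylesheet": "https://example.com/styles/example-theme.css"
  }
]

Anything placed in the _data/ directory is exposed to every template, which is what makes generating a page per contributor straightforward.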

Perhaps you also want to allow access to include additional assets, or maybe it makes sense to have multiple data files for the ease of categorizing and querying data. Or maybe contributors are able to add Markdown files within a particular directory.

Consider the scope of what contributors can modify or submit, and weigh that against an estimate of your availability to review submissions. This will help enable a successful, manageable community.

GitHub Actions can make it possible to label or close a pull request with invalid files if you need advanced automated screening of incoming content.

2. Create contributor guidelines

Spending time upfront to think through your guidelines can help with your overall plan. You may identify additional needed features, or items that can be automated.

Once your guidelines are prepared, it’s best to include them in a special file in your GitHub repository called CONTRIBUTING.md. The all-caps filename is the expected format. Having this file in place means that when contributors create a pull request or issue, GitHub automatically shows them an extra link in a prompt that asks them to be sure they’ve reviewed the guidelines:

Screenshot courtesy of the GitHub documentation.

How to handle content licensing and author attribution are things that fall into this category. For example, Style Stage releases contributed stylesheets under the CC BY-NC-SA license, but authors retain copyright over original graphics. As part of the build process, the license and author attribution are appended to the styles, and the author’s attribution metadata is updated within the style page template.

You’ll also want to consider policies around acceptable content and what would cause submissions to be rejected. Style Stage states that:

Submissions will be rejected for using obscene, excessively violent, or otherwise distasteful imagery, violating the above guidelines, or other reasons at the discretion of the maintainer.

3. Prepare workflow and automations

While Eleventy takes care of the site build, the other key players enabling Style Stage contributions are Netlify and GitHub.

Contributors submit a pull request to the Style Stage repo on GitHub and, when they do, Netlify creates a deploy preview. This allows contributors to verify that their submission works as expected, and saves me time as the maintainer by not having to pull down submissions to ensure they meet the guidelines.

The status of the Netlify deploy updates in real-time on the pull request review page. Once the last item (“/deploy-preview”) displays “Deploy preview ready!” clicking “Details” will launch the live link to the preview.

All discussion takes place through GitHub. This has the added advantage of public accountability which helps dissuade bad actors.

If the contributor needs to make a change, they can update their pull request or request a re-deploy of the branch preview if it’s a remote asset that has changed. This re-deploy is a very small manual step, and it may not be needed for every PR — or even at all, depending on how you accept contributions.

The last step is the final approval of the PR and merging into the main branch. Once the pull request is merged, Netlify immediately deploys the changes to production.

Eleventy is, of course, a static site generator, and several hosts offer webhooks to trigger a build; Netlify’s build hooks are a good example of that. But if you need to refresh data more often than each time a PR is merged, one option is to use IFTTT or Zapier to set up daily deploys, or deploys based on a variety of other triggers.

Example of completed setup of a daily deploy via webhook from IFTTT
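
Whichever scheduler you use, the trigger itself is just an HTTP POST to a build hook URL that you create in the site’s build settings on Netlify. The hook ID below is a placeholder:

# Trigger a production build of the site; replace the ID with the
# build hook URL that Netlify generates for you.
curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_BUILD_HOOK_ID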

It’s worth noting that what we’re talking about here does limit your contributor audience to having a GitHub account. However, GitHub contributions can be done entirely via the web interface, so it’s very possible to provide guidance so that other users — even those who don’t code — can still participate.

4. Choose a method for contributor and community updates

The first consideration here is to decide how critical it is for contributors to know about updates to your site by evaluating the likely impact of the change.

In the case of Style Stage, the core will be unchanging, but there is some planned optional functionality. I went with a weekly(-ish) newsletter. That way, it is something folks can opt into and there is value for contributors and users alike.

Matthew Ström’s “Using Netlify Forms and Netlify Functions to Build an Email Sign-Up Widget” is a great place to learn how to add subscribers to your newsletter with a static form in Eleventy. It also covers a function for sending the subscriber’s email to Buttondown, a lightweight email service. For an example of how to manage your Buttondown email template and content in Eleventy, review the Style Stage setup which shows how to exclude the newsletter from the published site build.

If you’re only expecting low priority updates, then GitHub’s repo notifications might be sufficient for communication. Creating releases is another way to go. Or, hey, it’s even possible to incorporate notifications on the site itself.

5. Find and engage with potential contributors

Style Stage was an idea that I vetted by tossing out a poll on Twitter. I then put out a “call for contributors” and engaged with responders as well as those who retweeted me. A short timeline also helped find motivated contributors who helped Style Stage avoid launching without any submissions. Many of those contributors became evangelists that introduced Style Stage to even more people. I also promoted a launch livestream which doubled as promotional material.

This is what it means to “engage” with contributors. Creating avenues for engagement and staying engaged with them helps turn casual contributors into “fans” who encourage others to participate.

Remember that the site content is a great place to encourage participation! Style Stage dedicates an entire page to encouraging submissions. If that’s not possible for you, then you might consider using prompts for contributions where it makes sense.

6. Finalize repo settings and include community health files

Finally, ensure that your repository is published publicly and that it includes applicable “community health” files. These are meant to be documents that help establish guidelines, set good expectations with community members, define a code of conduct, and other information that contribute to the overall “health” of the community. There are a bunch of examples, suggestions and tips on how to do this in the GitHub docs.

While there are a half dozen files noted in the documentation, in my experience so far, the three files you’ll need at minimum are:

  • a README.md file at the root of the project that includes the project’s name and a good description of what it is. GitHub will display the contents below the list of files in the repo.
  • a CONTRIBUTING.md file that describes the submission process for contributions. Be explicit as far as what steps are involved and what constitutes a “good” submission.
  • a pull request template. I wouldn’t exactly say this is a mandatory thing, but it’s worth adding to this list because it further solidifies the expectations for submitting contributions. Many templates will even include a checklist that details requirements for approval (a minimal sketch follows this list).
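
For reference, a minimal pull request template, saved as .github/PULL_REQUEST_TEMPLATE.md, might look like the sketch below. The checklist items are placeholders to adapt to your own guidelines:

## Description

<!-- Briefly describe your submission -->

## Checklist

- [ ] I have read CONTRIBUTING.md
- [ ] My submission follows the content guidelines
- [ ] The Netlify deploy preview renders as expected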

Oh, and having a branch protection rule on the main branch is another good idea. You can do this by going to Settings → Branches from the repo and selecting the “Add rule” option. “Require pull request reviews before merging” and “Require review from Code Owners” are the two key settings to enable. You can check the GitHub docs to learn more about this protection.

Coming up next…

What we covered here is a starting point for creating a community-driven site with Eleventy. The point is that there are several things that need to be considered before we jump straight into code. Communities need care and that requires a few steps that help establish an engaged and healthy community.

You’re probably getting anxious to start coding a community site with Eleventy! Well, that’s coming up in the next installment of this two-parter.  Together, we’ll develop an Eleventy starter from scratch that you can extend for your own community (or personal) site.

Article Series:

  1. Preparing for Contributions (You are here!)
  2. Building the Site (Coming tomorrow!)



Top 13 GitHub Alternatives in 2020 [Free and Paid]

If the black cat doesn’t seem cute enough, and you are looking for a reliable yet powerful GitHub alternative, this article unveils some of the top GitHub alternatives you can find today.

Every tool on this list is discussed in detail to help you make a better decision about whether to switch over to another Git platform or stick with GitHub.

Automate Your Development Workflow With GitHub Actions

Hi everybody. My name is Hacene. I am a software developer interested in DevOps. Today, I will show you how to automate a development workflow life cycle using GitHub Actions.

On November 13, 2019, GitHub engineers revealed some news: GitHub Actions now supports CI/CD, and it’s free for public repositories! We can manage and automate our development workflows right in our repository using GitHub Actions. The API supports multiple operating systems (Linux, Windows, macOS…) and different languages.

Effective Microservices CI/CD With GitHub Actions and Ballerina

Introduction

In a microservice architecture, continuous integration and continuous delivery (CI/CD) is critical in creating an agile environment for incorporating incremental changes to your system. This requires every code change you push to your repository to be immediately built, tested, and deployed to your infrastructure. There are many technologies that help make this happen, and it often requires you to set up complex pieces of software to get things done.

How I Learned to Stop Worrying and Love Git Hooks

The merits of Git as a version control system are difficult to contest, but while Git will do a superb job in keeping track of the commits you and your teammates have made to a repository, it will not, in itself, guarantee the quality of those commits. Git will not stop you from committing code with linting errors in it, nor will it stop you from writing commit messages that convey no information whatsoever about the nature of the commits themselves, and it will, most certainly, not stop you from committing poorly formatted code.

Fortunately, with the help of Git hooks, we can rectify this state of affairs with only a few lines of code. In this tutorial, I will walk you through how to implement Git hooks that will only let you make a commit provided that it meets all the conditions that have been set for what constitutes an acceptable commit. If it does not meet one or more of those conditions, an error message will be shown that contains information about what needs to be done for the commit to pass the checks. In this way, we can keep the commit histories of our code bases neat and tidy, and in doing so make the lives of our teammates, and not to mention our future selves, a great deal easier and more pleasant.

As an added bonus, we will also see to it that code that passes all the tests is formatted before it gets committed. What is not to like about this proposition? Alright, let us get cracking.

Prerequisites

In order to be able to follow this tutorial, you should have a basic grasp of Node.js, npm and Git. If you have never heard of something called package.json and git commit -m [message] sounds like code for something super-duper secret, then I recommend that you pay this and this website a visit before you continue reading.

Our plan of action

First off, we are going to install the dependencies that make implementing pre-commit hooks a walk in the park. Once we have our toolbox, we are going to set up three checks that our commit will have to pass before it is made:

  • The code should be free from linting errors.
  • Any related unit tests should pass.
  • The commit message should adhere to a pre-determined format.

Then, if the commit passes all of the above checks, the code should be formatted before it is committed. An important thing to note is that these checks will only be run on files that have been staged for commit. This is a good thing, because if this were not the case, linting the whole code base and running all the unit tests would add quite an overhead time-wise.

In this tutorial, we will implement the checks discussed above for some front-end boilerplate that uses TypeScript, along with Jest for the unit tests and Prettier for the code formatting. The procedure for implementing pre-commit hooks is the same regardless of the stack you are using, so by all means, do not feel compelled to jump on the TypeScript train just because I am riding it; and if you prefer Mocha to Jest, then do your unit tests with Mocha.

Installing the dependencies

First off, we are going to install Husky, which is the package that lets us do whatever checks we see fit before the commit is made. At the root of your project, run:

npm i husky --save-dev

However, as previously discussed, we only want to run the checks on files that have been staged for commit, and for this to be possible, we need to install another package, namely lint-staged:

npm i lint-staged --save-dev

Last, but not least, we are going to install commitlint, which will let us enforce a particular format for our commit messages. I have opted for one of their pre-packaged formats, namely the conventional one, since I think it encourages commit messages that are simple yet to the point. You can read more about it here.

npm install @commitlint/{config-conventional,cli} --save-dev

## If you are on a device that is running Windows
npm install @commitlint/config-conventional @commitlint/cli --save-dev

After the commitlint packages have been installed, you need to create a config that tells commitlint to use the conventional format. You can do this from your terminal using the command below:

echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js

Great! Now we can move on to the fun part, which is to say implementing our checks!

Implementing our pre-commit hooks

Below is an overview of the scripts that I have in the package.json of my boilerplate project. We are going to run two of these scripts out of the box before a commit is made, namely the lint and prettier scripts. You are probably asking yourself why we will not run the test script as well, since we are going to implement a check that makes sure any related unit tests pass. The answer is that you have to be a little bit more specific with Jest if you do not want all unit tests to run when a commit is made.

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
}

As you can tell from the code we added to the package.json file below, creating the pre-commit hooks for the lint and prettier scripts does not get more complicated than telling Husky that before a commit is made, lint-staged needs to be run. Then you tell lint-staged to run the lint and prettier scripts on all staged JavaScript and TypeScript files, and that is it!

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
},
"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "npm run prettier"
  ]
}

At this point, if you set out to anger the TypeScript compiler by passing a string to a function that expects a number and then try to commit this code, our lint check will stop your commit in its tracks and tell you about the error and where to find it. This way, you can correct the error of your ways, and while I think that, in itself, is pretty powerful, we are not done yet!

By adding "jest --bail --coverage --findRelatedTests" to our configuration for lint-staged, we also make sure that the commit will not be made if any related unit tests do not pass. Coupled with the lint check, this is the code equivalent of wearing two safety harnesses while fixing broken tiles on your roof.

What about making sure that all commit messages adhere to the commitlint conventional format? Commit messages are not files, so we can not handle them with lint-staged, since lint-staged only works its magic on files staged for commit. Instead, we have to return to our configuration for Husky and add another hook, in which case our package.json will look like so:

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 "./**/*.{js,ts}" --write"
},
"husky": {
  "hooks": {
    "commit-msg": "commitlint -E HUSKY_GIT_PARAMS",  //Our new hook!
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "jest --bail --coverage --findRelatedTests", 
    "npm run prettier"
  ]
}

If your commit message does not follow the commitlint conventional format, you will not be able to make your commit: so long, poorly formatted and obscure commit messages!

If you get your house in order and write some code that passes both the linting and unit test checks, and your commit message is properly formatted, lint-staged will run the Prettier script on the files staged for commit before the commit is made, which feels like the icing on the cake. At this point, I think we can feel pretty good about ourselves; a bit smug even.

Implementing pre-commit hooks is not more difficult than that, but the gains of doing so are tremendous. While I am always skeptical of adding yet another step to my workflow, using pre-commit hooks has saved me a world of bother, and I would never go back to making my commits in the dark, if I am allowed to end this tutorial on a somewhat pseudo-poetical note.


rtCamp Releases GitHub Actions for Automated Code Review, Deploying WordPress, and Slack Notifications

rtCamp, a 60+ person agency and WordPress.com VIP service partner, has released three new GitHub Actions that handle automated code review, WordPress deployment, and Slack notifications.

The PHPCS Code Review action takes advantage of GitHub’s pull request review feature. It performs an automated code review on pull requests using PHPCS. This Action is based on WordPress.com VIP’s GPL-licensed review scripts.

“Our action is a wrapper around the original vip-go-ci project,” rtCamp CEO Rahul Bansal said. “VIP’s project uses Teamcity which is expensive and very hard to get up and running. We built a wrapper around it to get it working with Github. Still huge props to them for sharing what we consider to be the USP of the VIP platform to the public at large.”

The Deploy WordPress GitHub action uses the Deployer.org tool to deploy code changes. Using it requires your Git repo to match rtCamp’s WordPress Skeleton, which is very similar to the VIP Go Skeleton. The action includes optional support for HashiCorp Vault, which is useful for managing multiple servers.

“Our action supports secrets fetching via HashiCorp Vault project,” Bansal said. “For small teams or indies using Vault might be overkill. But at scale, such as our hosting dept, where they are responsible for more than 100+ servers, Vault streamlines WordPress Deploys. It’s partly because of Vault, devs can simply change the hostname on the fly and everything still works.”

rtCamp has also released a GitHub action called Slack Notify that sends a message to a Slack channel. It can be customized to notify a channel about deployment status. The Site and SSH Host details are available if the action is run following the Deploy WordPress GitHub action. All three of the new Actions are designed to work seamlessly together.

rtCamp plans to add more Actions to its GitHub Actions Library in the future. Bansal said they are currently working on build actions to cover Sass, Webpack, and Grunt, as well as Testing actions for phpunit and QUnit. Further down the road they are planning to build an action that will automatically update their theme and plugin products in their EDD store when there is a GitHub release.