How to Automate Project Versioning and Releases with Continuous Deployment

Semantically versioned software helps you maintain and communicate changes to your users. Keeping it up manually, however, is not easy: even after merging the PR, tagging the commit, and pushing the release, you still have to write release notes. There are a lot of different steps, and many are repetitive and time-consuming.

Let’s look at how we can build a more efficient flow and completely automate our release process by plugging semantic versioning into a continuous deployment pipeline.

Semantic versioning

A semantic version is a number that consists of three numbers separated by a period. For example, 1.4.10 is a semantic version. Each of the numbers has a specific meaning.

Major change

The first number is a Major change, meaning it has a breaking change.

Minor change

The second number is a Minor change, meaning it adds functionality.

Patch change

The third number is a Patch change, meaning it includes a bug fix.

It is easier to look at semantic versioning as Breaking . Feature . Fix. It is a more precise way of describing a version number that doesn’t leave any room for interpretation. For example, starting from 1.4.10, a bug fix bumps the version to 1.4.11, a new feature bumps it to 1.5.0, and a breaking change bumps it to 2.0.0.

Commit format

To make sure that we are releasing the correct version — by correctly incrementing the semantic version number — we need to standardize our commit messages. By having a standardized format for commit messages, we can know when to increment which number and easily generate a release note. We are going to be using the Angular commit message convention, although we can change this later if you prefer something else.

It goes like this:

<header>
<optional body>
<optional footer>

Each commit message consists of a header, a body, and a footer.

The commit header

The header is mandatory. It has a special format that includes a type, an optional scope, and a subject.

The header’s type is a mandatory field that tells what impact the commit contents have on the next version. It has to be one of the following types:

  • feat: New feature
  • fix: Bug fix
  • docs: Change to the documentation
  • style: Changes that do not affect the meaning of the code (e.g. white-space, formatting, missing semi-colons, etc.)
  • refactor: Changes that neither fix a bug nor add a feature
  • perf: Change that improves performance
  • test: Add missing tests or corrections to existing ones
  • chore: Changes to the build process or auxiliary tools and libraries, such as generating documentation

The scope is a grouping property that specifies what subsystem the commit is related to, like an API, or the dashboard of an app, or user accounts, etc. If the commit modifies more than one subsystem, then we can use an asterisk (*) instead.

The header subject should hold a short description of what has been done. There are a few rules when writing one:

  • Use the imperative, present tense (e.g. “change” instead of “changed” or “changes”).
  • Lowercase the first letter on the first word.
  • Leave out a period (.) at the end.
  • Avoid writing subjects longer than 80 characters.

The commit body

Just like the header subject, use the imperative, present tense for the body. It should include the motivation for the change and contrast this with previous behavior.

The footer should contain any information about breaking changes and is also the place to reference issues that this commit closes.

Breaking change information should start with BREAKING CHANGE: followed by a space or two newlines; the rest of the commit message then describes the breaking change.
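
Here’s a hypothetical commit message following the convention (the scope, subject, and issue number are made up for illustration):

feat(api): add pagination to the tasks endpoint

Return tasks in pages of 50 to keep response payloads small.
Previously, the endpoint returned every task in a single response.

BREAKING CHANGE: the response shape changed from a plain array to an
object with items and nextPage properties.

Closes #123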

Enforcing a commit format

Working on a team is always a challenge when you have to standardize anything that everyone has to conform to. To make sure that everybody uses the same commit standard, we are going to use Commitizen.

Commitizen is a command-line tool that makes it easier to use a consistent commit format. Making a repo Commitizen-friendly means that anyone on the team can run git cz and get a detailed prompt for filling out a commit message.
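
If your repo isn’t Commitizen-friendly yet, the setup typically looks like this (assuming the conventional-changelog adapter, which matches the Angular convention we picked):

# Install the Commitizen CLI globally
npm install -g commitizen

# Initialize the repo with the Angular-style adapter
commitizen init cz-conventional-changelog --save-dev --save-exact

From then on, running git cz walks everyone through the type, scope, subject, body, and footer prompts.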

Generating a release

Now that we know our commits follow a consistent standard, we can work on generating a release and release notes. For this, we will use a package called semantic-release. It is a well-maintained package with great support for multiple continuous integration (CI) platforms.

semantic-release is the key to our journey, as it will perform all the necessary steps to a release, including:

  1. Figuring out the last version you published
  2. Determining the type of release based on commits added since the last release
  3. Generating release notes for commits added since the last release
  4. Updating a package.json file and creating a Git tag that corresponds to the new release version
  5. Pushing the new release

Any CI will do. For this article, we are using GitHub Actions, because I love using a platform’s existing features before reaching for a third-party solution.

There are multiple ways to install semantic-release, but we’ll use semantic-release-cli as it takes things step by step. Let’s run npx semantic-release-cli setup in the terminal, then fill out the interactive wizard.

The script will do a couple of things:

  • It runs npm adduser with the NPM information provided to generate a .npmrc.
  • It creates a GitHub personal token.
  • It updates package.json.

After the CLI finishes, it will add semantic-release to the package.json, but it won’t actually install it. Run npm install to install it as well as the other project dependencies.
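
Since the workflow we’ll write below runs npm run release, the relevant part of package.json should end up looking something like this (the exact version will differ):

{
  "scripts": {
    "release": "semantic-release"
  },
  "devDependencies": {
    "semantic-release": "^17.0.0"
  }
}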

The only thing left for us is to configure the CI via GitHub Actions. We need to manually add a workflow that will run semantic-release. Let’s create a release workflow in .github/workflows/release.yml.

name: Release
on:
  push:
    branches:
      - main
jobs:
  release:
    name: Release
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: Install dependencies
        run: npm ci
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # If you need an NPM release, you can add the NPM_TOKEN
        #   NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npm run release

Steffen Brewersdorff already does an excellent job covering CI with GitHub Actions, but let’s just briefly go over what’s happening here.

The workflow waits for a push to the main branch to happen, and only then runs the pipeline. Feel free to change this to work on one, two, or all branches.

on:
  push:
    branches:
      - main

Then, it pulls the repo with checkout and installs Node so that npm is available to install the project dependencies. A test step could go here as well, if that’s something you prefer.

- name: Checkout
  uses: actions/checkout@v2
- name: Setup Node.js
  uses: actions/setup-node@v1
  with:
    node-version: 12
- name: Install dependencies
  run: npm ci
# You can add a test step here
# - name: Run Tests
#   run: npm test

Finally, let semantic-release do all the magic:

- name: Release
  run: npm run release

Push the changes and look at the actions:

The GitHub Actions screen for a project showing a file navigating on the left and the actions it includes on the right against a block background.

Now each time a commit is made (or merged) to a specified branch, the action will run and make a release, complete with release notes.

Showing the GitHub releases screen for a project with an example that shows a version 1.0.0 and 2.0.0, both with release notes outlining features and breaking changes.

Release party!

We have successfully created a CI/CD semantic release workflow! Not that painful, right? The setup is relatively simple and there are no downsides to having a semantic release workflow. It only makes tracking changes a lot easier.

semantic-release has a lot of plugins that enable even more advanced automation. For example, there’s even a Slack release bot that can post to a project channel once the project has been successfully deployed. No need to head over to GitHub to find updates!



Comparing Various Ways to Hide Things in CSS

You would think that hiding content with CSS is a straightforward and solved problem, but there are multiple solutions, each one being unique.

Developers most commonly use display: none to hide content on the page. Unfortunately, this way of hiding content isn’t bulletproof because that content is now “inaccessible” to screen readers. It’s tempting to use it, but especially in cases where something is only meant to be visually hidden, don’t reach for it.

The fact is that there are many ways to “hide” things in CSS, each with their pros and cons which greatly depend on how it’s being used. We’re going to review each technique here and cap things off with a summary that helps us decide which to use and when.

How to spot differences between the techniques

To see the differences between the various ways of hiding content, we must introduce some metrics that we’ll use to compare the methods. I decided to break that down by asking questions focused on four particular areas that affect layout, performance and accessibility:

  1. Accessibility: Is the hidden content read by a screen reader?
  2. Document flow: Will the hidden element affect the document layout?
  3. Rendering: Will the hidden element’s box model be rendered?
  4. Event triggers: Does the element detect clicks or focus?

Now that we have our criteria out of the way, let’s compare the methods. Again, we’ll put everything together at the end in a way that we can use it as a reference for making decisions when hiding things in CSS.

Method 1: The display property

We kicked off this post with a caution about using display to hide content. And as we established, using it to hide an element means that the element is not generated at all. It’s in the DOM, but never actually rendered.

The element is still in the markup, and you’ll be able to see it if you inspect the page, but its box model is never generated and it never appears on the page, which also applies to all its children.

And what’s more, if the element has any event listeners — say a click or hover — they won’t register at all. And as we’ve discussed already, all the content will be ignored by screen readers. Here, we have two visible buttons and one hidden with display: none. All three buttons have click events but only the two visible buttons will render and register the clicks.
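
That demo isn’t embedded here, but a minimal sketch of it could look like this (the ids are made up for illustration):

['first', 'second', 'hidden'].forEach((id) => {
  const button = document.createElement('button');
  button.id = id;
  button.textContent = `Button ${id}`;
  // Every button gets a click handler...
  button.addEventListener('click', () => console.log(`${id} clicked`));
  document.body.append(button);
});

// ...but the hidden button is never rendered, so there is
// nothing on screen to click or focus
document.getElementById('hidden').style.display = 'none';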

display is the only property of the bunch that affects image request firing. If an image tag (or a parent element) has its display property set to none, either through inline CSS or a selector, the image will still be downloaded. On the other hand, if the image is applied with a background property, it won’t be downloaded.

This is the case because the request for an <img> fires while the HTML document is being parsed, before the CSS is applied. A background image, on the other hand, is only requested once the CSS has been applied, at which point the browser knows the element won’t be rendered. This behavior is consistent across all the latest browsers. The only exception is IE 11, which will download the image in both cases.
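
Here’s a quick sketch of both cases (photo.jpg is a stand-in path):

<!-- Hidden, but the image request still fires while the HTML is parsed -->
<img src="photo.jpg" style="display: none" alt="">

<!-- Hidden, and the background image is never requested -->
<div style="display: none; background-image: url('photo.jpg');"></div>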

Metric | Result
Is the hidden content read by a screen reader? | No
Will the hidden element affect the document layout? | No
Will the hidden element’s box model be rendered? | No
Does the element detect clicks or focus? | No

Method 2: The visibility property

If an element’s visibility property is set to hidden, then the element is “visually hidden.” Being “visually hidden” sounds a lot like what display: none does, but it’s incredibly different in that the element is generated and rendered, but invisible. This means that the element’s box model is present, giving it dimensions that continue to occupy space on the screen even though it doesn’t appear to be there.

Imagine you’re wearing an invisible cloak that makes you invisible to others, but you are still able to bump into things. You’re physically there, even if you’re invisible to the human eye.

But that’s where the differences between “visually hidden” and “not displayed” end. In fact, elements hidden with visibility and display behave the same in terms of accessibility and event triggers. Invisible elements are inaccessible to screen readers and won’t register events, as we see in the following demo that’s exactly the same as the last example, but merely swaps display: none with visibility: hidden.

Metric | Result
Is the hidden content read by a screen reader? | No
Will the hidden element affect the document layout? | Yes
Will the hidden element’s box model be rendered? | Yes
Does the element detect clicks or focus? | No

Method 3: The opacity property

The opacity property only affects the visual aspect of the element. If we set an element’s opacity to zero, the element will be fully transparent. Again, it’s a lot like visibility: hidden where we’re draping an invisible cloak on the element where it’s invisible, but still physically present.

In other words, what we have is a hollow, transparent element that acts like any other element, only it’s invisible. Sounds a lot like the visibility method, right? The difference is that a fully transparent element is still accessible to a screen reader and can register events, like clicks, as we see in the following example.

Metric | Result
Is the hidden content read by a screen reader? | Yes
Will the hidden element affect the document layout? | Yes
Will the hidden element’s box model be rendered? | Yes
Does the element detect clicks or focus? | Yes

Method 4: The position property

Pushing an element off-screen with absolute positioning is another way developers often hide things. Using top and left, we can push the element so far off the screen that there’s no way it will ever be seen. It’s like hiding the cookie jar outside of the house so the kids (or maybe you!) can’t find them.

“Absolute” is the key word here. If we set position to absolute, an element is taken out of the document flow, which is a way of saying it no longer adheres to its natural position in the DOM. In other words, the page doesn’t reserve any space for it, which knocks the element out of order visually, positioning it relative to its nearest positioned ancestor if there is one, or the document root otherwise.

We take advantage of absolute positioning by taking the “hidden” element out of the document flow and offsetting it toward the top-left with values of -9999px.

.hidden {
  position: absolute;
  top: -9999px;
  left: -9999px;
}
Metric | Effect
Is the hidden content read by a screen reader? | Yes
Will the hidden element affect the document layout? | No
Will the hidden element’s box model be rendered? | Yes
Does the element detect clicks or focus? | Yes

If the hidden element contains focusable content, the page will scroll to the element when it is in focus, creating a sudden jump.

Method 5: The “visually hidden” class

So far, the position method is the closest we’ve seen to an accessibility-friendly way to hide things in CSS. But the problem with focusable content causing sudden page jumps isn’t great. Another approach to accessible hiding combines absolute positioning, the clip property and hidden overflow. Scott O’Hara blogged it back in 2017.

.visually-hidden:not(:focus):not(:active) {
  clip: rect(0 0 0 0); 
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap; 
  width: 1px;
}

Let’s break that down.

We need to remove the element from the document flow. The best way to do this is by using position: absolute. This will remove the element, but we won’t push it off the screen.

.visually-hidden {
  position: absolute;
}

We can hide the element by setting the width and height properties to zero. Unfortunately, that won’t work because some screen readers ignore elements with zero width and height. What we can do instead is set them to the next smallest value, 1px. That means the content will easily overflow the space, so we also need overflow: hidden to make sure it doesn’t visually spill over.

.visually-hidden {
  height: 1px;
  overflow: hidden;
  position: absolute;
  width: 1px;
}

To hide that one-pixel square, we can use the CSS clipping property. It is perfect for this situation, as it doesn’t affect screen readers. The content is there but, again, is visually hidden. The thing to note is that clip was deprecated in favor of clip-path but is still needed if we need to support older versions of Internet Explorer.

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  width: 1px;
}

Another piece of the “visually hidden” class puzzle is to address smushed off-screen accessible text, an oddity that removes white-spacing between words, causing them to be read aloud like one big string of words. For example, “Welcome back home” will be read out as “Welcomebackhome.”

A simple solution to this problem is to set white-space: nowrap:

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

And, finally! The last thing to consider is allowing certain elements with native focus and active states to display when they are in focus, while continuing to prevent other elements, like paragraphs, from displaying. We can use the :not pseudo-class for that.

.visually-hidden:not(:focus):not(:active) {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}
Metric | Result
Is the hidden content read by a screen reader? | Yes
Will the hidden element affect the document layout? | No
Will the hidden element’s box model be rendered? | Yes
Does the element detect clicks or focus? | Yes

Honorable mentions

There are even more methods than the five we’ve covered. For example, the text-indent property can push text off the screen like the position method:

.hidden {
  text-indent: -9999em;
}

Unfortunately, this approach doesn’t play well with RTL writing modes. That makes it less adaptable than the other solutions we’ve covered.

Another method is using transform to scale or move the element out of the way. Like opacity, it works visually only.

.hidden {
  transform: scale(0);
}

Let’s put everything together!

We’ve arrived at a solution that visually hides content but keeps it accessible. So, should you stop using display: none? No, it is still the best way to hide an element completely (both visually and from assistive technology).

That said, it is worth mentioning that if you want to achieve the opposite result (hiding something from screen readers only), the aria-hidden="true" attribute will hide the content from screen readers, but not visually.

With that, here is a complete table that compares all of the approaches. Use it to guide your decisions on how to hide content next time you find yourself in that situation.

Metric | Display | Visibility | Opacity | Position | Accessible Way
Is the hidden content read by a screen reader? | No | No | Yes | Yes | Yes
Will the hidden element affect the document layout? | No | Yes | Yes | No | No
Will the hidden element’s box model be rendered? | No | Yes | Yes | Yes | Yes
Does the element detect clicks or focus? | No | No | Yes | Yes | Yes


Working with JavaScript Media Queries

What’s the first thing that comes to mind when you think of media queries? Maybe something in a CSS file that looks like this:

body {
  background-color: plum;
}


@media (min-width: 768px) {
  body {
    background-color: tomato;
  }
}

CSS media queries are a core ingredient in any responsive design. They’re a great way to apply different styles to different contexts, whether it’s based on viewport size, motion preference, preferred color scheme, specific interactions and, heck, even certain devices like printers, TVs and projectors, among many others.

But did you know that we have media queries for JavaScript too? It’s true! We may not see them as often in JavaScript, but there definitely are use cases for them that I have found helpful over the years for creating responsive plugins, like sliders. For example, at a certain resolution, you may need to re-draw and recalculate the slider items.

Working with media queries in JavaScript is very different than working with them in CSS, even though the concepts are similar: match some conditions and apply some stuff.

Using matchMedia() 

To determine if the document matches the media query string in JavaScript, we use the matchMedia() method. Even though it’s officially part of the CSS Object Model View Module specification, which is in Working Draft status, the browser support for it is great, going as far back as Internet Explorer 10, with 98.6% global coverage.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 9, Firefox 6, IE 10, Edge 12, Safari 5.1

Mobile / Tablet: Android Chrome 84, Android Firefox 79, Android 3, iOS Safari 5.0-5.1

The usage is nearly identical to CSS media queries. We pass the media query string to matchMedia() and then check the .matches property.

// Define the query
const mediaQuery = window.matchMedia('(min-width: 768px)')

Calling matchMedia() returns a MediaQueryList object. It is an object that stores information about the media query, and the key property we need is .matches. That is a read-only Boolean property that returns true if the document matches the media query.

// Create a media condition that targets viewports at least 768px wide
const mediaQuery = window.matchMedia('(min-width: 768px)')


// Check if the media query is true
if (mediaQuery.matches) {
  // Then trigger an alert
  alert('Media Query Matched!')
}

That’s the basic usage for matching media conditions in JavaScript. We create a match condition (matchMedia()) that returns an object (MediaQueryList), check against it (.matches), then do stuff if the condition evaluates to true. Not totally unlike CSS!

But there’s more to it. For example, if we change the window size to below our target window size, nothing updates the way it would with CSS right out of the box. That’s because .matches is perfect for one-time, instantaneous checks, but is unable to continuously check for changes. That means we need to…

Listen for changes

 MediaQueryList has an addListener() (and the subsequent removeListener()) method that accepts a callback function (represented by the .onchange event) that’s invoked when the media query status changes. In other words, we can fire additional functions when the conditions change, allowing us to “respond” to the updated conditions.

// Create a condition that targets viewports at least 768px wide
const mediaQuery = window.matchMedia('(min-width: 768px)')


function handleTabletChange(e) {
  // Check if the media query is true
  if (e.matches) {
    // Then log the following message to the console
    console.log('Media Query Matched!')
  }
}


// Register event listener
mediaQuery.addListener(handleTabletChange)

// Initial check
handleTabletChange(mediaQuery)

The one-two punch of matchMedia() and MediaQueryList gives us the same power to not only match media conditions that CSS provides, but to actively respond to updated conditions as well.

When you register an event listener with addListener(), it won’t fire initially. That’s why we call the event handler function manually once and pass the media query as the argument.

The old way of doing things

For the sake of context — and a little nostalgia — I would like to cover the old, but still popular, way of doing “media queries” in JavaScript (and, yes, those quotes are important here). The most common approach is binding a resize event listener that checks window.innerWidth or window.innerHeight.

You’ll still see something like this in the wild:

function checkMediaQuery() {
  // If the inner width of the window is greater than 768px
  if (window.innerWidth > 768) {
    // Then log this message to the console
    console.log('Media Query Matched!')
  }
}


// Add a listener for when the window resizes
window.addEventListener('resize', checkMediaQuery);

Since the resize event fires on every single browser resize, this is an expensive operation! Looking at the performance impact on an empty page, we can see the difference.

That’s a 157% increase in scripting!

An even simpler way to see the difference is with the help of a console log.

That’s 208 resize events versus six matched media events.

Even if we look past the performance issues, resize is restrictive in the sense that it doesn’t let us write advanced media queries for things like print and orientation. So, while it does mimic “media query” behavior by allowing us to match viewport widths, it’s incapable of matching much of anything else — and we know that true media queries are capable of so much more.

Conclusion

That’s a look at media queries in JavaScript! We explored how matchMedia() allows us to define media conditions and examined the MediaQueryList object that lets us do one-time (.matches) and persistent (addListener()) checks against those conditions so that we can respond to changes (.onchange) by invoking functions.

We also saw the “old” way of doing things by listening for resize events on the window. While it’s still widely used and a totally legit way to respond to changes in window.innerWidth, it’s unable to perform checks on advanced media conditions.

To finish the article, here is a useful example that is not achievable the old way. Using a media query, we can check whether the user is in landscape mode. This approach is common when developing HTML5 games and is best viewed on a mobile device.
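
The demo itself was embedded in the original post; a stripped-down version of the orientation check looks something like this:

// Create a media condition that targets landscape orientation
const mediaQuery = window.matchMedia('(orientation: landscape)')

function handleOrientationChange(e) {
  if (e.matches) {
    console.log('Landscape mode')
  } else {
    console.log('Portrait mode, please rotate your device')
  }
}

// Respond to changes and run the initial check
mediaQuery.addListener(handleOrientationChange)
handleOrientationChange(mediaQuery)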



Don’t Wait! Mock the API

Today we have a loose coupling between the front end and the back end of web applications. They are usually developed by separate teams, and keeping those teams and the technology in sync is not easy. To solve part of this problem, we can “fake” the API server that the back end tech would normally create and develop as if the API or endpoints already exist.

The most common term for creating a simulated or “fake” component is mocking. Mocking allows you to simulate the API without (ideally) changing the front end. There are many ways to achieve mocking, and this is what makes it so scary for most people, at least in my opinion.

Let’s cover what good API mocking should look like and how to implement a mocked API into a new or existing application.

Note, the implementation that I am about to show is framework agnostic — so it can be used with any framework or vanilla JavaScript application.

Mirage: The mocking framework

The mocking approach we are going to use is called Mirage, which is somewhat new. I have tested many mocking frameworks and just recently discovered this one, and it’s been a game changer for me.

Mirage is marketed as a front-end-friendly framework that comes with a modern interface. It works in your browser, client-side, by intercepting XMLHttpRequest and Fetch requests.

We will go through creating a simple application with mocked API and cover some common problems along the way.

Mirage setup

Let’s create one of those standard to-do applications to demonstrate mocking. I will be using Vue as my framework of choice but of course, you can use something else since we’re working with a framework-agnostic approach.

So, go ahead and install Mirage in your project:

# Using npm
npm i miragejs -D


# Using Yarn
yarn add miragejs -D

To start using Mirage, we need to set up a “server” (in quotes, because it’s a fake server). Before we jump into the setup, I will cover the folder structure I found works best.

/
├── public
├── src
│   ├── api
│   │   └── mock
│   │       ├── fixtures
│   │       │   └── get-tasks.js
│   │       └── index.js
│   └── main.js
├── package.json
└── package-lock.json

In a mock directory, open up a new index.js file and define your mock server:

// api/mock/index.js
import { Server } from 'miragejs';


export default function ({ environment = 'development' } = {}) {
  return new Server({
    environment,


    routes() {
      // We will add our routes here
    },
  });
}

The environment argument we’re adding to the function signature is just a convention. We can pass in a different environment as needed.

Now, open your app bootstrap file. In our case, this is the src/main.js file, since we are working with Vue. Import your createServer function, and call it in the development environment.

// main.js
import createServer from './mock'


if (process.env.NODE_ENV === 'development') {
    createServer();
}

We’re using the process.env.NODE_ENV environment variable here, which is a common global variable. The conditional allows Mirage to be tree-shaken in production, therefore, it won’t affect your production bundle.

That is all we need to set up Mirage! It’s this sort of ease that makes the DX of Mirage so nice.

Our createServer function defaults to the development environment for the sake of keeping this article simple. In most cases, it would default to test since, in most apps, you’ll call createServer once in development mode but many times in test files.
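
For what it’s worth, here’s a sketch of how the same factory might be called in a test file (the test runner hooks here are an assumption, not part of the original setup):

import createServer from './api/mock';

let server;

beforeEach(() => {
  // Spin up a fresh mock server for every test
  server = createServer({ environment: 'test' });
});

afterEach(() => {
  // Tear it down so tests stay isolated
  server.shutdown();
});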

How it works

Before we make our first request, let’s quickly cover how Mirage works.

Mirage is a client-side mocking framework, meaning all the mocking happens in the browser, which Mirage does using the Pretender library. Pretender temporarily replaces the native XMLHttpRequest and Fetch configurations, intercepts all requests, and directs them to a little pretend service that Mirage hooks onto.

If you crack open DevTools and head into the Network tab, you won’t see any Mirage requests. That’s because the request is intercepted and handled by Mirage (via Pretender under the hood). Mirage logs all requests, which we’ll get to in just a bit.

Let’s make requests!

Let’s create a request to an /api/tasks endpoint that will return a list of tasks that we are going to show in our to-do app. Note that I’m using axios to fetch the data. That’s just my personal preference. Again, Mirage works with native XMLHttpRequest, Fetch, and any other library.

// components/tasks.vue
export default {
  async created() {
    try {
      const { data } = await axios.get('/api/tasks'); // Fetch the data
      this.tasks = data.tasks;
    } catch(e) {
      console.error(e);
    }
  }
};

Open your JavaScript console — there should be an error from Mirage in there:

Mirage: Your app tried to GET '/api/tasks', but there was no route defined to handle this request.

This means Mirage is running, but the route hasn’t been mocked yet. Let’s solve this by adding that route.

Mocking requests

Inside our mock/index.js file, there is a routes() hook. Route handlers allow us to define which URLs should be handled by the Mirage server.

To define a route handler, we need to add it inside the routes() function.

// mock/index.js
export default function ({ environment = 'development' } = {}) {
  return new Server({
    environment,

    routes() {
      this.get('/api/tasks', () => ({
        tasks: [
          { id: 1, text: "Feed the cat" },
          { id: 2, text: "Wash the dishes" },
          // ...
        ],
      }))
    },
  });
}

The routes() hook is the way we define our route handlers. Using a this.get() method lets us mock GET requests. The first argument of all request functions is the URL we are handling, and the second argument is a function that responds with some data.

As a note, Mirage accepts any HTTP request type, and each type has the same signature:

this.get('/tasks', (schema, request) => { ... });
this.post('/tasks', (schema, request) => { ... });
this.patch('/tasks/:id', (schema, request) => { ... });
this.put('/tasks/:id', (schema, request) => { ... });
this.del('/tasks/:id', (schema, request) => { ... });
this.options('/tasks', (schema, request) => { ... });

We will discuss the schema and request parameters of the callback function in a moment.

With this, we have successfully mocked our route and we should see inside our console a successful response from Mirage.

Screenshot of a Mirage response in the console showing data for two task objects with IDs 1 and 2.

Working with dynamic data

Trying to add a new to-do in our app won’t be possible because our data in the GET response has hardcoded values. Mirage’s solution to this is that they provide a lightweight data layer that acts as a database. Let’s fix what we have so far.

Like the routes() hook, Mirage defines a seeds() hook. It allows us to create initial data for the server. I’m going to move the GET data to the seeds() hook where I will push it to the Mirage database.

seeds(server) {
  server.db.loadData({
    tasks: [
      { id: 1, text: "Feed the cat" },
      { id: 2, text: "Wash the dishes" },
    ],
  })
},

I moved our static data from the GET method to seeds() hook, where that data is loaded into a faux database. Now, we need to refactor our GET method to return data from that database. This is actually pretty straightforward — the first argument of the callback function of any route() method is the schema.

this.get('/api/tasks', (schema) => {
  return schema.db.tasks;
})

Now we can add new to-do items to our app by making a POST request:

async addTask() {
  const { data } = await axios.post('/api/tasks', { data: this.newTask });
  this.tasks.push(data);
  this.newTask = {};
},

We mock this route in Mirage by creating a POST /api/tasks route handler:

this.post('/api/tasks', (schema, request) => {})

Using the second parameter of the callback function, we can see the sent request.

Screenshot of the mocking server request. The requestBody property is highlighted in yellow and contains text data that says Hello CSS-Tricks.

Inside the requestBody property is the data that we sent. That means it’s now available for us to create a new task.

this.post('/api/tasks', (schema, request) => {
  // Take the sent data from axios.
  const task = JSON.parse(request.requestBody).data


  return schema.db.tasks.insert(task)
})

The id of the task will be set by Mirage’s database by default. Thus, there is no need to keep track of ids and send them with your request — just like a real server.

Dynamic routes? Sure!

The last thing to cover is dynamic routes. They allow us to use a dynamic segment in our URL, which is useful for deleting or updating a single to-do item in our app.

Our delete request should go to /api/tasks/1, /api/tasks/2, and so on. Mirage provides a way for us to define a dynamic segment in the URL, like this:

this.delete('/api/tasks/:id', (schema, request) => {
  // Return the ID from URL.
  const id = request.params.id;


  return schema.db.tasks.remove(id);
})

Using a colon (:) in the URL is how we define a dynamic segment in our URL. After the colon, we specify the name of the segment which, in our case, is called id and maps to the ID of a specific to-do item. We can access the value of the segment via the request.params object, where the property name corresponds to the segment name — request.params.id. Then we use the schema to remove an item with that same ID from the Mirage database.
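
Updating a to-do follows the same pattern. This handler isn’t part of the original walkthrough, but a hypothetical PATCH route would look along these lines:

this.patch('/api/tasks/:id', (schema, request) => {
  // Grab the ID from the URL and the new attributes from the request body
  const id = request.params.id;
  const attrs = JSON.parse(request.requestBody).data;

  return schema.db.tasks.update(id, attrs);
})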

As you may have noticed, all of my routes so far are prefixed with /api. Writing this over and over is cumbersome, and you may want to make it easier. Mirage offers the namespace property to help: inside the routes() hook, we can define the namespace so we don’t have to write that prefix out each time.

routes() {
 // Prefix for all routes.
 this.namespace = '/api';


 this.get('/tasks', () => { ... })
 this.delete('/tasks/:id', () => { ... })
 this.post('/tasks', () => { ... })
}

OK, let’s integrate this into an existing app

So far, everything we’ve looked at integrates Mirage into a new app. But what about adding Mirage to an existing application? Mirage has you covered so you don’t have to mock your entire API.

The first thing to note is that adding Mirage to an existing application will throw an error if the site makes a request that isn’t handled by Mirage. To avoid this, we can tell Mirage to pass through all unhandled requests.

routes() {
  this.get('/tasks', () => { ... })
  
  // Pass through all unhandled requests.
  this.passthrough()
}

Now we can develop on top of an existing API with Mirage handling only the missing parts of our API.

Mirage can even change the base URL at which it captures requests. This is useful because, usually, a server won’t live on localhost:3000 but rather on a custom domain.

routes() {
 // Set the base route.
 this.urlPrefix = 'https://devenv.ourapp.example';


 this.get('/tasks', () => { ... })
}

Now, all of our requests will point to the real API server, but Mirage will intercept them like it did when we set it up with a new app. This means that the transition from Mirage to the real API is pretty darn seamless — delete the route from the mock server and things are good to go.

Wrapping up

Over the course of five years, I have used many mocking frameworks, yet I never truly liked any of the solutions out there. That was until recently, when my team was faced with a need for a mocking solution and I found out about Mirage.

Other solutions, like the commonly used JSON-Server, are external processes that need to run alongside the front end. Furthermore, they are often nothing more than an Express server with utility functions on top. The result is that front-end developers like us need to know about middleware, Node.js, and how servers work… things many of us probably don’t want to handle. Other attempts, like Mockoon, have a complex interface while lacking much-needed features. There’s another group of frameworks that are only used for testing, like the popular SinonJS. Unfortunately, these frameworks can’t be used to mock regular application behavior during development.

My team managed to create a functioning mock server that enables us to write front-end code as if we were working with a real back end, all inside the front-end codebase, with no external processes or servers needed to run it. This is why I love Mirage. It is really simple to set up, yet powerful enough to handle anything that’s thrown at it. You can use it for anything from basic applications that return a static array to full-blown back-end imitations, regardless of whether the app is new or existing.

There’s a lot more to Mirage beyond the implementations we covered here. A working example of what we covered can be found on GitHub. (Fun fact: Mirage also works with GraphQL!) Mirage has well-written documentation that includes a bunch of step-by-step tutorials, so be sure to check it out.



Getting JavaScript to Talk to CSS and Sass

JavaScript and CSS have lived beside one another for upwards of 20 years. And yet it’s been remarkably tough to share data between them. There have been large attempts, sure. But, I have something simple and intuitive in mind — something not involving a structural change, but rather putting CSS custom properties and even Sass variables to use.

CSS custom properties and JavaScript

Custom properties shouldn’t be all that surprising here. One thing they’ve always been able to do since browsers started supporting them is work alongside JavaScript to set and manipulate the values.

Specifically, though, we can use JavaScript with custom properties in a few ways. We can set the value of a custom property using setProperty:

document.documentElement.style.setProperty("--padding", 124 + "px"); // 124px

We can also retrieve CSS variables using getComputedStyle in JavaScript. The logic behind this is fairly simple: custom properties are part of the style, therefore, they are part of computed style.

getComputedStyle(document.documentElement).getPropertyValue('--padding') // 124px

Same sort of deal with getPropertyValue. That lets us get the custom property value from an inline style in the HTML markup.

document.documentElement.style.getPropertyValue("--padding"); // 124px

Note that custom properties are scoped. This means we need to get computed styles from a particular element. Since we defined our variable on :root, we read it off the html element.
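
For a variable defined further down the cascade, read it from the element it applies to instead. A quick sketch, using a hypothetical .card element:

// Works for a variable set on .card itself or inherited from an ancestor
const card = document.querySelector('.card');
getComputedStyle(card).getPropertyValue('--padding'); // e.g. "124px"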

Sass variables and JavaScript

Sass is a pre-processing language, meaning it’s turned into CSS before it ever is a part of a website. For that reason, accessing them from JavaScript in the same way as CSS custom properties — which are accessible in the DOM as computed styles — is not possible. 

We need to modify our build process to change this. I doubt this is a huge lift in most cases, since loaders are often already part of a build process. But if that’s not the case in your project, we need three webpack loaders that are capable of importing and translating Sass files.

Here’s how that looks in a webpack configuration:

module.exports = {
 // ...
 module: {
  rules: [
   {
    test: /\.scss$/,
    use: ["style-loader", "css-loader", "sass-loader"]
   },
   // ...
  ]
 }
};

To make Sass (or, specifically, SCSS in this case) variables available to JavaScript, we need to “export” them.

// variables.scss
$primary-color: #fe4e5e;
$background-color: #fefefe;
$padding: 124px;

:export {
  primaryColor: $primary-color;
  backgroundColor: $background-color;
  padding: $padding;
}

The :export block is the magic sauce webpack uses to import the variables. What is nice about this approach is that we can rename the variables using camelCase syntax and choose what we expose.

Then we import the Sass file (variables.scss) into JavaScript, giving us access to the variables defined in the file.

import variables from './variables.scss';

/*
 {
  primaryColor: "#fe4e5e"
  backgroundColor: "#fefefe"
  padding: "124px"
 }
*/

document.getElementById("app").style.padding = variables.padding;

There are some restrictions on the :export syntax that are worth calling out:

  • It must be at the top level but can be anywhere in the file.
  • If there is more than one in a file, the keys and values are combined and exported together.
  • If a particular exportedKey is duplicated, the last one (in the source order) takes precedence.
  • An exportedValue may contain any character that’s valid in CSS declaration values (including spaces).
  • An exportedValue does not need to be quoted because it is already treated as a literal string.

There are lots of ways having access to Sass variables in JavaScript can come in handy. I tend to reach for this approach for sharing breakpoints. Here is my breakpoints.scss file, which I later import into JavaScript so I can use the matchMedia() method with consistent breakpoints.

// Sass variables that define breakpoint values
$breakpoints: (
  mobile: 375px,
  tablet: 768px,
  // etc.
);

// Sass variables for writing out media queries
$media: (
  mobile: '(max-width: #{map-get($breakpoints, mobile)})',
  tablet: '(max-width: #{map-get($breakpoints, tablet)})',
  // etc.
);

// The export module that makes Sass variables accessible in JavaScript
:export {
  breakpointMobile: unquote(map-get($media, mobile));
  breakpointTablet: unquote(map-get($media, tablet));
  // etc.
}
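
Importing that file then hands us ready-made media query strings to feed straight into matchMedia(). A sketch, assuming the webpack setup from earlier:

import breakpoints from './breakpoints.scss';

// breakpointMobile holds the full query string, e.g. "(max-width: 375px)"
const mobileQuery = window.matchMedia(breakpoints.breakpointMobile);

mobileQuery.addListener((e) => {
  console.log(e.matches ? 'Mobile layout' : 'Larger layout');
});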

Animations are another use case. The duration of an animation is usually stored in CSS, but more complex animations need to be done with JavaScript’s help.

// animation.scss
$global-animation-duration: 300ms;
$global-animation-easing: ease-in-out;

:export {
  animationDuration: strip-unit($global-animation-duration);
  animationEasing: $global-animation-easing;
}

Notice that I use a custom strip-unit function when exporting the variable. This allows me to easily parse things on the JavaScript side.
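
The article doesn’t include strip-unit itself; a common implementation of it looks like this:

// Removes the unit from a Sass number, e.g. 300ms -> 300
@function strip-unit($number) {
  @if type-of($number) == 'number' and not unitless($number) {
    @return $number / ($number * 0 + 1);
  }
  @return $number;
}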

// main.js
document.getElementById('image').animate([
  { transform: 'scale(1)', opacity: 1, offset: 0 },
  { transform: 'scale(.6)', opacity: .6, offset: 1 }
], {
  duration: Number(variables.animationDuration),
  easing: variables.animationEasing,
});

It makes me happy that I can exchange data between CSS, Sass and JavaScript so easily. Sharing variables like this makes code simple and DRY.

There are multiple ways to achieve the same sort of thing, of course. Les James shared an interesting approach in 2017 that allows Sass and JavaScript to interact via JSON. I may be biased, but I find the approach we covered here to be the simplest and most intuitive. It doesn’t require crazy changes to the way you already use and write CSS and JavaScript.

Are there other approaches that you might be using somewhere? Share them here in the comments — I’d love to see how you’re solving it.


What I Like About Craft CMS

Looking at the CMS scene today, there are upwards of 150 options to choose from — and that’s not including whatever home-grown custom alternatives people might be running. The term “Content Management System” is broad and most site builders fit into the CMS model. Craft CMS, a relatively new choice in this field (launched in 2013), stands out to me.

My team and I have been using Craft CMS for the past two years to develop and maintain a couple of websites. I would like to share my experience using this system with you.

Note that this review is focused on our experience with using Craft and as such, no attempt has been made to compare it to other available options. For us, using Craft has been a very positive experience and we leave it up to you, the reader, to compare it to other experiences that you may have had.

First, a quick introduction to Craft

Craft is the creation of Pixel & Tonic, a small software development company based out of Oregon that was founded by Brandon Kelly, known for premium ExpressionEngine add-ons. While developing some of the most used add-ons, Pixel & Tonic set out to build their own CMS, known at the time as "Blocks." That was all the way back in 2010; during development, the name was changed to Craft CMS.

Looking at the market, we can see that Craft is well adopted. At the time of writing this article, there are around 70,000 websites using Craft.

Showing market growth over the five year period.

Craft set out to make life enjoyable for developers and content managers. In 2015, Craft proved this by winning the Best CMS for Developers award from CMSCritics. Over the years, Craft has won multiple awards that show it is on the right path.

When I am asked where Craft fits in the overall CMS landscape, I say it's geared toward small-to-medium-sized businesses where there is a staff of content managers that don't require a completely custom solution.

At the heart of things, Craft is a CMS in the same vein as WordPress and other traditional offerings — just with a different flavor and approach to content management that makes it stand out from others, which is what we're covering next.

Craft's requirements

Server requirements for a Craft setup are simple and standard. Craft requires the following:

  • PHP 7.0+
  • MySQL 5.5+ with InnoDB, MariaDB 5.5+, or PostgreSQL 9.5+
  • At least 256MB of memory allocated to PHP
  • At least 200MB of free disk space

Out of the box, you can get Craft up and running fast. You don’t need an extensive PHP or Database background to get started. Hell, you can get away with little-to-no PHP knowledge at all. That makes both the barrier to entry and the time from installation to development extremely small.

It’s both simple and complex at the same time

Craft is unique in that it is both a simple and a complex CMS.

You can use Craft to design and develop complex sites that are built with and rely heavily on PHP, databases, and query optimizations.

However, you can also use Craft to design and develop simple sites where you do none of those things.

This was one of the main selling points for me. It’s simple to get up and going with very little, but if you need to do something more complex, you can. And it never feels like you are “hacking” it to do anything it wasn’t meant to.

Craft abstracts all the field creation and setup into the admin panel. You only need to point it to the right Twig template and then use the fields you connected. Furthermore, it provides localization and multi-site management out of the box with no need for plugins. This is essentially what makes it different from other content management systems. You can create the structure, fields and all the forms without ever touching any code.

Some CMSs like to make a lot of decisions for you and sometimes that leads to unnecessary bloat. Front- and back-end performance is super important to me and, as such, I appreciate that Craft leaves a lot of those decisions up to me, should I need them. It provides a full customization experience that supports beginners right out of the box, but doesn’t constrain folks at the professional level.

Craft’s templating engine

Some developers are not keen on this, but Craft uses Twig as its template engine. The word “use” should be emphasized as a requirement, as there is no option of writing raw PHP anywhere inside the template. Here are my thoughts on that:

  1. It is standardized in a way that, when I look at my team's Pull Requests, I don’t expect to see 100 lines of custom PHP that make no sense. I only see the code related to templating.
  2. Twig is already powerful enough that it will cover nearly all use cases while being extensible for anything else.

Let’s say you’re not digging Twig or you would rather use one of the latest technologies (hello static site generators!). Craft’s templating system isn’t the only way to get content out of Craft. As of Craft 3.3, it provides a “headless” mode and built-in GraphQL with Craft’s Pro features. That means you can use tools like Gatsby or Gridsome to build static sites with the comfort of Craft CMS. That brings Craft in line with the likes of WordPress, which provides its own REST API for fetching content to use somewhere else.

There's a fully functional GraphQL editor built right inside the Craft UI.

Speaking of REST, there is even an option for that in Craft if, say, you are not a fan of GraphQL. The Element API is a REST read-only API that is available via the first-party Element API plugin. Again, Craft comes with exactly what you need at a minimum and can be extended to do more.

Craft’s extensibility

This brings me to my next point: Craft CMS is super extensible. It is built on the Yii Framework, a well-known PHP framework that is robust and fast. This is important, as all the extensibility is either through modules or plugins written in Yii and Craft API. Modules are a concept passed down from Yii modules and they provide a way to extend core functionality without changing the source. On the other hand, plugins are a Craft concept and they do the same thing as modules, but can be installed, disabled and removed. If you would like to read more about this, you can find it in Craft’s docs.

Both modules and plugins have full access to Craft and Yii’s API. This is a huge bonus, as you can benefit from Yii’s community and documentation. Once you get used to Yii, writing plugins is easy and enjoyable. My team has built multiple custom plugins and modules over the last two years, like a Pardot form integration, a Google reCAPTCHA integration, custom search behavior, and others.  Essentially, the sky is the limit. 

Writing plugins and modules is covered in the docs but I think this is where Craft's system has room to grow. I would recommend opening a well-known plugin on GitHub to get a sense of how it’s done because I’ve found that to be much more helpful than the docs.

Initially, you may find this aspect of the system difficult, but once you understand the structure, it does get easier, because the code structure essentially consists of models, views, and controllers. It is like building a small MVC app inside your CMS. Here is an example of a plugin structure I’ve worked with:

.
├── assetbundles
├── controllers
├── migrations
├── models
├── records
├── services
├── templates
│   ├── _layouts
│   └── _macros
├── translations
│   └── en
├── variables
├── icon-mask.svg
├── icon.svg
└── Plugin.php

If you don’t feel like writing PHP and tinkering with Yii/Craft, you can always download plugins from the official Craft plugin store. There is a variety of plugins, from image optimization to WYSIWYG editing. One of many things that Craft got right is that you can try paid plugins in development mode as much as you like rather than having to first make a purchase.

The Craft plugins screen.

Over the course of two years, we have tried multiple plugins. There are a few that I not only recommend, but have found myself using for almost every project.

  • ImageOptimize - This is a must for performance enthusiasts, as it provides a way to automatically transform uploaded images into compressed, responsive images and convert them to more modern formats.
  • Navigation - Craft doesn’t come with navigation management built right in, even though you technically can do it with custom fields. But Verbb did an awesome job with this simple plugin and for us it’s one of the very first plugins we reach for on any given project.
  • Seomatic - This is to Craft what the Yoast SEO plugin is to WordPress: an out-of-the-box solution for all your SEO needs.
  • Redactor - This is a must and should be included in every project. Craft doesn’t come with a WYSIWYG editor out of the box but, with this plugin, you get a rich text field powered by the Redactor editor.
  • Super Table - This powerful plugin gives you an option to create repeatable fields. You can use built-in Craft field types to create a structure (table) and the content manager creates rows of content. It reminds me of ACF Repeater for WordPress.

Craft’s author experience

While we’ve covered the development experience so far, the thing that Craft got extremely right — to the point of blowing other CMSs out of the water, in my view — is the author's experience. A CMS can do all kinds of wonderful things, but it has to be nice to write in at the end of the day.

Craft provides a straightforward set of options to configure the site right in the admin.

The whole concept of the CMS is that it is built with two simple things: fields and sections. Fields are added to sections, and entries are created by content managers.

Craft's default post editor is simple and geared toward blog posts. Need more fields or sections? Those are available to configure in the site settings, making for a more open-ended tool for any type of content publishing.

One of the neatest author features is version control. "Wait, what?" you ask. Yes, all content is version controlled in a way that lets authors track changes and roll back to previous versions for any reason at all.

Craft shows revisions for each entry.

At any point in time, you can go back to any revision and use it as the current one. You don't know how much you need this feature until you've tried it. For me, it brings a sense of security that you can't lose someone's edits or changes, the same security that Git brings developers.

The fun doesn't stop here because Craft nailed one of the hardest things (in my opinion) about content management and that is localization. People still find this hard in 2020 and usually give up because it is both difficult to implement and properly present to authors in the UI.

You can create as many sites as you want.

Oh, and you can host multiple websites in a single Craft 3 instance. You can define one or more sites at different domains, with different versions of the entry content, and using different sets of templates. Like with everything in Craft, it is made so simple and open-ended (in a good way) that it is up to you what the other sites are going to be. You can create a site with the same language but different content, or create a site with another language, solving the localization problem.

All the features above are built right into Craft, which for me is a must for a good author experience. As soon as you start patching essential author functionality with plugins, a great author experience is lost. That's because there are usually multiple plugins (and therefore multiple ways) to add a given piece of functionality, which leads to a different author experience on different instances of the same platform.

Craft’s community

It’s worth underscoring the importance of having a community of people you can turn to. To some degree, you’re probably reading this post because you enjoy learning from others in the development community. It’s no different with CMSs, and Craft has an awesome community of developers.

Craft's Stack Exchange is full of questions and answers, but a lot of the information needs to be updated to reflect Craft 3.

At the same time, the community is still small (compared to, say, WordPress) and doesn’t have a long track record — though there are many folks in the community who have been around quite a while, having come from ExpressionEngine. It’s not just that Craft itself is relatively new to the market; it’s also that not everyone posts on the Craft CMS Stack Exchange, to the extent that many of the older answers haven’t even been updated for Craft 3. You’ll actually find most of the community hanging out on Discord, where even the creators of Craft, Pixel & Tonic, are active and willing to answer questions. It is also very helpful to see Craft core members and big plugin creators, like Andrew from nystudio107 (shout out to a great performance freak), there to assist almost 24/7.

Craft's discord has always someone to help you. Even the core team responds often.

One thing I also want to touch on is the limited learning resources available but, then again, you hardly need them. As I said earlier, the combination of Craft and Twig is simple enough that you won’t need a full course on how to build a blog.

Craft's conference, Dot All, is a great resource all its own. Chris attended last year with a great presentation, which is available to the public.

And, lastly, Craft uses and enforces open source. For me, open source is always a good thing because you expose your code to more people (developers). Craft did this right: the whole platform, as well as its plugins, is open source.

Pricing

This is the elephant in the room because there are mixed feelings about charging for content management systems. And yes, Craft has a simple pricing model:

  1. It’s free for a single user account, small website.
  2. It’s $299 per project for the first year of updates. It’s only $59 each year after that, but they don't force you to pay for updates and you can enable license updates for an additional year at any time at the same price.
Craft's Solo version is totally capable of meeting the needs of many websites, but the paid Pro option is a cost-effective upgrade for advanced features and use cases.

I consider this pricing model fair and not expensive to the point of being prohibitive. Craft offers a free license for a small website you can build for a friend or a family member. On the other hand, Craft is more of a professional platform that is used to build mid-size business websites, and as such, the license cost is negligible. In most cases, developers (or agencies) will eat up the initial cost so that clients don’t have to worry about this pricing.

Oh, and kudos to Craft for providing an option to try the Pro version for free on a local domain. This also applies to all plugins.

Conclusion

To conclude, I would like to thank Craft CMS and the Pixel & Tonic team for an awesome and fun ride. Craft has satisfied almost all our needs and we will continue to use it for future projects. It’s flexible enough to fit each project and feel like a CMS built for that use case.

What it boils down to, for me, is that Craft is a content management framework. Out of the box, it is nothing more than nuts and bolts that need to be assembled to the user's needs. This is the thing that makes Craft stand out, and why it provides both a great author and developer experience.

As you saw in the licensing model, it is free for a single user, so try it out and leave your feedback in the comments.
