Managing WordPress Metadata in Gutenberg Using a Sidebar Plugin

WordPress released its anticipated overhaul of the post editor, nicknamed Gutenberg, which is also referred to as the block editor. It transforms a WordPress post into a collection of blocks that you can add, edit, remove and re-order in the layout. Before the official release, Gutenberg was available as a plugin and, during that time, I was interested in learning how to create custom blocks for the editor. I learned so much about Gutenberg that I decided to put together a course that discusses almost everything you need to know to develop blocks for it.

In this article, we will discuss metaboxes and metafields in WordPress. Specifically, we’ll cover how to replace the old PHP metaboxes in Gutenberg and extend Gutenberg’s sidebar to add a React component that will be used to manipulate the metadata using the global JavaScript Redux-like stores. Note that metadata in Gutenberg can also be manipulated using blocks. Both ways are discussed in my course; however, in this article I am going to focus on managing metadata in the sidebar, since I believe this method will be used more often.

This article assumes some familiarity with ReactJS and Redux. Gutenberg relies heavily on these technologies to render UI and manage state. You can also check out the CSS-Tricks guide to learning Gutenberg for an intro to some of the concepts we’ll cover here.

The block editor interface

Gutenberg is a React application

At its core, Gutenberg is a ReactJS application. Everything you see in the editor is rendered using a React component. The post title, the content area that contains the blocks, the toolbar at the top and the right sidebar are all React components. Data or application states in this React application are stored in centralized JavaScript objects, or "stores." These stores are managed by WordPress’ data module. This module shares a lot of core principles with Redux. So, concepts like stores, reducers, actions, action creators, etc., also exist in this module. I will sometimes refer to these stores as "Redux-like" stores.

These stores don’t just hold data about the current post, like the post content (the blocks), the post title, and the selected categories; they also store global information about a WordPress website, like all the categories, tags, posts, attachments and so on. In addition to that, UI state information, like whether the sidebar is opened or closed, is also kept in these global stores. One of the jobs of the data module is to retrieve data from these stores and also to change data in them. Since these stores are global and can be used by multiple React components, changing data in any store will be reflected in any part of the Gutenberg UI (including blocks) that uses that piece of data.

Once a post is saved, the WordPress REST API will be used to update the post using the data stored in these global stores. So the post title, the content, categories, etc., that are stored in these global stores will be sent as the payload of the WP REST API endpoint that updates the post. Thus, if we are able to manipulate data in these stores, then once the user clicks save, the data that we manipulated will be stored in the database by the API without us having to do anything.

One of the things that is not managed by these global stores in Gutenberg is metadata. If you have some metafields that you used to manage using a metabox in the pre-Gutenberg "classic" editor, those will not be stored and manipulated using the global Redux-like stores by default. However, we can opt-in to manage metadata using JavaScript and the Redux-like stores. Although those old PHP metaboxes will still appear in Gutenberg, WordPress recommends porting these PHP metaboxes to another approach that uses the global stores and React components. And this will ensure a more unified and consistent experience. You can read more about problems that could occur by using PHP metaboxes in Gutenberg.

So before we start, let’s take a look at the Redux-like stores in Gutenberg and how to use them.

Retrieving and changing data in Gutenberg’s Redux-like stores

We now know that the Gutenberg page is managed using these Redux-like stores. We have some default "core" stores that are defined by WordPress. Additionally, we can also define our own stores if we have some data that we would like to share between multiple blocks or even between blocks and other UI elements in the Gutenberg page, like the sidebar. Creating your own stores is also discussed in my course and you can read about it in the official docs. However, in this article we will focus on how to use the existing stores. Using the existing stores lets us manipulate metadata; therefore we will not need to create any custom stores.

In order to access these stores, make sure you have the latest WordPress version with Gutenberg active and edit any post or page. Then, open your browser console and type the following statement:

wp.data.select('core/editor').getBlocks()

You should get something like this:

Let’s break this down. First, we access the wp.data module which (as we discussed) is responsible for managing the Redux-like stores. This module will be available inside the global wp variable if you have Gutenberg in your WordPress installation. Then, inside this module, we call a function called select. This function receives a store name as an argument and returns all the selectors for this store. A selector is a term used by the data module and it simply means a function that gets some data from the store. So, in our example, we accessed the core/editor store, and this will return a bunch of functions that can be used to get data from this store. One of these functions is getBlocks() which we called above. This function will return an array of objects where each object represents a block in your current post. So depending on how many blocks you have in your post, this array will change.

As we’ve seen, we accessed a store called core/editor. This store contains information about the current post that you are editing. We’ve also seen how to get the blocks in the current post, but we can also get a lot of other stuff. We can get the title of the current post, the current post ID, the current post’s post type and pretty much everything else we might need.
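
For instance, here are a couple of other selectors from the same store that you can try in the console (the values returned will, of course, depend on the post you have open):

wp.data.select('core/editor').getCurrentPostId()   // e.g. 42
wp.data.select('core/editor').getCurrentPostType() // e.g. "post"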

But in the example above, we were only able to retrieve data. What if we want to change data? Let’s take a look at another selector in the ‘core/editor’ store. Let’s run this selector in our browser console:

wp.data.select('core/editor').getEditedPostAttribute('title')

This should return the title of the post currently being edited:

Great! Now what if we want to change the title using the data module? Instead of calling select(), we can call dispatch() which will also receive a store name and return some actions that you can dispatch. If you are familiar with Redux, terms like "actions" and "dispatch" will sound familiar to you. If this sounds new to you, all you need to know is that dispatching a certain action simply means changing some data in a store. In our case, we want to change the post title in the store, so we can call this function:

wp.data.dispatch('core/editor').editPost({title: 'My new title'})

Now take a look at the post title in the editor — it will be changed accordingly!

That’s how we can manipulate any piece of data in the Gutenberg interface. We retrieve the data using selectors and change that data using actions. Any change will be reflected in any part of the UI that uses this data.

There are, of course, other stores in Gutenberg that you can check out on this page. So, let’s take a quick look at a couple more stores before we move on.

The stores that you will use the most are the core/editor which we just looked at, and the core store. Unlike core/editor, the core store contains information, not only about the currently edited post, but also about the whole WordPress website in general. So, for instance, we can get all the authors on the website using:

wp.data.select('core').getAuthors()

We can also get some posts from the website like so:

wp.data.select('core').getEntityRecords('postType','post',{per_page: 5})

Make sure to run this twice if the first result was null. Some selectors like this one will send an API call first to get your posts. That means the returned value will initially be null until the API request is fulfilled:
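
If you would rather not re-run the selector by hand, here is a rough sketch of how you could wait for the request to resolve by subscribing to store updates (the console approach above is perfectly fine for experimenting):

const unsubscribe = wp.data.subscribe( () => {
  const posts = wp.data.select('core').getEntityRecords('postType', 'post', {per_page: 5});
  if ( posts !== null ) {
    console.log( posts ); // the API request has been fulfilled
    unsubscribe();
  }
} );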

Let’s look at one more store: core/edit-post. This store is responsible for the UI information in the actual editor. For example, we can have selectors that check if the sidebar is currently open:

wp.data.select('core/edit-post').isEditorSidebarOpened()

This will return true if the sidebar is open. Now try closing the sidebar, run the selector again, and it should return false.

We can also open and close the sidebar by dispatching actions in this store. With the sidebar open, run this action in the browser console and the sidebar should close:

wp.data.dispatch('core/edit-post').closeGeneralSidebar()

You’re unlikely to need this store very often, but it’s good to know that this is what Gutenberg does when you click on the sidebar icon to close it.
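
The opposite also exists: dispatching openGeneralSidebar with a sidebar name should open that sidebar again. Here, 'edit-post/document' refers to the default Document sidebar:

wp.data.dispatch('core/edit-post').openGeneralSidebar('edit-post/document')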

There are some more stores that you might need to take a look at. The core/notices store, for instance, could be useful. This can help you display error, warning and success messages in the Gutenberg page. You can also check all the other stores here.
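
As a quick taste, dispatching the createNotice action in that store should display a dismissible success notice in the editor (the message text here is just an example):

wp.data.dispatch('core/notices').createNotice('success', 'Settings saved!', { isDismissible: true })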

Try to play around with these stores in your browser until you feel comfortable using them. After that, we can see how to use them in real code outside the browser.

Let’s set up a WordPress plugin to add a Gutenberg sidebar

Now that we know how to use the Redux-like stores in Gutenberg, the next step is to add a React sidebar component in the editor. This React component will be connected to the core/editor store and it will have some input that, when changed, will dispatch some action that will manipulate metadata — like the way we manipulated the post title earlier. But to do that, we need to create a WordPress plugin that holds our code.

You can follow along by cloning or downloading the repository for this example on GitHub.

Let’s create a new folder inside the wp-content/plugins directory of the WordPress installation. I am going to call it gutenberg-sidebar. Inside this folder, let’s create the entry point for our plugin. The entry point is the PHP file that will be run when activating your plugin. It can be called index.php or plugin.php. We’re going to use plugin.php for this example and put some information about the plugin at the top as well as add some code that avoids direct access:

<?php
/**
  * Plugin Name: gutenberg-sidebar
  * Plugin URI: https://alialaa.com/
  * Description: Sidebar for the block editor.
  * Author: Ali Alaa
  * Author URI: https://alialaa.com/
  */
if( ! defined( 'ABSPATH') ) {
    exit;
}

You should find your plugin on the Plugins screen in the WordPress admin. Click on "Activate" in order for the code to run.

As you might imagine, we will write a lot of JavaScript and React from this point forward. In order to code React components easily, we will need to use JSX. JSX is not valid JavaScript that can run in your browser; it needs to be converted into plain JavaScript. We might also need to use ESNext features and import statements for importing and exporting modules.

These features will not work in all browsers, so it’s better to transform our code into old ES5 JavaScript. Thankfully, there are a lot of tools that can help us achieve that. A famous one is webpack. webpack, however, is a big topic in itself and it won’t fit the scope of this article. Therefore, we are going to use another tool that WordPress provides which is @wordpress/scripts. By installing this package, we will get a recommended webpack configuration without having to do anything in webpack ourselves. Personally, I recommend that you learn webpack and try to do the configuration yourself. This will help you understand what’s going on and give you more control. You can find a lot of resources online and it’s also discussed in my course. But for now, let’s install the WordPress webpack configuration tool.

Change to your plugin folder in Terminal:

cd path/to/your/plugin/folder

Next, we need to initialize npm in that folder in order to install @wordpress/scripts. This can be done by running this command:

npm init

This command will ask you some questions like the package name, version, license, etc. You can keep hitting Enter and leave the default values. You should have a package.json file in your folder and we can start installing npm packages. Let’s install @wordpress/scripts by running the following command:

npm install @wordpress/scripts --save-dev

This package will expose a CLI called wp-scripts which you can use in your npm scripts. There are different commands that you can run. We will focus on the build and start commands for now. The build script will transform your files so that they are minified and ready for production. Your source code’s entry point is configured in src/index.js and the transformed output will be at build/index.js. Similarly, the start script will transform your code in src/index.js to build/index.js, however, this time, the code will not be minified to save time and memory — the command will also watch for changes in your files and re-build your files every time something is changed. The start command is suitable for development while the build command is for production. To use these commands, we will replace the scripts key in the package.json file, which will look something like this if you used the default options when we initialized npm.

Change this:

"scripts": {
  "test": "echo "Error: no test specified" && exit 1"
},

...to this:

"scripts": {
  "start": "wp-scripts start",
  "build": "wp-scripts build"
},

Now we can run npm start and npm run build to start development or build files, respectively.

Let’s create a new folder in the plugin’s root called src and add an index.js file in it. We can see if things are working by sprinkling in a little JavaScript. We’ll try an alert.
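
Something as simple as this will do for now (the message itself is arbitrary):

// src/index.js
alert('Hello from the Gutenberg sidebar plugin!');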

Now run npm start in Terminal. You should find the build folder created with the compiled index.js and also sourcemap files. In addition to that, you will notice that the build/index.js file is not minified and webpack will be watching for changes. Try changing the src/index.js file and save again. The build/index.js file will be re-generated:

If you stop the watch (Ctrl + C ) in Terminal and run npm run build, the build/index.js file should now be minified.


Now that we have our JavaScript bundle, we need to enqueue this file in the Gutenberg editor. To do that, we can use the hook enqueue_block_editor_assets, which will ensure that the file is enqueued only on the Gutenberg page and not on other wp-admin pages where it isn’t needed.

We can enqueue our file like so in plugin.php:

// Note that it’s a best practice to prefix function names (e.g. myprefix)
function myprefix_enqueue_assets() {
  wp_enqueue_script(
    'myprefix-gutenberg-sidebar',
    plugins_url( 'build/index.js', __FILE__ )
  );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Visit the Gutenberg page. If all is well, you should get an alert, thanks to what we added to src/index.js earlier.

Fantastic! We’re ready to write some JavaScript code, so let’s get started!

Importing WordPress JavaScript packages

In order to add some content to the existing Gutenberg sidebar or create a new blank sidebar, we need to register a Gutenberg JavaScript plugin — and in order to do that, we need to use some functions and components from packages provided by WordPress: wp-plugins, wp-edit-post and wp-i18n. These packages will be available in the wp global variable in the browser as wp.plugins, wp.editPost and wp.i18n.

We can import what we need into src/index.js. Specifically, that’s the registerPlugin function, the PluginSidebar component and the __ translation function.

const { registerPlugin } = wp.plugins;
const { PluginSidebar } = wp.editPost;
const { __ } = wp.i18n;

It’s worth noting that we need to make sure that we have these packages as dependencies when we enqueue our JavaScript file, in order to make sure that our index.js file will be loaded after the wp-plugins, wp-edit-post and wp-i18n packages. Let’s add those to plugin.php:

function myprefix_enqueue_assets() {
  wp_enqueue_script(
    'myprefix-gutenberg-sidebar',
    plugins_url( 'build/index.js', __FILE__ ),
    array( 'wp-plugins', 'wp-edit-post', 'wp-i18n', 'wp-element' )
  );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Notice that I added wp-element in there as a dependency. I did that because we will write some React components using JSX. Typically, we’d import the entire React library when making React components. However, wp-element is an abstraction layer atop React so we never have to install or import React directly. Instead, we use wp-element as a global variable.
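
For a rough intuition of what that abstraction looks like, element creation in Gutenberg goes through wp.element rather than an imported React, so the JSX we are about to write effectively boils down to calls like this (just an illustration, not code you need to add):

// Roughly equivalent to <PluginSidebar title="Meta Options">Some Content</PluginSidebar>
const { createElement } = wp.element;
const sidebar = createElement( PluginSidebar, { title: 'Meta Options' }, 'Some Content' );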

These packages are also available as npm packages. Instead of importing functions from the global wp variable (which is only available in the browser and which your code editor knows nothing about), we can simply install these packages using npm and import them in our file. These WordPress packages are usually prefixed with @wordpress.

Let’s install the packages that we need:

npm install @wordpress/edit-post @wordpress/plugins @wordpress/i18n --save

Now we can import our packages in index.js:

import { registerPlugin } from "@wordpress/plugins";
import { PluginSidebar } from "@wordpress/edit-post";
import { __ } from "@wordpress/i18n";

The advantage of importing the packages this way is that your text editor knows what @wordpress/edit-post and @wordpress/plugins are and it can autocomplete functions and components for you — unlike importing from wp.plugins and wp.editPost which will only be available in the browser while the text editor has no clue what wp is.

Your text editor can autocomplete component names for you.

You might also think that importing these packages in your bundle will increase your bundle size, but there’s no need to worry there. The webpack config file that comes with @wordpress/scripts is instructed to skip bundling these @wordpress packages and depend on the wp global variable instead. As a result, the final bundle will not actually contain the various packages, but reference them via the wp variable.
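
Conceptually, that configuration maps each package name to the corresponding global, similar to this heavily simplified webpack externals sketch (the real configuration shipped with @wordpress/scripts is more involved; this is just to illustrate the idea):

// Simplified illustration, not the actual @wordpress/scripts config
module.exports = {
  // ...
  externals: {
    '@wordpress/plugins': 'wp.plugins',
    '@wordpress/edit-post': 'wp.editPost',
    '@wordpress/i18n': 'wp.i18n',
    '@wordpress/data': 'wp.data'
  }
};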

Great! So, I am going to stick to importing packages using npm in this article, but you are totally welcome to import from the global wp variable if you prefer. Let’s now use the functions that we imported!

Registering a Gutenberg Plugin

In order to add a new custom sidebar in Gutenberg, we first need to register a plugin — and that’s what the registerPlugin function that we imported will do. As a first argument, registerPlugin will receive a unique slug for this plugin. We can have an array of options as a second argument. Among these options, we can have an icon name (from the dashicons library) and a render function. This render function can return some components from the wp-edit-post package. In our case, we imported the PluginSidebar component from wp-edit-post and created a sidebar in the Gutenberg editor by returning this component in the render function. I also added PluginSidebar inside a React fragment since we can add other components in the render function as well. Also, the __ function imported from wp-i18n will be used so we can translate any string that we output:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebar
          title={__('Meta Options', 'textdomain')}
        >
          Some Content
        </PluginSidebar>
      </>
    )
  }
})

You should now have a new icon beside the cog icon in the Gutenberg editor screen. This smiley icon will toggle our new sidebar which will have whatever content we have inside the PluginSidebar component:

If you were to click on that star icon beside the sidebar title, the sidebar smiley icon would be removed from the top toolbar. Therefore, we need to add another way to access our sidebar in case the user un-stars it from the top toolbar, and to do that, we can import a new component from wp-edit-post called PluginSidebarMoreMenuItem. So, let’s modify our import statement:

import { PluginSidebar, PluginSidebarMoreMenuItem } from "@wordpress/edit-post";

The PluginSidebarMoreMenuItem will allow us to add an item in the Gutenberg menu that you can toggle using the three dots icon at the top-right of the page. We want to modify our plugin to include this component. We need to give PluginSidebar a name prop and give PluginSidebarMoreMenuItem a target prop with the same value:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebarMoreMenuItem
          target="myprefix-sidebar"
        >
          {__('Meta Options', 'textdomain')}
        </PluginSidebarMoreMenuItem>
        <PluginSidebar
          name="myprefix-sidebar"
          title={__('Meta Options', 'textdomain')}
        >
          Some Content
        </PluginSidebar>
      </>
    )
  }
})

In the menu now, we will have a "Meta Options" item with our smiley icon. This new item should toggle our custom sidebar since they are linked using the name and the target props:

Great! Now we have a new space in our Gutenberg page. We can replace the "some content" text in PluginSidebar and add some React components of our own!

Also, let’s make sure to check the edit-post package documentation. This package contains a lot of other components that you can add in your plugin. These components allow you to extend the existing default sidebar and add your own components to it. We can also find components that let us add items to the Gutenberg top-right menu and to the block menu.
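
For example, if you would rather add a panel to the default Document sidebar instead of (or in addition to) a custom sidebar, more recent versions of the package also expose a PluginDocumentSettingPanel component that can be returned from the same render function. A quick sketch (the name and title here are arbitrary):

import { PluginDocumentSettingPanel } from "@wordpress/edit-post";

// Inside the plugin's render function:
<PluginDocumentSettingPanel
  name="myprefix-panel"
  title={__('Meta Options', 'textdomain')}
>
  Some Content
</PluginDocumentSettingPanel>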

Handling metadata in the classic editor

Let’s take a quick look at how we used to manage metadata in the classic editor using metaboxes. First, install and activate the classic editor plugin in order to switch back to the classic editor. Then, add some code that will add a metabox in the editor page. This metabox will manage a custom field that we’ll call _myprefix_text_metafield. This metafield will just be a text field that accepts HTML markup. You can add this code in plugin.php or put it in a separate file and include it in plugin.php:

<?php
function myprefix_add_meta_box() {
  add_meta_box( 
    'myprefix_post_options_metabox', 
    'Post Options', 
    'myprefix_post_options_metabox_html', 
    'post', 
    'normal', 
    'default'
  );
}
add_action( 'add_meta_boxes', 'myprefix_add_meta_box' );
function myprefix_post_options_metabox_html($post) {
  $field_value = get_post_meta($post->ID, '_myprefix_text_metafield', true);
  wp_nonce_field( 'myprefix_update_post_metabox', 'myprefix_update_post_nonce' );
  ?>
  <p>
    <label for="myprefix_text_metafield"><?php esc_html_e( 'Text Custom Field', 'textdomain' ); ?></label>
    <br />
    <input class="widefat" type="text" name="myprefix_text_metafield" id="myprefix_text_metafield" value="<?php echo esc_attr( $field_value ); ?>" />
  </p>
  <?php
}
function myprefix_save_post_metabox($post_id, $post) {
  $edit_cap = get_post_type_object( $post->post_type )->cap->edit_post;
  if( !current_user_can( $edit_cap, $post_id )) {
    return;
  }
  if( !isset( $_POST['myprefix_update_post_nonce']) || !wp_verify_nonce( $_POST['myprefix_update_post_nonce'], 'myprefix_update_post_metabox' )) {
    return;
  }
  if(array_key_exists('myprefix_text_metafield', $_POST)) {
    update_post_meta( 
      $post_id, 
      '_myprefix_text_metafield', 
      sanitize_text_field($_POST['myprefix_text_metafield'])
    );
  }
}
add_action( 'save_post', 'myprefix_save_post_metabox', 10, 2 );

I am not going to go into detail on this code since it is out of the scope of the article, but what it’s essentially doing is:

  • Making a metabox using the add_meta_box function
  • Rendering an HTML input using the myprefix_post_options_metabox_html function
  • Controlling the metafield, called _myprefix_text_metafield
  • Using the save_post action hook to get the HTML input value and update the field using update_post_meta.

If you have the classic editor plugin installed, then you should see the metafield in the post editor:

Note that the field is prefixed with an underscore (_myprefix_text_metafield) which prevents it from being edited using the custom fields metabox that comes standard in WordPress. We add this underscore because we intend to manage the field ourselves and because it allows us to hide it from the standard Custom Fields section of the editor.

Now that we have a way to manage the field in the classic editor, let’s go ahead and deactivate the classic editor plugin and switch back to Gutenberg. The metabox will still appear in Gutenberg. However, as we discussed earlier, WordPress recommends porting this PHP-based metabox using a JavaScript approach.

That’s what we will do in the rest of the article. Now that we know how to use the Redux-like stores to manipulate data and how to add some React content in the sidebar, we can finally create a React component that will manipulate our metafield and add it in the sidebar of the Gutenberg editor.

We don’t want to completely get rid of the PHP-based field because it’s still helpful in the event that we need to use the classic editor for some reason. So, we’re going to hide the field when Gutenberg is active and show it when the classic editor is active. We can do that by updating the myprefix_add_meta_box function to use the __back_compat_meta_box option:

function myprefix_add_meta_box() {
  add_meta_box( 
    'myprefix_post_options_metabox', 
    'Post Options', 
    'myprefix_post_options_metabox_html', 
    'post', 
    'normal', 
    'default',
    array('__back_compat_meta_box' => true)
  );
}

Let’s move on to creating the React component that manages the metadata.

Getting and setting metadata using JavaScript

We have seen how to get the post title and how to change it using the wp-data module. Let’s take a look at how to do the same for custom fields. To get metafields, we can call the same selector, getEditedPostAttribute. But this time we will pass it a value of meta instead of title.

Once that’s done, test it out in the browser console:

wp.data.select('core/editor').getEditedPostAttribute('meta')

As you will see, this function will return an empty array, although we are sure that we have a custom field called _myprefix_text_metafield that we are managing using the classic editor. To make custom fields manageable using the data module, we first have to register the field in plugin.php:

function myprefix_register_meta() {
  register_meta('post', '_myprefix_text_metafield', array(
    'show_in_rest' => true,
    'type' => 'string',
    'single' => true,
  ));
}
add_action('init', 'myprefix_register_meta');

Make sure to set the show_in_rest option to true. Gutenberg fetches the fields using the WP REST API, which means we need to enable the show_in_rest option to expose the field there.

Run the console test again and we will have an object with all of our custom fields returned.

Amazing! We are able to get our custom field value, so now let’s take a look at how can we change the value in the store. We can dispatch the editPost action in the core/editor store and pass it an object with a meta key, which will be another object with the fields that we need to update:

wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'new value'}})

Now try running the getEditedPostAttribute selector again and the value should be updated to new value.

If you try saving a post after updating the field using Redux, you will get an error. And if you take a look at the Network tab in DevTools, you will find that the error is returned from the wp-json/wp/v2/posts/{id} REST endpoint, saying that we are not allowed to update _myprefix_text_metafield.

This is because WordPress treats any field that is prefixed with an underscore as a private value that cannot be updated using the REST API. We can, however, specify an auth_callback option that will allow the field to be updated through the REST API when it returns true; in our case, it returns true as long as the current user is capable of editing posts. We can also add the sanitize_text_field function as a sanitize_callback to sanitize the value before saving it to the database:

function myprefix_register_meta() {
  register_meta('post', '_myprefix_text_metafield', array(
    'show_in_rest' => true,
    'type' => 'string',
    'single' => true,
    'sanitize_callback' => 'sanitize_text_field',
    'auth_callback' => function() { 
      return current_user_can('edit_posts');
    }
  ));
}
add_action('init', 'myprefix_register_meta');

Now try the following:

  • Open a new post in WordPress.
  • Run this in the DevTools console to see the current value of the field:
wp.data.select('core/editor').getEditedPostAttribute('meta')
  • Run this in DevTools to update the value:
wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'new value'}})
  • This time there should be no errors, so save the post.
  • Refresh the page and run this in the DevTools console:
wp.data.select('core/editor').getEditedPostAttribute('meta')

Does the new value show up in the console? If so, great! Now we know how to get and set the meta field value using Redux, and we are ready to create a React component in the sidebar to do that.

Creating a React component to manage the custom fields

What we need to do next is create a React component that contains a text field that is controlled by the value of the metafield in the Redux store. It should have the value of the meta field...and hey, we already know how to get that! We could create the component in a separate file and then import it in index.js. However, I am simply going to create it directly in index.js since we’re dealing with a very small example.

Again, we’re only working with a single text field, so let’s import a component provided by a WordPress package called @wordpress/components. This package contains a lot of reusable components that are Gutenberg-ready without us having to write them from scratch. It’s a good idea to use components from this package in order to be consistent with the rest of the Gutenberg UI.

First, let’s install this package:

npm install --save @wordpress/components

We’ll import TextControl and PanelBody at the top of index.js to fetch the two components we need from the package:

import { PanelBody, TextControl } from "@wordpress/components";

Now let’s create our component. I am going to create a React functional component and call it PluginMetaFields, but you can use a class component if you’d prefer that.

let PluginMetaFields = (props) => {
  return (
    <>
      <PanelBody
        title={__("Meta Fields Panel", "textdomain")}
        icon="admin-post"
        initialOpen={ true }
      >
        <TextControl 
          value={wp.data.select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']}
          label={__("Text Meta", "textdomain")}
        />
      </PanelBody>
    </>
  )
}

PanelBody takes title, icon and initialOpen props. Title and icon are pretty self-explanatory. initialOpen puts the panel in an open/expanded state by default. Inside the panel, we have TextControl, which receives a label and a value for the input. As you can see in the snippet above, we get the value from the global store by accessing the _myprefix_text_metafield field from the object returned by wp.data.select('core/editor').getEditedPostAttribute('meta').

Notice that we are now depending on @wordpress/components and using wp.data. We must add these packages as dependencies when we enqueue our file in plugin.php:

function myprefix_enqueue_assets() {
  wp_enqueue_script(
    'myprefix-gutenberg-sidebar',
    plugins_url( 'build/index.js', __FILE__ ),
    array( 'wp-plugins', 'wp-edit-post', 'wp-i18n', 'wp-element', 'wp-components', 'wp-data' )
  );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Let’s officially add the component to the sidebar instead of the dummy text we put in earlier as a quick example:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebarMoreMenuItem
          target="myprefix-sidebar"
        >
          {__('Meta Options', 'textdomain')}
        </PluginSidebarMoreMenuItem>
        <PluginSidebar
          name="myprefix-sidebar"
          title={__('Meta Options', 'textdomain')}
        >
          <PluginMetaFields />
        </PluginSidebar>
      </>
    )
  }
})

This should give you a "Meta Options" sidebar that contains a "Meta Fields Panel" section with a pin icon, and a text input with a "Text Meta" label and a default value of "new value."

Nothing will happen when you type into the text input because we are not yet handling updates to the field. We’ll do that next; however, we first need to take care of another problem. Try to run editPost in the DevTools console again, but with a new value:

wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'a newer value'}})

You will notice that the value in the text field will not update to the new value. That’s the problem. We need the field to be controlled by the value in the Redux store, but we don’t see that reflected in the component. What’s up with that?

If you have used Redux with React before, then you probably know that we need to use a higher order component called connect in order to use Redux store values in a React component. The same goes for React components in Gutenberg — we have to use some higher order component to connect our component with the Redux-like store; we can’t simply call wp.data.select directly as we did before. This higher order component lives in the wp.data global variable, which is also available as an npm package called @wordpress/data. So let’s install it to help us solve the issue.

npm install --save @wordpress/data

The higher order component we need is called withSelect, so let’s import it in index.js.

import { withSelect } from "@wordpress/data";

Remember that we already added wp-data as a dependency in wp_enqueue_script, so we can go ahead and use withSelect by wrapping our component with it, like so:

PluginMetaFields = withSelect(
  (select) => {
    return {
      text_metafield: select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']
    }
  }
)(PluginMetaFields);

Here, we’re overriding our PluginMetaFields component and assigning it the same component, now wrapped with the withSelect higher order component. withSelect receives a function as an argument. This function receives the select function (the one we used before as wp.data.select) and it should return an object. Each key in this object will be injected as a prop in the component (similar to connect in Redux). withSelect then returns a function to which we can pass the component (PluginMetaFields) again, as seen above. So, by having this higher order component, we now get text_metafield as a prop in the component, and whenever the meta value in the Redux store is updated, the prop will also get updated — thus, the component will re-render since components update whenever a prop changes.

let PluginMetaFields = (props) => {
  return (
    <>
      <PanelBody
        title={__("Meta Fields Panel", "textdomain")}
        icon="admin-post"
      initialOpen={ true }
      >
        <TextControl 
          value={props.text_metafield}
          label={__("Text Meta", "textdomain")}
        />
      </PanelBody>
    </>
  )
}

If you now try and run editPost with a new meta value in your browser, the value of the text field in the sidebar should also be updated accordingly!

So far, so good. Now we know how to connect our React components with our Redux-like stores. We are now left with updating the meta value in the store whenever we type in the text field.

Dispatching actions in React components

We now need to dispatch the editPost action whenever we type into the text field. Similar to wp.data.select, we also should not call wp.data.dispatch directly in our component like so:

// Do not do this
<TextControl 
    value={props.text_metafield}
    label={__("Text Meta", "textdomain")}
    onChange={(value) => wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
    }
/>

We will instead wrap our component with another higher order component from the @wordpress/data package called withDispatch. We’ve gotta import that, again, in index.js:

import { withSelect, withDispatch } from "@wordpress/data";

In order to use it, we can wrap our component — which is already wrapped with withSelect — again with withDispatch, like so:

PluginMetaFields = withDispatch(
  (dispatch) => {
    return {
      onMetaFieldChange: (value) => {
        dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
      }
    }
  }
)(PluginMetaFields);

You can also check out yet another WordPress package called @wordpress/compose. It makes using multiple higher order components on a single component a bit cleaner. I will leave switching to it up to you for the sake of keeping our example simple.
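
Still, for reference, here is a minimal sketch of how composing could look, assuming the same withSelect and withDispatch mapping functions used in this section:

import { compose } from "@wordpress/compose";
import { withSelect, withDispatch } from "@wordpress/data";

PluginMetaFields = compose(
  withSelect( (select) => ({
    text_metafield: select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']
  }) ),
  withDispatch( (dispatch) => ({
    onMetaFieldChange: (value) => {
      dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
    }
  }) )
)(PluginMetaFields);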

withDispatch is similar to withSelect in that it will receive a function that has the dispatch function as an argument. That allows us to return an object from this function that contains functions that will be available inside the component’s props. I went about this by creating a function with an arbitrary name (onMetaFieldChange) that will receive a value, dispatch the editPost action, and set the meta value in the Redux store to the value received in the function’s argument. We can call this function in the component and pass it the value of the text field inside the onChange callback:

<TextControl 
  value={props.text_metafield}
  label={__("Text Meta", "textdomain")}
  onChange={(value) => props.onMetaFieldChange(value)}
/>

Confirm everything is working fine by opening the custom sidebar in the WordPress post editor, updating the field, saving the post and then refreshing the page to make sure the value is saved in the database!

Let’s add a color picker

It should be clear now that we can update a meta field using JavaScript, but we’ve only looked at a simple text field so far. The @wordpress/components library provides a lot of very useful components, including dropdowns, checkboxes, radio buttons, and so on. Let’s level up and conclude this tutorial by taking a look at how we can use the color picker component that’s included in the library.
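
Just to give an idea of the pattern, a toggle from that library would be wired up the same way as our text field. In this sketch, featured_metafield and onFeaturedChange are hypothetical props; they would need their own registered meta field plus withSelect/withDispatch mappings:

import { ToggleControl } from "@wordpress/components";

<ToggleControl
  label={__("Featured", "textdomain")}
  checked={props.featured_metafield} // hypothetical boolean meta field
  onChange={(value) => props.onFeaturedChange(value)} // hypothetical dispatch mapping
/>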

You probably know what to do. First, we import this component in index.js:

import { PanelBody, TextControl, ColorPicker } from "@wordpress/components";

Now, instead of registering a new custom field, let’s aim for simplicity and assume that this color picker will be controlled by the same _myprefix_text_metafield field we worked with earlier. We can use the ColorPicker component inside our PanelBody and it will be very similar to what we saw with TextControl, but the prop names will be slightly different. We have a color prop instead of value and onChangeComplete instead of onChange. Also, onChangeComplete will receive a color object that contains some information about the chosen color. This object has a hex property we can use to store the color value in the _myprefix_text_metafield field.

Catch all that? It boils down to this:

<ColorPicker
  color={props.text_metafield}
  label={__("Colour Meta", "textdomain")}
  onChangeComplete={(color) => props.onMetaFieldChange(color.hex)}
/>

We should now have a color picker in our sidebar, and since it’s controlling the same meta field as the TextControl component, our old text field should update whenever we pick a new color.

That’s a wrap!

If you have reached this far in the article, then congratulations! I hope you enjoyed it. Make sure to check out my course if you want to learn more about Gutenberg and custom blocks. You can also find the final code for this article over at GitHub.


Gulp for WordPress: Creating the Tasks

This is the second post in a two-part series about creating a Gulp workflow for WordPress theme development. Part one focused on the initial installation, setup, and organization of Gulp in a WordPress theme project. This post goes deep into the tasks Gulp will run by breaking down what each task does and how to tailor them to streamline theme development.

Now that we spent the first part of this series setting up a WordPress theme project with Gulp installed in it, it's time to dive into the tasks we want it to do for us as we develop the theme. We're going to get our hands extremely dirty in this post, so get ready to write some code!

Article Series:

  1. Initial Setup
  2. Creating the Tasks (This Post)

Creating the style task

Let’s start by compiling src/scss/bundle.scss from Sass to CSS, then minifying the CSS output for production mode and putting the completed bundle.css file into the dist directory.

We’re going to use a couple of Gulp plugins to do the heavy lifting. We’ll use gulp-sass to compile things and gulp-clean-css to minify. Then, gulp-if will allow us to conditionally run functions which, in our case, will check whether we are in production or development mode before those tasks run and execute accordingly.

We can install all three plugins in one fell swoop:

npm install --save-dev gulp-sass gulp-clean-css gulp-if

Let’s make sure we have something in our bundle.scss file so we can test the tasks:

$colour: #f03;

body {
  background-color: $colour;
}

Alright, back to the Gulpfile to import the plugins and define the task that runs them:

import { src, dest } from 'gulp';
import yargs from 'yargs';
import sass from 'gulp-sass';
import cleanCss from 'gulp-clean-css';
import gulpif from 'gulp-if';
const PRODUCTION = yargs.argv.prod;

export const styles = () => {
  return src('src/scss/bundle.scss')
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(dest('dist/css'));
}

Let’s walk through this code to explain what’s happening.

  • The src and dest functions are imported from Gulp. src will read the file that you pass as an argument and return a node stream.
  • We pull in yargs to create our flag that separates tasks between the development and production modes.
  • The three plugins are called into action.
  • The PRODUCTION constant is defined by checking (via yargs) whether the --prod flag was passed on the command line.
  • We define styles as the task name we will use to run these tasks in the command line.
  • We tell the task what file we want processed (bundle.scss) and where it lives (src/scss/bundle.scss).
  • We create "pipes" that serve as the plungs that run when the styles command is executed. Those pipes run in the order they are written: convert Sass to CSS, minify the CSS (if we’re in production), and place the resulting CSS file into the dist/css directory.

Go ahead. Run gulp styles in the command line and see that a new CSS file has been added to the dist/css directory.

Now do gulp styles --prod. The same thing happens, but now that CSS file has been minified for production use.

Now, assuming you have a functioning WordPress theme with header.php and footer.php, the CSS file (as well as JavaScript files when we get to those tasks) can be safely enqueued, likely in your functions.php file:

function _themename_assets() {
  wp_enqueue_style( '_themename-stylesheet', get_template_directory_uri() . '/dist/css/bundle.css', array(), '1.0.0', 'all' );
}
add_action('wp_enqueue_scripts', '_themename_assets');

That’s all good, but we can make our style command even better.

For example, try inspecting the body on the homepage with the WordPress theme active. The styles that we added should be there:

As you can see, it says that our style is coming from bundle.css, which is true. However, it would be much better if the name of the original SCSS file were displayed here instead for our development purposes — it makes it so much easier to locate code, particularly when we’re working with a ton of partials. This is where source maps come into play. They will detail the location of our styles in DevTools. To further illustrate this issue, let's also add some SCSS inside src/scss/components/slider.scss and then import this file in bundle.scss.

//src/scss/components/slider.scss
body {
  background-color: aqua;
}
//src/scss/bundle.scss
@import './components/slider.scss';
$colour: #f03;
body {
  background-color: $colour;
}

Run gulp styles again to recompile your files. Your inspector should then look like this:

The DevTools inspector will show that both styles are coming from bundle.css. But we would like it to show the original files instead (i.e., bundle.scss and slider.scss). So let’s add that to our wish list of improvements before we get to the code.

The other thing we’ll want is vendor prefixing to be handled for us. There’s nothing worse than having to write and manage all of those on our own, and Autoprefixer is the tool that can do it for us.

And, in order for Autoprefixer to work its magic, we’ll need the PostCSS plugin.

OK, that adds up to three more plugins we need. Let’s install all three:

npm install --save-dev gulp-sourcemaps gulp-postcss autoprefixer

So gulp-sourcemaps will obviously be used for source maps. gulp-postcss and autoprefixer will be used to add vendor prefixing to our CSS. PostCSS is a popular tool for transforming CSS files and Autoprefixer is just one of its plugins. You can read more about the other things that you can do with PostCSS here.

Now at the very top let’s import our plugins into the Gulpfile:

import postcss from 'gulp-postcss';
import sourcemaps from 'gulp-sourcemaps';
import autoprefixer from 'autoprefixer';

And then let’s update the task to use these plugins:

export const styles = () => {
  return src('src/scss/bundle.scss')
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'));
}

To use the sourcemaps plugin, we have to follow some steps:

  1. First, we initialize the plugin using sourcemaps.init().
  2. Next, pipe all the plugins that you would like to map.
  3. Finally, create the source map file by calling sourcemaps.write() just before writing the bundle to the destination.

Note that all the plugins piped between sourcemaps.init() and sourcemaps.write() should be compatible with gulp-sourcemaps. In our case, we are using sass(), postcss() and cleanCss() and all of them are compatible with sourcemaps.

Notice that we only run Autoprefixer behind the production flag since there’s really no need for all those vendor prefixes during development.

Let’s run gulp styles now, without the production flag. Here’s the output in bundle.css:

body {
  background-color: aqua; }
body {
  background-color: #f03; }
/*#sourceMappingURL=data:application/json;charset=utf8;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVuZGxlLmNzcyIsInNvdXJjZXMiOlsiYnVuZGxlLnNjc3MiLCJjb21wb25lbnRzL3NsaWRlci5zY3NzIl0sInNvdXJjZXNDb250ZW50IjpbIkBpbXBvcnQgJy4vY29tcG9uZW50cy9zbGlkZXIuc2Nzcyc7XG5cbiRjb2xvdXI6ICNmMDM7XG5ib2R5IHtcbiAgICBiYWNrZ3JvdW5kLWNvbG9yOiAkY29sb3VyO1xufVxuOjpwbGFjZWhvbGRlciB7XG4gICAgY29sb3I6IGdyYXk7XG59IiwiYm9keSB7XG4gICAgYmFja2dyb3VuZC1jb2xvcjogYXF1YTtcbn0iXSwibmFtZXMiOltdLCJtYXBwaW5ncyI6IkFDQUEsQUFBQSxJQUFJLENBQUM7RUFDRCxnQkFBZ0IsRUFBRSxJQUFJLEdBQ3pCOztBRENELEFBQUEsSUFBSSxDQUFDO0VBQ0QsZ0JBQWdCLEVBRlgsSUFBSSxHQUdaOztBQUNELEFBQUEsYUFBYSxDQUFDO0VBQ1YsS0FBSyxFQUFFLElBQUksR0FDZCJ9 */#

The extra text at the bottom is the inlined source map. Now, when we inspect the site in DevTools, we see:

Nice! Now onto production mode:

gulp styles --prod

Check DevTools against style rules that require prefixing (e.g. display: grid;) and confirm those are all there. And make sure that your file is minified as well.

One final note for this task. Let’s assume we want multiple CSS bundles: one for front-end styles and one for WordPress admin styles. We can add a new admin.scss file in the src/scss directory and pass an array of paths in the Gulpfile:

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'));
}

Now we have bundle.css and admin.css in the dist/css directory. Just make sure to properly enqueue any new bundles that are separated out like this.

Creating the watch task

Alright, next up is the watch task, which makes our life so much easier by looking for files with saved changes, then executing tasks on our behalf without having to call them ourselves in the command line. How great is that?

Like we did for the styles task:

import { src, dest, watch } from 'gulp';

We’ll call the new task watchForChanges:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
}

Note that watch is unavailable as a task name since we already imported a function with that name from Gulp.

Now, if we run gulp watchForChanges, the command line will be on a constant, ongoing watch for changes in any .scss file inside the src/scss directory. And, when those changes happen, the styles task will run right away with no further action on our part.

Note that src/scss/**/*.scss is a glob pattern. That basically means that this string will match any .scss file inside the src/scss directory or any sub-folder in it. Right now, we are only watching for .scss files and running the styles task. Later, we’ll expand its scope to watch for other files as well.

Creating the images task

As we covered earlier, the images task will compress images in src/images and then move them to dist/images. Let’s install a gulp plugin that will be responsible for compressing images:

npm install --save-dev gulp-imagemin

Now, import this plugin at the top of the Gulpfile:

import imagemin from 'gulp-imagemin';

And finally, let’s write our images task:

export const images = () => {
  return src('src/images/**/*.{jpg,jpeg,png,svg,gif}')
    .pipe(gulpif(PRODUCTION, imagemin()))
    .pipe(dest('dist/images'));
}

We give the src() function a glob that matches all .jpg, .jpeg, .png, .svg and .gif images in the src/images directory. Then, we run the imagemin plugin, but only for production. Compressing images can take some time and isn’t necessary during development, so we can leave it out of the development flow. Finally, we put the compressed versions of images in dist/images.

Now any images that we drop into src/images will be copied when we run gulp images. However, running gulp images --prod will both compress and copy the images over.

The last thing we need to do is modify our watchForChanges task to include images in its watch:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
}

Now, assuming the watchForChanges task is running, the images task will be run automatically whenever we add an image to the src/images folder. It does all the lifting for us!

Important: If the watchForChanges task is running when the Gulpfile is modified, it will need to be stopped and restarted in order for the changes to take effect.

Creating the copy task

You have probably been in situations where you’ve created files, processed them, then needed to manually grab the production files and put them where they need to be. Well, just like we saw with the images task, we can create a copy task to do this for us and help prevent moving the wrong files.

export const copy = () => {
  return src(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'])
    .pipe(dest('dist'));
}

Try to read the array of paths supplied to src() carefully. We are telling Gulp to match all files and folders inside src (src/**/*), except the images, js and scss folders (!src/{images,js,scss}) and any of the files or sub-folders inside them (!src/{images,js,scss}/**/*).

We want our watch task to look for these changes as well, so we’ll add it to the mix:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], copy);
}

Try adding any file or folder to the src directory and it should be copied over to the /dist directory. If, however, we were to add a file or folder inside of /images, /js or /scss, it would be ignored since we already handle these folders in separate tasks.

We still have a problem here, though. Try deleting the added file and nothing will happen in dist; our task only handles copying. This problem also applies to our /images, /js and /scss folders. If we have old images or JavaScript and CSS bundles that were removed from the src folder, they won’t get removed from the dist folder. Therefore, it’s a good idea to completely clean the dist folder every time we start developing or building a theme. And that’s what we are going to do in the next task.

Composing tasks for developing and building

Let’s now install a package that will be responsible for deleting the dist folder. This package is called del:

npm install --save-dev del

Import it at the top:

import del from 'del';

Create a task that will delete the dist folder:

export const clean = () => {
  return del(['dist']);
}

Notice that del returns a promise. Thus, we don’t have to call the cb() function. Using arrow function shorthand, we can refactor this to:

export const clean = () => del(['dist']);

The folder should be deleted now when running gulp clean. What we need to do next is delete the dist folder, run the images, copy and styles tasks, and finally watch for changes every time we start developing. This could be done by running gulp clean, gulp images, gulp styles, gulp copy and then gulp watchForChanges. But, of course, we will not do that manually. Gulp has a couple of functions that will help us compose tasks. So, let’s import these functions from Gulp:

import { src, dest, watch, series, parallel } from 'gulp';

series() will take some tasks as arguments and run them in series (one after another). And parallel() will take tasks as arguments and run them all at once. Let’s create two new tasks by composing the tasks that we already created:

export const dev = series(clean, parallel(styles, images, copy), watchForChanges)
export const build = series(clean, parallel(styles, images, copy))
export default dev;

Both tasks start the same way: clean the dist folder, then run styles, images and copy in parallel once the cleaning is complete. The dev (short for development) task will additionally start watching for changes after those parallel tasks finish. We are also exporting dev as the default task.

Notice that when we run the build task, we want our files to be minified, images to be compressed, and so on. So, when we run this command, we will have to add the --prod flag. Since this can easily be forgotten when running the build task, we can use npm scripts to create aliases for the dev and build commands. Let’s go to package.json, and in the scripts field, we will probably find something like this:

"scripts": {
  "test": "echo "Error: no test specified" && exit 1"
}

Let’s change it to this:

"scripts": {
  "start": "gulp",
  "build": "gulp build --prod"
},

This will allow us to run npm run start in the command line, which will go to the scripts field and find what command corresponds to start. In our case, start will run gulp and gulp will run the default gulp task, which is dev. Similarly, npm run build will run gulp build --prod. This way, we can completely forget about the --prod flag and also forget about running the Gulp tasks using the gulp command. Of course, our dev and build commands will do more than that later on, but for now, we have the foundation that we will work with throughout the rest of the tasks.

Creating the scripts task

As mentioned, in order to bundle our JavaScript files, we are going to need a module bundler. webpack is the most famous option out there; however, it is not a Gulp plugin. Rather, it’s a tool of its own with a completely separate setup and configuration file. Luckily, there is a package called webpack-stream that helps us use webpack within a Gulp task. So, let’s install this package:

npm install --save-dev webpack-stream

webpack works with something called loaders. Loaders are responsible for transforming files in webpack, and to transform newer JavaScript syntax into ES5, we will need a loader called babel-loader. We will also need @babel/preset-env, but we already installed that earlier:

npm install --save-dev babel-loader

Let’s import webpack-stream at the top of the Gulpfile:

import webpack from 'webpack-stream';

Also, to test our task, let’s add these lines to src/js/bundle.js and src/js/components/slider.js:

// bundle.js
import './components/slider';
console.log('bundle');


// slider.js
console.log('slider')

Our scripts task will finally look like so:

export const scripts = () => {
  return src('src/js/bundle.js')
  .pipe(webpack({
    module: {
      rules: [
        {
          test: /\.js$/,
          use: {
            loader: 'babel-loader',
            options: {
              presets: ['@babel/preset-env']
            }
          }
        }
      ]
    },
    mode: PRODUCTION ? 'production' : 'development',
    devtool: !PRODUCTION ? 'inline-source-map' : false,
    output: {
      filename: 'bundle.js'
    },
  }))
  .pipe(dest('dist/js'));
}

Let’s break this down a bit:

  • First, we specify bundle.js as our entry point in the src() function.
  • Then, we pipe the webpack plugin and specify some options for it.
  • The rules field in the module option lets webpack know which loaders to use to transform our files. In our case, we need to transform JavaScript files using babel-loader.
  • The mode option is either production or development. For development, webpack will not minify the output JavaScript bundle, but it will for production. That means we don’t need a separate Gulp plugin to minify JavaScript; webpack can do it for us depending on our PRODUCTION constant.
  • The devtool option adds source maps, but not in production. In development, we use inline-source-map, which is the most accurate kind of source map, though it can be a bit slow to generate. If you find it too slow, check the other devtool options in the webpack documentation; they won’t be as accurate as inline-source-map, but they can be considerably faster.
  • Finally, the output option specifies some information about the output file. In our case, we only need to change the filename. If we don’t specify a filename, webpack will generate a bundle with a hash for a filename. You can read more about these options in the webpack documentation.

Now we should be able to run gulp scripts and gulp scripts --prod and see a bundle.js file created in dist/js. Make sure that minification and source maps are working properly. Let’s now enqueue our JavaScript file in WordPress, which can be done in the theme’s functions.php file or wherever you write your functions.

<?php
function _themename_assets() {
  wp_enqueue_style( '_themename-stylesheet', get_template_directory_uri() . '/dist/css/bundle.css', array(), '1.0.0', 'all' );
  
  wp_enqueue_script( '_themename-scripts', get_template_directory_uri() . '/dist/js/bundle.js', array(), '1.0.0', true );
}
add_action('wp_enqueue_scripts', '_themename_assets');

Now, open the browser console and confirm that source maps are working correctly by checking which file each console log comes from. With source maps enabled, the slider log should point to slider.js; without them, both logs would appear to come from bundle.js.

What if we would like to create multiple JavaScript bundles the same way we do for the styles? Let’s create a file called admin.js in src/js. You might think that we can simply change the entry point in the src() to an array like so:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
  // ...
}

However, this will not work. webpack behaves a bit differently than normal Gulp plugins, so what we did above will still create one file called bundle.js in the dist folder. webpack-stream provides a couple of solutions for creating multiple entry points, and I chose the one based on vinyl-named since it allows us to create multiple bundles by passing an array to src() the same way we did for the styles. So, let’s install vinyl-named:

npm install --save-dev vinyl-named

Import it:

import named from 'vinyl-named';

...and then update the scripts task:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
  .pipe(named())
  .pipe(webpack({
    module: {
      rules: [
        {
          test: /\.js$/,
          use: {
            loader: 'babel-loader',
            options: {
              presets: ['@babel/preset-env']
            }
          }
        }
      ]
    },
    mode: PRODUCTION ? 'production' : 'development',
    devtool: !PRODUCTION ? 'inline-source-map' : false,
    output: {
      filename: '[name].js'
    },
  }))
  .pipe(dest('dist/js'));
}

The only difference is that we now have an array in the src(). We then pipe the named plugin before webpack, which allows us to use a [name] placeholder in the output field’s filename instead of hardcoding the file name directly. After running the task, we get two bundles in dist/js.

Another feature that webpack provides is using libraries from external sources rather than bundling them into the final bundle. For example, let’s say your bundle needs to use jQuery. You could run npm install jquery --save and then import it into your bundle with import $ from 'jquery'. However, this increases the bundle size and, in some cases, you may already have jQuery loaded via a CDN or, in the case of WordPress, registered as a script dependency like so:

wp_enqueue_script( '_themename-scripts', get_template_directory_uri() . '/dist/js/bundle.js', array('jquery'), '1.0.0', true );

So, now WordPress will enqueue jQuery using a normal script tag. How can we then use it inside our bundle using import $ from 'jquery'? The answer is by using webpack’s externals option. Let’s modify our scripts task to add it in:

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
    .pipe(named())
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: '[name].js'
      },
      externals: {
        jquery: 'jQuery'
      },
    }))
    .pipe(dest('dist/js'));
}

In the externals option, the key jquery is the module name we import in our code, as in import $ from 'jquery'. The value jQuery is the name of the global variable where the library lives. Now try importing $ from 'jquery' in the bundle and using jQuery through $; it should work perfectly.
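
For instance, here is a minimal sketch of what src/js/bundle.js could look like once jQuery is treated as an external (the .site-header selector and the class name are just hypothetical examples):

// src/js/bundle.js
import './components/slider';
import $ from 'jquery'; // resolved at runtime from the global jQuery that WordPress enqueues

$(function () {
  // hypothetical usage, just to confirm the external mapping works
  $('.site-header').addClass('js-enabled');
});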

Let’s watch for changes for JavaScript files as well:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', images);
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], copy);
  watch('src/js/**/*.js', scripts);
}

And, finally, add our scripts task in the dev and build tasks:

export const dev = series(clean, parallel(styles, images, copy, scripts), watchForChanges);
export const build = series(clean, parallel(styles, images, copy, scripts));

Refreshing the browser with Browsersync

Let’s now improve our watch task by installing Browsersync, a plugin that refreshes the browser each time tasks finish running.

npm install browser-sync gulp --save-dev

As usual, let’s import it:

import browserSync from "browser-sync";

Next, we will initialize a Browsersync server and write two new tasks:

const server = browserSync.create();
export const serve = done => {
  server.init({
    proxy: "http://localhost/yourFolderName" // put your local website link here
  });
  done();
};
export const reload = done => {
  server.reload();
  done();
};

In order to control the browser using Browsersync, we have to initialize a Browsersync server. This is separate from the local server where WordPress typically lives. The first task, serve, starts the Browsersync server and points it to our local WordPress site using the proxy option. The second task simply reloads the browser.

Now we need to run this server when we are developing our theme. We can add the serve task to the dev series tasks:

export const dev = series(clean, parallel(styles, images, copy, scripts), serve, watchForChanges);

Now run npm start and the browser should open up with a new URL that’s different from the original one. This is the URL that Browsersync will control and refresh. Next, let’s use the reload task to reload the browser once tasks are done:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', series(styles, reload));
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', series(images, reload));
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], series(copy, reload));
  watch('src/js/**/*.js', series(scripts, reload));
  watch("**/*.php", reload);
}

As you can see, we added a new line to run the reload task every time a PHP file changes. We are also using series() to wait for our styles, images, scripts and copy tasks to finish before reloading the browser. Now, run npm start and change something in a Sass file. The browser should reload automatically once the tasks have finished running, and the changes should be reflected after the refresh.

Don’t see CSS or JavaScript changes after refresh? Make sure caching is disabled in your browser’s inspector.

We can make one more improvement to the styles task. Browsersync allows us to inject CSS directly into the page without having to reload the browser at all. This can be done by adding server.stream() at the very end of the styles task:

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'))
    .pipe(server.stream());
}

Now, in the watchForChanges task, we won’t have to reload for the styles task any more, so let’s remove the reload task from it:

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  // ...
}

Make sure to stop watchForChanges if it’s already running and then run it again. Try to modify any file in the scss folder and the changes should appear immediately in the browser without even reloading.

Packaging the theme in a ZIP file

WordPress themes are generally packaged up as a ZIP file that can be installed directly in the WordPress admin. We can create a task that will take the required theme files and ZIP them up for us. To do that we need to install another Gulp plugin: gulp-zip.

npm install --save-dev gulp-zip

And, as always, import it at the top:

import zip from "gulp-zip";

Let’s also import the JSON object from the package.json file. We need it in order to grab the name of the package, which is also the name of our theme:

import info from "./package.json";

Now, let’s write our task:

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
  .pipe(zip(`${info.name}.zip`))
  .pipe(dest('bundled'));
};

We are passing src() the files and folders that we need to compress, which is basically everything (**/*) except for the entries preceded by !. A pattern like !node_modules{,/**} excludes both the node_modules folder itself and everything inside it. Next, we pipe the gulp-zip plugin and name the file after the theme name taken from the package.json file (info.name). The result is a fresh ZIP file in a new folder called bundled.

Try running gulp compress and make sure it all works. Open up the generated ZIP file and make sure that it only contains the files and folders needed to run the theme.

Normally, though, we only need to ZIP things up after the theme files have been built. So, let’s add the compress task to the build task so it only runs when we need it:

export const build = series(clean, parallel(styles, images, copy, scripts), compress);

Running npm run build should now run all of our tasks in production mode.

Replacing the placeholder prefix in the ZIP file

One step we need to take before zipping our files is to scan them and replace the _themename placeholder with the actual theme name we plan to use. As you may have guessed, there is a Gulp plugin that does that for us, called gulp-replace.

npm install --save-dev gulp-replace

Then import it:

import replace from "gulp-replace";

We want this task to run immediately before our files are zipped, so let’s modify the compress task by slotting it in the right place:

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
  .pipe(replace("_themename", info.name))
  .pipe(zip(`${info.name}.zip`))
  .pipe(dest('bundled'));
};

Try building the theme now with npm run build, then unzip the file inside the bundled folder. Open any PHP file where the _themename placeholder was used and make sure it has been replaced with the actual theme name.

There is a gotcha I noticed in the replace plugin while working with it: if there are ZIP files inside the theme (e.g. you are bundling WordPress plugins with your theme), they will get corrupted when they pass through the replace plugin. That can be resolved by skipping ZIP files with a gulp-if check:

.pipe(
  gulpif(
    file => file.relative.split(".").pop() !== "zip",
    replace("_themename", info.name)
  )
)

Generating a POT file

Translation is a big thing in the WordPress community, so for our final task, let’s scan through all of our PHP files and generate a POT file that gets used for translation. Luckily, there is a Gulp plugin for that as well:

npm install --save-dev gulp-wp-pot

And, of course, import it:

import wpPot from "gulp-wp-pot";

Here’s our final task:

export const pot = () => {
  return src("**/*.php")
    .pipe(
      wpPot({
        domain: "_themename",
        package: info.name
      })
    )
    .pipe(dest(`languages/${info.name}.pot`));
};

We want the POT file to generate every time we build the theme:

export const build = series(clean, parallel(styles, images, copy, scripts), pot, compress);

Summing up

Here’s the complete Gulpfile, including all of the tasks we covered in this post:

import { src, dest, watch, series, parallel } from 'gulp';
import yargs from 'yargs';
import sass from 'gulp-sass';
import cleanCss from 'gulp-clean-css';
import gulpif from 'gulp-if';
import postcss from 'gulp-postcss';
import sourcemaps from 'gulp-sourcemaps';
import autoprefixer from 'autoprefixer';
import imagemin from 'gulp-imagemin';
import del from 'del';
import webpack from 'webpack-stream';
import named from 'vinyl-named';
import browserSync from "browser-sync";
import zip from "gulp-zip";
import info from "./package.json";
import replace from "gulp-replace";
import wpPot from "gulp-wp-pot";

const PRODUCTION = yargs.argv.prod;
const server = browserSync.create();

export const serve = done => {
  server.init({
    proxy: "http://localhost:8888/starter"
  });
  done();
};
export const reload = done => {
  server.reload();
  done();
};

export const clean = () => del(['dist']);

export const styles = () => {
  return src(['src/scss/bundle.scss', 'src/scss/admin.scss'])
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    .pipe(sass().on('error', sass.logError))
    .pipe(gulpif(PRODUCTION, postcss([ autoprefixer ])))
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    .pipe(dest('dist/css'))
    .pipe(server.stream());
}

export const images = () => {
  return src('src/images/**/*.{jpg,jpeg,png,svg,gif}')
    .pipe(gulpif(PRODUCTION, imagemin()))
    .pipe(dest('dist/images'));
}

export const copy = () => {
  return src(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'])
    .pipe(dest('dist'));
}

export const scripts = () => {
  return src(['src/js/bundle.js','src/js/admin.js'])
    .pipe(named())
    .pipe(webpack({
      module: {
        rules: [
          {
            test: /\.js$/,
            use: {
              loader: 'babel-loader',
              options: {
                presets: ['@babel/preset-env']
              }
            }
          }
        ]
      },
      mode: PRODUCTION ? 'production' : 'development',
      devtool: !PRODUCTION ? 'inline-source-map' : false,
      output: {
        filename: '[name].js'
      },
      externals: {
        jquery: 'jQuery'
      },
    }))
    .pipe(dest('dist/js'));
}

export const compress = () => {
  return src([
    "**/*",
    "!node_modules{,/**}",
    "!bundled{,/**}",
    "!src{,/**}",
    "!.babelrc",
    "!.gitignore",
    "!gulpfile.babel.js",
    "!package.json",
    "!package-lock.json",
  ])
  .pipe(
    gulpif(
      file => file.relative.split(".").pop() !== "zip",
      replace("_themename", info.name)
    )
  )
  .pipe(zip(`${info.name}.zip`))
  .pipe(dest('bundled'));
};

export const pot = () => {
  return src("**/*.php")
    .pipe(
      wpPot({
        domain: "_themename",
        package: info.name
      })
    )
    .pipe(dest(`languages/${info.name}.pot`));
};

export const watchForChanges = () => {
  watch('src/scss/**/*.scss', styles);
  watch('src/images/**/*.{jpg,jpeg,png,svg,gif}', series(images, reload));
  watch(['src/**/*','!src/{images,js,scss}','!src/{images,js,scss}/**/*'], series(copy, reload));
  watch('src/js/**/*.js', series(scripts, reload));
  watch("**/*.php", reload);
}

export const dev = series(clean, parallel(styles, images, copy, scripts), serve, watchForChanges);
export const build = series(clean, parallel(styles, images, copy, scripts), pot, compress);
export default dev;

Phew, that’s everything! I hope you learned something from this series and that it helps streamline your WordPress development flow. Let me know if you have any questions in the comments. If you are interested in a complete WordPress theme development course, make sure to check out my course on Udemy with a special discount for you. 😀


Gulp for WordPress: Initial Setup

This is the first part of a two-part series on creating a Gulp workflow for WordPress theme development. This first part covers a lot of ground for the initial setup, including Gulp installation and an outline of the tasks we want it to run. If you're interested in how the tasks are created, then stay tuned for part two.

Earlier this year, I created a course for building premium WordPress themes. During the process, I wanted to use a task runner to concatenate and minify JavaScript and CSS files. I ended up using that task runner to automate a lot of other tasks that made the theme much more efficient and scalable.

The two most popular task runners powered by Node are Gulp and Grunt. I went with Gulp after a good amount of research because it appeared to have a more intuitive way to write tasks. It uses Node streams to manipulate files and JavaScript functions to write the tasks, whereas Grunt uses a configuration object to define tasks, which might be fine for some, but is something that made me a little uncomfortable. Also, Gulp is a bit faster than Grunt because of those Node streams, and faster is always a good thing to me!

So, we're going to set Gulp up to do a lot of the heavy lifting for WordPress theme development. We'll cover the initial setup for now, but then go super in-depth on the tasks themselves in another post.

Article Series:

  1. Initial Setup (This Post)
  2. Creating the Tasks

Initial theme setup

So, how can we use Gulp to power the tasks for a WordPress theme? First off, let’s assume our theme only contains the two files that WordPress requires for any theme: index.php and style.css. Sure, most themes are likely to include many more files than this, but that’s not important right now.

Secondly, let’s assume that our primary goal is to create tasks that help manage our assets: minifying our CSS and JavaScript files, compiling Sass to CSS, and transpiling modern JavaScript syntax (e.g. ES6, ES7, etc.) into ES5 in order to support older browsers.

Our theme folder structure will look like this:

themefolder/
├── index.php
├── style.css
└── src/
    ├── images/
    │   └── cat.jpg
    ├── js/
    │   ├── components/
    │   │   └── slider.js
    │   └── bundle.js
    └── scss/
        ├── components/
        │   └── slider.scss
        └── bundle.scss

The only thing we’ve added on top of the two required files is a src directory where our original un-compiled assets will live.

Inside of that src directory, we have an images subdirectory as well as others for our JavaScript and Sass files. And from there, the JavaScript and Sass subdirectories are organized into components that will be imported by their respective bundle files. So, for example, bundle.js will import and include slider.js when our JavaScript tasks run, so all our code is concatenated into a single file.

Identifying Gulp tasks

OK, next we want Gulp tasks to create a new dist directory where all of our compiled, minified and concatenated versions of our assets will be distributed after the tasks have completed. Even though we’re calling this directory dist in this post because it is short for "distribution," it could really be called anything, as long as Gulp knows what it is called. Regardless, these are the assets that will be distributed to end users. The src folder will only contain the files that we edit directly during development.

Identifying which Gulp tasks are the best fit for a project will depend on the project’s specific needs. Some tasks will be super helpful on some projects but completely irrelevant on others. I’ve identified the following for us to cover in this post. You’ll see that one or two are more useful in a WordPress context (e.g. the POT task) than others. Yet, most of these are broad enough that you’re likely to see them in many projects that use Gulp to process Sass, JavaScript and image assets.

  • Styles Task: This task is responsible for compiling the bundle.scss file in the scss subdirectory to bundle.css in a css directory located inside the dist directory. This task will also minify the generated CSS file so that it’s as small as possible when used in production.

We will talk about production vs. development modes during the article. Note that we will not create a task to concatenate CSS files. The bundle.scss file will act as an entry point for all the .scss files that we want to include. In other words, for any Sass or CSS files you want to include in your project, just import them in the bundle.scss file using @import statements. For instance, in our example folder, we can use @import './components/slider'; to import the slider.scss file. This way, our task only has to compile and minify a single file (bundle.css).

  • Scripts Task: Similar to the Styles task, this task will transpile bundle.js from ES6 syntax to ES5, then minify the file for production.

We will only compile bundle.js. Any other JavaScript files we want to include will be done using ES6 import statements. But in order for those import statements to work on all browsers, we will need to use a module bundler. We’re going to use webpack as our bundler. If this is your first time working with it, this primer is a good place to get an overview of what it is and does.

  • Images Task: This task will simply copy images from src/images and send them to dist/images after the files have been compressed to their smallest size.
  • Copy Task: This task will be responsible for taking any other files or folders that are not in /src/images, /src/js or /src/scss and copying them over to the dist directory.

Remember, the src folder contains files that are only used during development and that will not be included in the final theme package. Thus, any assets other than our images, JavaScript and Sass files need to be copied over to the dist folder. For instance, if we have a /src/fonts folder, we would want to copy the files in there into the dist directory so they get included in the final deliverable.

  • POT Task: As the name suggests, this task will scan all your theme’s PHP files and generate a .pot (i.e. translation) file from gettext calls in the files. This is the most WordPress-centric of all the tasks we’re covering here.
  • Watch Task: This task will literally watch for changes in your files. When a change is made, certain tasks will be triggered and executed, depending on the type of file that changed.

For instance, if we change a JavaScript file, then the Scripts task should do its magic, and then it would be ideal if the browser refreshed for us automatically so we can see those changes. Further, if we change a PHP file, then let’s simply refresh the browser, since PHP files don’t rely on any other tasks in our project. We’ll be using a Gulp plugin called Browsersync to handle browser refreshes, but we’ll get to that and other plugins a little later.

  • Compress Task: As you might expect, all the plugins that we use to write our tasks will be managed using npm. So, our theme folder will contain a node_modules folder, as well as files like package.json and other configuration files that define our project’s dependencies — and these files and folders are only needed during development. During production, we can take the files necessary for our theme and leave the unneeded development files behind. That’s what this task will do; it will create a ZIP file that only contains the files necessary to run our theme.

As a bonus step for the compress task, if you are creating a theme that you intend to publish on WordPress.org or maybe on a website like ThemeForest, then you probably already know that all functions in your theme must be prefixed with a unique prefix:

function mythemename_enqueue_assets() {
  // function body
}

So, if you are creating a lot of themes, you’ll want to easily reuse functions across themes while changing the prefix to each theme’s name to prevent conflicts. We can prefix our functions with a placeholder and then replace all instances of that placeholder in the Compress task. For instance, we can choose the string _themename as the placeholder, and when we compress our theme, we will replace all _themename strings with the actual theme name:

function _themename_enqueue_assets() {
  // function body
}

This also applies anywhere else we use the theme name, for example, in the text domain:

<?php _e('some string', '_themename'); ?>

  • Develop Task: This task doesn’t introduce anything new; it’s the one that runs while we develop our theme. It cleans the dist folder, runs the Styles, Scripts, Images and Copy tasks in development mode (i.e. without minifying any of the assets), then watches for file changes to refresh the browser for us.
  • Build Task: This task is intended to build our files for production. It will do all the same cleaning and tasks as the Develop task, but in production mode (i.e. minify the assets in the process) and generate a new POT file for translation updates. After it runs, our dist folder should contain the files that are ready for distribution.
  • Bundle Task: This task will simply run the build task, making sure that all the files in the dist folder are minified and ready for distribution. Then, it will run the Compress task, which bundles all of the production-ready files and folders into a ZIP file. We want a ZIP file because that is the format WordPress recognizes to extract and install a theme.

Here’s how our file structure looks after our tasks complete:

themefolder/
├── index.php
├── style.css
├── src/
└── dist/
    ├── images/
    │   └── cat.jpg // after compression
    ├── js/
    │   └── bundle.js // bundled with all imported files (minified in production)
    └── css/
        └── bundle.css // compiled from bundle.scss with all imported files (minified in production)

Now that we know what tasks we’re going to use on our project and what they do, let’s get into the process for installing Gulp into the project.

Installing Gulp

Before we install Gulp, we should make sure that we have Node and npm installed on our machines. We can do that by running these commands in the command line:

node --version
npm --version

...and each command should respond with a version number.

Now, let’s point the command line to the theme folder:

cd path/to/your/theme/folder

...and then run this command to initialize a new npm project:

npm init

This will prompt us with some options. The only important option in our case is the package name option. This is where the name of the theme can be provided — everything else can stay at its default setting. When choosing the theme name, make sure to only use lowercase characters and underscores while avoiding dashes and special characters, since this name will be used to replace the function prefix placeholder that we mentioned earlier.

On to installing Gulp! First, we’ve got to install Gulp’s command line interface (gulp-cli) globally so we can use Gulp in the command line.

npm install --global gulp-cli

After that, we run this command in order to install Gulp itself in the theme directory:

npm install --save-dev gulp

Gulp 4.0 is the latest major release at the time of this writing, and it is the version we will be using throughout this series.

To make sure everything is installed correctly, we’ll run this command:

gulp --version

Nice! Looks like we’re running version 4.0, which is the latest version at the time of this writing.

Writing Gulp tasks

Gulp tasks are defined in a file called gulpfile.js that we’ll need to create and place in the root of our theme folder.

Gulp is JavaScript at its core, so we can define a quick example task that logs something to the console.

var gulp = require('gulp');
gulp.task('hello', function() {
  console.log('First Task');
})

In this example, we’ve defined a new task by calling gulp.task. The first argument for this function is the task’s name (hello) and the second argument is the function we want to run when that name is entered into the command line which, in this case, should print "First Task" into the console.

Let’s do that now.

gulp hello

Here’s what we get:

As you can see, we do indeed get the console.log('First Task') output we want. However, we also get an error saying that our task did not complete. Every Gulp task needs to signal to Gulp when it has finished, and one way to do that is by calling the callback function that Gulp passes as the first argument to our task function, like so:

var gulp = require('gulp');
gulp.task('hello', function(cb) {
  console.log('First Task');
  cb();
})

Let’s try running gulp hello again and we should get the same output, but without the error this time.

cb() is a Node.js-style callback function that Gulp passes into our task function. There are some cases where we won’t have to call it, such as when a task returns a promise or a Node stream. Node streams are what we will use in most of the tasks in this post, so we will see them a lot throughout the series.
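
For instance, here is a sketch of a task that returns a Node stream; the readme.txt file is just a hypothetical example. Because gulp.src() and gulp.dest() return streams, returning that stream is enough for Gulp to know when the task has finished, so no cb() call is needed:

var gulp = require('gulp');

gulp.task('copyReadme', function() {
  // Returning the stream tells Gulp the task ends when the stream ends.
  return gulp.src('readme.txt')
    .pipe(gulp.dest('dist'));
});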

Here’s an example of a task that returns a promise. In this task, we won’t have to call the cb() function because Gulp already knows that the task will end when the promise resolves or rejects:

gulp.task('promise', function(cb) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve();
    }, 300);
  });
});

Now try running gulp promise and the task will complete without returning any errors.

Finally, it’s worth mentioning that Gulp accepts a default task that runs by typing gulp alone in the command line. All it takes is using "default" as the task name argument.

gulp.task('default', function(cb) {
  console.log('Default Task');
  cb();
});

Now, typing gulp by itself in the command line will run the task.

Whew! Now we know the basics of writing Gulp tasks.

There is one more improvement we can make: enabling ES6 syntax in the Gulpfile. This will allow us to use features like destructuring, import statements, and arrow functions, among others.

Using ES6 in the Gulpfile

The first step to use ES6 syntax in the Gulpfile is to rename it from gulpfile.js to gulpfile.babel.js. As you may already know, Babel is the compiler that compiles ES6 to ES5.

So, let’s install Babel and some of its required packages by running:

npm install --save-dev @babel/register @babel/preset-env @babel/core

After that, we have to create a file called .babelrc inside of our theme folder. This file will tell Babel which preset to use to compile our JavaScript. The contents of the .babelrc file will look like this:

{
  "presets": ["@babel/preset-env"]
}

Now we can use ES6 in our Gulpfile! Here’s how that would look if we were to re-write it:

import gulp from 'gulp';
export const hello = (cb) => {
  console.log('First Task');
  cb();
}

export const promise = (cb) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve();
    }, 300);
  });
};

export default hello

As you can see, we are importing Gulp using import instead of require. In fact, there’s no longer any need to import Gulp at all! I included the import statement anyway to show that it can be used instead of require. We’re allowed to skip importing Gulp because we don’t have to call gulp.task; instead, we only need to export a function, and the name of that function becomes the name of the task. Further, all that’s needed to define the default task is to use export default. And notice those arrow functions, too! Everything is so much more concise.

Let’s move on and start coding the actual tasks.

Development vs. Production

As we covered earlier, we need to create two modes: development and production. The reason we need to distinguish between the two is that some steps in our tasks are time- and memory-consuming and only make sense in a production environment. For instance, the styles task needs to minify the CSS. However, the minification can take both time and memory — and if that process runs every single time something changes during development, that is not only unnecessary, but very inefficient. It’s ideal for tasks to be as fast as possible during development.

We need to set a flag that specifies whether a task should run in one mode or the other. We can use a package called yargs that allows us to define these types of arguments while running a command. So, let’s install it and put it to use:

npm install --save-dev yargs

Now, we can add arguments to our command like so:

gulp hello --prod=true

...and then retrieve this argument in the Gulpfile:

import yargs from 'yargs';
const PRODUCTION = yargs.argv.prod;

export const hello = (cb) => {
  console.log(PRODUCTION);
  cb();
}

Notice that the values we define in the command are available inside the yargs.argv object in the Gulpfile, so console.log(PRODUCTION) will output true in our case. PRODUCTION will be the flag that decides whether or not certain steps run inside our tasks.
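
For example, here is a minimal sketch, previewing the gulp-if and gulp-clean-css plugins that the tasks in the companion post rely on (both would need to be installed first), of how the PRODUCTION flag can gate a production-only step like CSS minification; the input path is just a hypothetical example:

import { src, dest } from 'gulp';
import gulpif from 'gulp-if';
import cleanCss from 'gulp-clean-css';

// Minify the CSS only when the --prod flag was passed on the command line.
export const minifyDemo = () => {
  return src('dist/css/bundle.css') // hypothetical input, just for illustration
    .pipe(gulpif(PRODUCTION, cleanCss()))
    .pipe(dest('dist/css'));
};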

We're all set up!

We covered a lot of ground here, but we now have everything we need to start writing tasks for our WordPress theme development. That just so happens to be the sole focus of the next part of this series, so stay tuned for tomorrow.
