WordPress Performance Team Releases New Feature Plugin For Testing Improvements In Progress

WordPress’ performance team has released a new feature plugin called Performance Lab that includes a set of performance-related improvements for core. The team, which formed just five months ago, is led by Yoast and Google-sponsored core contributors, and has had more than 250 people join its Slack channel, with many participating regularly in weekly chats.

This first release includes the following modules, which are in varying states of development:

The purpose of the plugin is to make it easy for users to test improvements in progress. Each of these modules can be enabled or disabled under a new Settings > Performance menu in the admin.

The WebP upload module can be tested by uploading JPEG images and then checking to see that the WebP versions are generated in the Media Library and displayed on the frontend. The other performance modules are checks that will show up on the Site Health status screen:

WordPress Core Committer Felix Arntz emphasized that the plugin should be considered a beta testing plugin, not a quick fix for making your WordPress site faster.

“The plugin is not going to be a suite of all crucial performance features you need to make your site fast – that’s where the existing performance plugins have their market, and the Performance Lab plugin should indeed not falsely raise an impression of wanting to compete with them,” Arntz commented in a GitHub ticket regarding the plugin’s branding.

Users should be aware they may have unexpected results when testing the plugin, especially when enabling the more experimental features that are not turned on by default. It should not be considered a replacement for other more established performance plugins. Performance Lab may also change over time, as new features are proposed for core.

“Because the Performance Lab plugin is a collection of potential WordPress core feature modules, the list of modules included may drastically change over time,” Arntz said. “New modules may be added regularly, while other modules may be removed in a future plugin version once they have landed in a WordPress core release.”

The goal for the feature plugin is to get performance improvements in progress tested more widely, weeding out edge cases before shipping the modules in a core release. Testers can log issues as GitHub issues or as wordpress.org support forum requests.

Legacy in Your Cloud: Top AWS Unmanaged Resources That You Should Know About

Cloud operations are complex. There are a lot of reasons for this complexity, but in this post, I want to focus on how resources and services are managed in today’s clouds. Today’s cloud environments often comprise a large number of heterogeneous resources, each with altogether different methods of management.

This diversity of resources is in large part the byproduct of cloud practices that predate infrastructure as code (IaC). Before automation and IaC, many companies would configure resources and services manually, without any alignment to best practices, based on internal processes unique to the organization. As companies evolved and adopted IaC for codifying and managing cloud resources, the result was a mishmash of managed and unmanaged services.

WordPress Community Designers Create the Museum of Block Art

Anne McCarthy announced the launch of the Museum of Block Art earlier today via the Gutenberg Times blog. The site’s goal is to showcase creative uses of blocks and inspire the WordPress community to push the limits of what is possible with the block editor.

The site showcases 22 pieces of block art from 11 people in the WordPress space. Anne McCarthy, Tammie Lister, Beatriz Fialho, Allan Cole, Rich Tabor, Nick Hamze, Brian Gardner, Javier Arce, Mel Choyce-Dwan, Channing Ritter, and Francisco Vera all contributed to this first outing.

Multiple block art styles: a 6-column grid of designs created via the WordPress block editor.

The concept builds upon an earlier project by Lister. In October 2021, she announced Patternspiration, a site where she created and released a new block pattern every day for the entire month.

“She was showing me those, sharing problems she was running into, the intent around ‘how quickly can I create a pattern/what can I create in 30 minutes per day,’ etc.,” said McCarthy. “I brought it up on a hallway hangout, and the idea just hit me as we were chatting (that’s the moment in the video). I found her approach to be so beyond creative and beautiful compared to some of the necessarily practical items in the block pattern directory.”

The pattern directory on WordPress.org must take a lot of factors into consideration to ensure patterns work across themes for millions of users. This limits what designers can do. However, such limitations are unnecessary on third-party sites.

“I wanted to take it a step further because it felt so compelling to look at something and not have an ‘I bet that was made with WordPress’ feeling that many of us have had,” said McCarthy.

Because the Museum of Block Art allows for more artistry in its showcase, it can also create inconsistent results if end-users blindly copy/paste the code. For example, one of my favorite patterns is the It’s Me (Super Mario) design by Hamze, which brings back at least a decade of childhood memories:

Super Mario block art: a "pixelated" image made out of Button blocks in the WordPress editor.

However, it relies on color names that are not likely to exist in every theme. Copying the pattern code and pasting it into the editor should create the correct layout, but the colors might be off.

Other patterns require users to download the correct images and add them to their posts. Abstractions Study No.8 by Cole relies on custom CSS, which is provided via a downloadable Blockbase child theme.

Abstract block art: multiple abstract shapes created via WordPress blocks.

This sort of hodge-podge of methods is OK for a project like this. The goal is to inspire, not necessarily to make downloadable patterns. When designers experiment and push the boundaries, it can also help evolve the block system’s tools as they report limitations upstream.

Aside from Lister’s Patternspiration work, I had caught Ritter’s blog post in January sharing how she had created block art. At the time, I was unaware that it would be a part of the then-unknown Museum of Block Art project.

City Textures block art: a gallery of various city images.

McCarthy added that she encouraged Ritter to publish the post, noting that it helps to “demystify” how it was done.

“I started pinging people who I thought would be interested in doing it,” said McCarthy of the block art included on the site. “It was all very grassroots and sometimes would just randomly come up in conversation. I tried to keep the ask very small since so much is going on with WordPress and the world. Probably less than half of the people I contacted actually submitted art pieces.”

The initial plan was to launch the site alongside the WordPress 5.9 release. However, it was pushed back as contributors needed more time.

There is no submission form for third-party contributions to the museum. However, McCarthy encourages designers to use the #WPBlockArt hashtag across social media to share their work. It could get picked up for inclusion on the site.

“I’m mainly looking for pieces via the hashtag, but if I see a big demand for folks wanting to submit, I’d be game to open up something more official,” said McCarthy. “This has been a side project on top of 5.9, the FSE Outreach program, etc., so I wanted to be mindful about the opportunity cost of sinking more time into an off-the-wall idea.”

Introducing New Adobe Document Generation Service

Whether you are working with contracts, invoices, statements of work, or proposals, you need to enter data into these documents. These may be data points from your ERP, your CRM system, or another database.

Many organizations create contracts or statements of work manually in Microsoft Word, Google Docs, or elsewhere. Invoices may be auto-generated by systems that output as a PDF. However, these methods may run into a few challenges:

Google Adds Custom Voices Support to Its Text-to-Speech API

Google has announced support for custom voices in the company’s Cloud Text-to-Speech API, enabling developers to add custom voice models to conversational interfaces. This new functionality provides support for over 10 languages across the globe.

The announcement of this new feature on the Google Cloud blog highlighted the value to businesses:

Elasticsearch vs. CloudSearch: AWS Cloud

Today, more than 100 billion searches are conducted every month on the Google search engine alone. Search engine users conduct searches for several reasons, chief among them the conversion of information into action. An action could be a decision to purchase, to consume information for decision-making, or to seek a better understanding of an issue or topic, among others. Search engines put information at our fingertips right when we need it.

In this era of big data, search solutions are useful not only for popular search engines like Google, Yahoo, and Bing but also for enterprises for monitoring and managing the growing volumes of data in their databases to enhance operational efficiency. The enterprise search industry has grown remarkably and is expected to be worth $8.90 billion by 2024.

Big Data in Healthcare

Big data is a term that describes the large volume of structured or unstructured data that impacts a business on a day-to-day basis. This data can give valuable business insights and solve business problems that could not be tackled before with conventional analytics or software. Over the last few years, various healthcare software products have been launched to make people’s lives easier and improve quality of care. Moreover, the healthcare and life sciences industry demands foolproof product quality assurance.

Big data in healthcare collects patients’ records and improves the performance of healthcare facilities in the following ways:

4 Big GitOps Moments of 2021

The growing complexity of application development and demand for more frequent deployments bolstered the rise of GitOps. GitOps, in simple terms, is all about using Git for container-based continuous integration and deployment. GitOps enables a seamless developer experience and greater control for Ops teams. It is often considered an extension of DevOps.

The central idea of GitOps is to use Git as the single source of truth. With Git repositories storing the declarative state of the system, it makes code management, reconciliation, and audits fairly easy to control and implement at scale. GitOps offers productivity, reliability, and security for cloud-native applications, accelerating its adoption.

Next-Gen Data Pipes With Spark, Kafka, and K8s: Part 2

Introduction 

In our previous article, we discussed two emerging options for building new-age data pipes using stream processing. One option leverages Apache Spark for stream processing, and the other makes use of a Kafka-Kubernetes combination on any cloud platform for distributed computing. The first approach is reasonably popular, and a lot has already been written about it. However, the second option is catching up in the market, as it is far less complex to set up and easier to maintain. Also, data on the cloud is a natural outcome of the technological drivers prevailing in the market. So, this article will focus on the second approach to see how it can be implemented in different cloud environments.

Kafka-K8s Streaming Approach in Cloud

In this approach, if the number of partitions in the Kafka topic matches the replica count of the consumer pods in the Kubernetes cluster, the pods together form a consumer group and deliver all the advantages of distributed computing. In essence: number of Kafka topic partitions = number of consumer pod replicas.
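
Just to make the idea concrete, here is a rough JavaScript sketch using the kafkajs client (purely illustrative; the broker address, topic, and group names are hypothetical). Every pod replica runs the same consumer code with a shared groupId, so with as many replicas as partitions, Kafka assigns exactly one partition to each pod:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: `worker-${process.env.HOSTNAME}`, // the pod name, set by Kubernetes
  brokers: ['kafka:9092'],                    // hypothetical broker address
});

const consumer = kafka.consumer({ groupId: 'data-pipe-consumers' });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topics: ['events'], fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      // Each pod processes the stream of its own partition in parallel.
      console.log(`partition ${partition}:`, message.value.toString());
    },
  });
}

run().catch(console.error);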

Read SAP Tables With RFC_READ_TABLE in Mule 4 Using SAP Connector

In this article, we are going to see how to make an RFC (Remote Function Call) to SAP and what different options are available to get your desired data. We'll also learn how to make use of various operators like AND, OR, IN, and many more, and what the default structure of RFC_READ_TABLE input looks like.

Whenever you work with SAP, you will almost certainly use RFC calls, and many developers struggle to write the XML queries for RFC.

System specification morphisms

Hi, I want to ask what would be a great topic for a literature review, discussion paper, or research paper for those who encounter system specification morphisms. Thanks!

Creating a Risograph Grain Light Effect in Three.js

Recently, I released my brand new portfolio, home to the projects I have worked on in the past couple of years:

As I was doing experimentations for the portfolio, I tried to reproduce this kind of effect I found on the web:

I really like these 2D grain effects applied to 3D elements. It kind of has this cool feeling of clay and rocks, and I decided to try to reproduce it from scratch. I started with a custom light shader, then mixed it with a grain effect, and by playing with some values I got to this final result:

In this tutorial I’d like to share with you what I’ve done to achieve this effect. We’re going to explore two different ways of doing it.

Note that I won’t get into too much detail about Three.js and WebGL for simplicity, so it’s good to have some solid knowledge of JavaScript, Three.js and some notions about shaders before starting with this tutorial. If you’re not very familiar with shaders but are comfortable with Three.js, then the second method is for you!

Summary

Method 1: Writing our own custom ShaderMaterial (That’s the harder path but you’ll learn about how light reflection works!)

  • Creating a basic Three.js scene
  • Use ShaderMaterial
  • Create a diffuse light shader
  • Create a grain effect using 2D noise
  • Mix it with light

Method 2: Starting from MeshLambertMaterial shader (Easier but includes unused code from Three.js since we’ll rewrite the Three.js LambertMaterial shader)

  • Copy and paste MeshLambertMaterial
  • Add our custom grain light effect to the fragmentShader
  • Add any Three.js lights

1. Writing our own custom ShaderMaterial

Creating a basic Three.js scene

First we need to set up a basic Three.js scene with a simple sphere in the center:

Here is a Three.js basic scene with a camera, a renderer and a sphere in the middle. You can find the code in this repository in the file src/js/Scene.js, so you can start the tutorial from here.
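
If you’d rather not clone the repository, a minimal sketch of such a scene might look like this (the exact camera, renderer and geometry settings used in the repository may differ):

import * as THREE from 'three'

// Minimal scene: a camera, a renderer, and a sphere in the middle
const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100)
camera.position.z = 3

const renderer = new THREE.WebGLRenderer({ antialias: true })
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

const geometry = new THREE.SphereGeometry(1, 64, 64)
const material = new THREE.MeshBasicMaterial({ color: 0x51b1f5 })
const sphere = new THREE.Mesh(geometry, material)
scene.add(sphere)

function animate() {
  requestAnimationFrame(animate)
  renderer.render(scene, camera)
}
animate()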

Use ShaderMaterial

Let’s create a custom shader in Three.js using the ShaderMaterial class. You can pass it uniforms objects, and a vertex and a fragment shader as parameters. The cool thing about this class is that it already gives you most of the necessary uniforms and attributes for a basic shader (vertex positions, normals for lighting, ModelViewProjection matrices and more).

First, let’s create a uniform that will contain the default color of our sphere. Here I picked a light blue (#51b1f5) but feel free to pick your favorite color. We’ll use a new THREE.Color() and call our uniform uColor. We’ll replace the material from the previous code l.87:

const material = new THREE.ShaderMaterial({
  uniforms: {
    uColor: { value: new THREE.Color(0x51b1f5) }
  }
});

Then let’s create a simple vertex shader in vertexShader.glsl, a separate file, that will display the sphere vertices at the correct position relative to the camera.

void main(void) {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

And finally, we write a basic fragment shader fragmentShader.glsl in a separate file as well, which will use our uColor vec3 uniform:

uniform vec3 uColor;

void main(void) {
  gl_FragColor = vec4(uColor, 1.);
}

Then, let’s import and link them to our ShaderMaterial.

import vertexShader from './vertexShader.glsl'
import fragmentShader from './fragmentShader.glsl'
...    
const material = new THREE.ShaderMaterial({
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  uniforms: {
    uColor: { value: new THREE.Color(0x51b1f5) }
  }
});

Now we should have a nice monochrome sphere:

Create a diffuse light shader

Creating our own custom light shader will allow us to easily manipulate how the light should affect our mesh.

Even if that seems complicated to do, it’s not that much code and you can find great articles online explaining how light reflection works on a 3D object. I recommend you read webglfundamentals if you would like to learn more details on this topic.

Going further, we want to add a light source to our scene to see how the sphere reflects light. Let’s add three new uniforms: one for the position of our spotlight, another for its color, and a last one for the intensity of the light. Let’s place the spotlight above the object, 5 units in Y and 3 units in Z, use a white color, and an intensity of 0.7 for this example.

 ...
 uLightPos: {
   value: new THREE.Vector3(0, 5, 3) // position of spotlight
 },
 uLightColor: {
   value: new THREE.Color(0xffffff) // default light color
 },
 uLightIntensity: {
   value: 0.7 // light intensity
 },

Now let’s talk about normals. A THREE.SphereGeometry has normals: 3D vectors, represented here by red arrows:

For each surface of the sphere, these red vectors define in which direction the light rays should be reflected. That’s what we’re going to use to calculate the intensity of the light for each pixel.

Let’s add two varyings on the vertex shader:

  • vNormal, the normal vectors of the object relative to the world position (where it is in the global scene).
  • vSurfaceToLight, which represents the direction from each point on the sphere’s surface towards the light position.

The vertex shader becomes:
uniform vec3 uLightPos;

varying vec3 vNormal;
varying vec3 vSurfaceToLight;

void main(void) {
  vNormal = normalize(normalMatrix * normal);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // General calculations needed for diffuse lighting
  // View-space position of the surface point (we use it below to get the direction to the light)
  vec3 surfaceToLightDirection = vec3( modelViewMatrix * vec4(position, 1.0));
  vec3 worldLightPos = vec3( viewMatrix * vec4(uLightPos, 1.0));
  vSurfaceToLight = normalize(worldLightPos - surfaceToLightDirection);
}

Now let’s generate colors based on these light values in the Fragment shader.

We already have the normal values in vNormal. To calculate basic light reflection on a 3D object we need two light components: ambient and diffuse.

Ambient light is a constant value that will give a global light color of the whole scene. Let’s just use our light color for this case.

Diffuse light represents how strong the light is, depending on how the object reflects it. That means all surfaces which are close to and facing the spotlight should be more lit than surfaces that are far away or facing away from it. There is a handy math function to calculate this value: the dot() product. The formula for the diffuse color is the dot product of vSurfaceToLight and vNormal. In this image you can see that surfaces facing the sun are brighter than the others:

Then we need to add the ambient and diffuse light together and finally multiply the result by the light intensity. Once we have our light value, let’s multiply it by the color of our sphere. Here is the fragment shader:

uniform vec3 uLightColor;
uniform vec3 uColor;
uniform float uLightIntensity;

varying vec3 vNormal;
varying vec3 vSurfaceToLight;

vec3 light_reflection(vec3 lightColor) {
  // AMBIENT is just the light's color
  vec3 ambient = lightColor;

  //- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  // DIFFUSE  calculations
  // Calculate the cosine of the angle between the vertex's normal
  // vector and the vector going to the light.
  vec3 diffuse = lightColor * dot(vSurfaceToLight, vNormal);

  // Combine 
  return (ambient + diffuse);
}

void main(void) {
  vec3 light_value = light_reflection(uLightColor);
  light_value *= uLightIntensity;

  gl_FragColor = vec4(uColor * light_value, 1.);
}

And voilà:

Feel free to click and drag on this sandbox scene to rotate the camera.

Note that if you want to recreate MeshPhongMaterial you also need to calculate the specular light. This represents the effect you can observe when a ray of light reflected by an object gets directly into your eyes, but we don’t need that precision here.

Create a grain effect using 2D noise

To get a 2D grain effect we’ll have to use a noise function that outputs a gray value from 0 to 1 for each pixel of the screen in a “beautiful randomness”. There are a lot of functions online for creating simplex noise, Perlin noise, and others. Here we’ll use glsl-noise for a 2D simplex noise and glslify to import the noise function directly at the beginning of our fragment shader using:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)

Thanks to the built-in WebGL value gl_FragCoord.xy we can easily get the UVs (coordinates) of the screen. Then we just have to apply the noise to these coordinates with vec3 colorNoise = vec3(snoise2(uv)); to create our 2D noise. Then, let’s apply these noise colors to our gl_FragColor:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)
... 
// grain
vec2 uv = gl_FragCoord.xy;

vec3 colorNoise = vec3(snoise2(uv));

gl_FragColor = vec4(colorNoise, 1.0);

What a nice old TV noise effect:

As you can see, when moving the camera, the texture feels like it’s “stuck” to the screen. That’s because we matched our simplex noise to the coordinates of the screen to create a 2D-style effect. You can also adjust the size of the noise like so: uv /= myNoiseScaleVal;

Mixing it with the light

Now that we have our noise value and our light, let’s mix them! The idea is to apply less noise where the light value is stronger (1.0 == white) and more noise where the light value is weaker (0.0 == black). We already have our light value, so let’s just multiply the noise value by it:

colorNoise *= light_value.r;

You can see how the light affects the noise now, but this doesn’t look very strong. We can accentuate this value by using a power function. In GLSL (the shader language) you can use pow(), which is already built into shaders; here I used an exponent of 5.

colorNoise *= pow(light_value.r, 5.0);

Then, let’s brighten the noise by remapping it from the [-1, 1] range to [0, 1], like so:

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);

Too gray, right? Almost there; let’s re-add the beautiful color that we picked at the start. We can say that if the light is strong the pixel will go towards white, and if the light is weak it will be clamped to the initial channel color of the sphere, like this:

gl_FragColor.r = max(colorNoise.r, uColor.r);
gl_FragColor.g = max(colorNoise.g, uColor.g);
gl_FragColor.b = max(colorNoise.b, uColor.b);
gl_FragColor.a = 1.0;
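
Putting the snippets above together, the complete Method 1 fragment shader might look something like this (a consolidated sketch; the noise mapped to raw screen coordinates and the exponent of 5 are the values used earlier and can be tweaked):

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)

uniform vec3 uLightColor;
uniform vec3 uColor;
uniform float uLightIntensity;

varying vec3 vNormal;
varying vec3 vSurfaceToLight;

vec3 light_reflection(vec3 lightColor) {
  // ambient light is just the light's color
  vec3 ambient = lightColor;
  // diffuse light depends on the angle between the surface and the light
  vec3 diffuse = lightColor * dot(vSurfaceToLight, vNormal);
  return (ambient + diffuse);
}

void main(void) {
  vec3 light_value = light_reflection(uLightColor);
  light_value *= uLightIntensity;

  // 2D grain mapped to screen coordinates, remapped to [0, 1]
  vec2 uv = gl_FragCoord.xy;
  vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);
  colorNoise *= pow(light_value.r, 5.0);

  // clamp the noise to the sphere's base color, channel by channel
  gl_FragColor.r = max(colorNoise.r, uColor.r);
  gl_FragColor.g = max(colorNoise.g, uColor.g);
  gl_FragColor.b = max(colorNoise.b, uColor.b);
  gl_FragColor.a = 1.0;
}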

Now that we have this Material ready, we can apply it to any object of the scene:

Congrats, you finished the first way of doing this effect!

2. Starting from MeshLambertMaterial shader

This way is simpler since we’ll directly reuse the MeshLambertMaterial from Three.js and apply our grain in the fragment shader. First let’s create a basic scene like in the first method. You can take this repository, and start from the src/js/Scene.js file to follow this second method.

Copy and paste MeshLambertMaterial

In Three.js, all the material shaders can be found here. They are composed of chunks (reusable GLSL code) that are included here and there in Three.js shaders. We’re going to copy the MeshLambertMaterial fragment shader from here and paste it into a new fragment.glsl file.

Then, let’s add a new ShaderMaterial that will include this fragmentShader. However, since we’re not changing the vertex shader, we can just pick it directly from the library: THREE.ShaderLib.lambert.vertexShader.

Finally, we need to merge the Three.js uniforms with ours, using THREE.UniformsUtils.merge(). Like in the first method, let’s use uColor for the sphere color, uNoiseCoef to play with the grain effect, and uNoiseScale for the grain size.

import fragmentShader from './fragmentShader.glsl'
...

this.uniforms = THREE.UniformsUtils.merge([
  THREE.ShaderLib.lambert.uniforms,
  {
    uColor: {
      value: new THREE.Color(0x51b1f5)
    },
    uNoiseCoef: {
      value: 3.5
    },
    uNoiseScale: {
      value: 0.8
    }
  }
])

const material = new THREE.ShaderMaterial({
  vertexShader: THREE.ShaderLib.lambert.vertexShader,
  fragmentShader: glslify(fragmentShader),
  uniforms: this.uniforms,
  lights: true,
  transparent: true
})

Note that we’re importing the fragmentShader using glslify because we’re going to use the same 2D simplex noise from the first method. Also, the lights parameter needs to be set to true so the material can use the values of all the light sources in the scene.

Add our custom grain light effect to the fragmentShader

In our freshly copied fragment shader, we’ll need to import the 2D simplex noise using the glslify and glsl-noise libs. #pragma glslify: snoise2 = require(glsl-noise/simplex/2d).

If we look closely at the MeshLambertMaterial fragment shader we can find an outgoingLight value. This looks very similar to our light_value from the first method, so let’s apply the same 2D grain shader effect to it:

// grain
vec2 uv = gl_FragCoord.xy;
uv /= uNoiseScale;

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);
colorNoise *= pow(outgoingLight.r, uNoiseCoef);

Then let’s mix our uColor with the colorNoise. And here is the final fragment shader:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)
...
uniform float uNoiseScale;
uniform float uNoiseCoef;
...	
// write this at the very end of the shader
// grain
vec2 uv = gl_FragCoord.xy;
uv /= uNoiseScale;

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);
colorNoise *= pow(outgoingLight.r, uNoiseCoef);

gl_FragColor.r = max(colorNoise.r, uColor.r);
gl_FragColor.g = max(colorNoise.g, uColor.g);
gl_FragColor.b = max(colorNoise.b, uColor.b);
gl_FragColor.a = 1.0;

Add any Three.js lights

No light? Let’s add a THREE.SpotLight to the scene in the src/js/Scene.js file.

const spotLight = new THREE.SpotLight(0xff0000)
spotLight.position.set(0, 5, 4)
spotLight.intensity = 1.85
this.scene.add(spotLight)

And here you go:

You can also play with the alpha value in the fragment shader like this:

gl_FragColor = vec4(colorNoise, 1. - colorNoise.r);

And that’s it! Hope you enjoyed the tutorial and thank you for reading.

Resources

The post Creating a Risograph Grain Light Effect in Three.js appeared first on Codrops.

How to Create a User-Submitted Events Calendar in WordPress

Do you want to allow users to submit calendar events on your WordPress website?

Adding user-submitted events is a great way to build a community and boost engagement on your website.

In this article, we’ll show you how to create a user-submitted events calendar in WordPress without giving visitors access to your admin area.

Create a user submitted events calendar in WordPress

Why Create a User-Submitted Events Calendar?

Crowdsourcing events for your WordPress calendar is a great way to build a community, attract new visitors, and keep your calendar updated with the latest events. It also helps save time since you don’t have to search the internet for upcoming events.

When your community members can add events to your calendar, they’ll get free promotion for their events, and your website visitors and other community members can easily learn about events happening in their area.

For example, let’s say you’re running a charity or non-profit membership website. You can allow members to add different fundraisers, seminars, and other charity events to your site’s calendar.

The problem is that WordPress doesn’t allow users to submit calendar events or upload files on the front end by default. You will have to create an account for each user and allow access to the admin area. This method is time-consuming and could be risky.

Thankfully, there’s an easier way. Let’s see how you can let people add calendar events in WordPress.

Creating a User-Submitted Events Calendar in WordPress

The best way to allow users to add calendar events without giving them access to your WordPress admin panel is by using WPForms. It’s the best contact form plugin for WordPress and is trusted by over 5 million businesses.

The plugin lets you create a file upload form and offers a Post Submissions addon that allows you to accept event listings, PDFs, articles, quotations, and other content on the front end of your website.

WPForms

Note: You’ll need the WPForms Pro version because it includes the Post Submissions addon, premium integrations, and other customization features.

First, you’ll need to install and activate the WPForms plugin. If you need help, then please see our guide on how to install a WordPress plugin.

Upon activation, simply head over to WPForms » Settings from your WordPress dashboard and enter your license key. You can find the license key in the WPForms account area.

WPForms license key

Next, click the ‘Verify Key’ button to continue.

After verifying the license key, you’ll need to go to WPForms » Addons and then scroll down to the Post Submissions Addon.

Go ahead and click the ‘Install Addon’ button.

Post submission addon by WPForms

Once the addon is installed, you’ll notice the Status change from ‘Not Installed’ to ‘Active.’

Setting Up The Events Calendar Plugin

Next, you’ll need a WordPress events calendar plugin to create an events calendar on your website.

We’ll use The Events Calendar plugin for our tutorial. It is a powerful event management system for WordPress and offers lots of features. You can easily use it to add events and manage organizers and venues.

Plus, The Events Calendar offers a free version and easily integrates with WPForms.

First, you’ll need to install and activate The Events Calendar plugin. For more details, check out our guide on how to install a WordPress plugin.

Upon activation, you’ll be redirected to Events » Settings in the WordPress admin panel. The plugin will ask you to join its community. You can simply click the ‘Skip’ button for now.

Set up the event calendar plugin

After that, you can go through different settings for your events calendar.

There are settings in the ‘General’ tab to change the number of events to show per page, activate the block editor for events, show comments, edit the event URL slug, and more.

General settings tab

You can also set the time zone settings for your events calendar if you scroll down. The plugin lets you use your site’s time zone everywhere or manually set the time zone for each event.

We suggest using the ‘Use the site-wide time zone everywhere’ option. This will help match the event times that users submit with your site’s time zone.

When you’ve made the changes, click the ‘Save Changes’ button.

Change time zone settings

After that, you can go to the ‘Display’ tab and edit the appearance of your events calendar.

For instance, there are options to turn off the default style, choose a template, enable event views, and more.

Edit display settings

Once you’ve made the changes, let’s see how you can create a form to accept calendar events.

Creating a User Submitted Events Form

In the next step, you’ll need to set up a form using WPForms to allow users to submit events.

To start, you can go to WPForms » Add New from your WordPress dashboard. This will launch the drag and drop form builder.

Simply enter a name for your form at the top and then select the ‘Blog Post Submission Form’ template.

We’re using this template because when you use The Events Calendar plugin, each event is a custom post type. Using WPForms, you can edit the blog post submission form template to submit an event custom post type instead of a regular blog post.

Choose blog post submission form template

Next, you can customize your post submission form.

Using the drag and drop form builder, WPForms lets you add different form fields. You can add a dropdown menu, checkboxes, phone number, address, website URL, and more.

Plus, it also lets you rearrange the order of each form field and remove fields you don’t need.

Drag and drop form fields

For example, we’ll add the ‘Date / Time’ fields to our form template to show the ‘Event Start Date / Time’ and ‘Event Finish Date / Time’.

Pro Tip: When you add the Date / Time field, make sure to click the checkbox for ‘Disable Past Dates.’ You can find this option under the Advanced Options tab.

This will ensure that all your new events have a future date. It also helps catch mistakes if someone accidentally enters the wrong year.

Disable past dates

When creating your form, you can rename different form fields. To do that, simply click on them and then change the ‘Label’ under Field Options in the menu on your left.

For our tutorial, we changed the label for Post Title to Event Title and Post Excerpt to Event Description.

Edit form field labels

After that, you’ll need to go to the Settings » Post Submissions tab in the form builder.

Now, make sure that the ‘Post Submissions’ option is On.

Ensure post submission is on and match metadata

Besides that, you’ll need to match your form fields with the fields that The Events Calendar plugin will look for.

For example, this is how we mapped our demo form fields:

  • Post Title to Event Title
  • Post Excerpt to Event Description
  • Post Featured Image to Featured Image
  • Post Type to Events
  • Post Status to Pending Review
  • Post Author to Current User

The Pending Review status allows you to moderate each event submission. Plus, if you’re accepting online payments, then you can check if the payments were successful before approving the event.

Next, you’ll also need to map the event start and end date/time. For that, scroll down to the ‘Custom Post Meta’ section and enter a code to map the respective fields in your form.

To start, add the _EventStartDate code and select your event start time field (like Event Start Date / Time) from the dropdown menu.

Then click the ‘+’ button to add another Custom Post Meta and enter the _EventEndDate code to map the event finish form field (like Event Finish Date / Time).

Enter custom post meta

Next, you can also change other settings of your form.

If you go to the ‘Confirmations’ tab, you’ll see settings for showing the thank you page that will appear when users submit a calendar event.

You can show a message, a page or redirect people to another URL when they submit the form.

Confirmation settings

Other than that, you can also change the ‘Notifications’ settings.

Here, the plugin lets you choose different settings for receiving a notification when someone submits a form. For instance, you can change the send to email address, subject line, from name, and more.

Edit notification settings

Don’t forget to click the ‘Save’ button at the top when you’ve made the changes.

Publishing Your User-Submitted Events Form

Now that you’ve created a user-submitted events form, it’s time to publish it on your WordPress website.

WPForms offers multiple options to embed your form in WordPress. You can use the WPForms block in the block editor, use a shortcode, add a sidebar widget, and more.

For this tutorial, we’ll use the Embed wizard offered by WPForms.

To start, simply click the ‘Embed’ button at the top right corner.


When you click the button, a popup window will appear.

Go ahead and click the ‘Create New Page’ button, and WPForms will automatically create a new page for your form.

Create a new page

You can also click the ‘Select Existing Page’ button to add the form to a published page.

Next, you’ll need to enter a name for your page. Once that’s done, simply click the ‘Let’s Go!’ button.

Enter name for page

On the next screen, you can see your user-submitted events form on the new WordPress page.

Go ahead and preview the page and then click the ‘Publish’ button.

Publish your page

You can now visit your website to see the form in action.

Here’s what it will look like on the front end of your website.

Form preview

Next, you can review the calendar events your users submit by going to Events from your WordPress dashboard.

All the user-submitted events will be listed here as pending. You can click the ‘Edit’ button under each event to review them.

View your event

When reviewing the event, ensure that the user has filled out all the details. If any information is missing, you can add it or reject the calendar event if it doesn’t meet your website requirements.

After that, simply Publish the user-submitted event. You can then view your events by visiting the URL created by The Events Calendar: https://www.example.com/events

Events page preview

We hope this article helped you learn how to create a user-submitted events calendar in WordPress. You may also want to check out our guides on how to move a website from HTTP to HTTPS and the best WordPress SEO plugins and tools to improve your website’s ranking.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Create a User-Submitted Events Calendar in WordPress first appeared on WPBeginner.

Signals For Customizing Website User Experience

In my last article, I suggested using the SaveData API to deliver a different, more performant, experience to users that expressed that desire. This hopefully leads to a greater experience for all users. In this article, I want to spend a bit more time on this, and also look at other signals we can similarly use to help us make decisions on what to load on our websites.

That’s not to say the extraneous content is pointless — enhanced design and user interfaces can have an important impact on the brand of a website, and delightful little extras can really impact your users' relationship with your site. But when the cost of those “extras” starts to negatively impact your users’ experience of the site, you should consider how essential they are, and whether they can be turned off for some users.

Save Data API

Let’s have a quick recap on the Save Data API. That user preference is available in two (hopefully soon to be three!) ways:

  1. A Save-Data header is sent on each HTTP request.
    This allows dynamic backends to change the HTML returned.
  2. The NetworkInformation.saveData JavaScript API.
    This allows client-side JavaScript to check this and act accordingly.
  3. The upcoming prefers-reduced-data media query, which allows CSS to set different options depending on this setting.
    This is available behind a flag in Chrome, but not yet on by default while it finishes standardization.
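
For the JavaScript API, a minimal sketch of reading the preference might look like this (the save-data class name is just an illustration of how you might hook further logic onto it; the header and media query cover the server and CSS sides in the same way):

// Feature-detect the Network Information API before reading saveData,
// since it is currently only exposed in Chromium-based browsers.
const saveData = navigator.connection && navigator.connection.saveData;

if (saveData) {
  // Deliver the lighter experience, e.g. skip optional enhancements.
  document.documentElement.classList.add('save-data');
}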

Note: At the time of writing, the Save Data API, and in fact all the options we’ll talk about in this article, are only available in Chromium-based browsers (Chrome, Edge, Opera…etc.). This is a bit of a shame, as I believe they are useful for websites. If you believe the same, then let the other browsers know you want them to support this too. All of these are on various standard tracks rather than being proprietary Chrome APIs, so they can be implemented by other browsers (Safari and Firefox) if the demand is there. However, later in this article, I’ll explain why it’s perhaps more important that they are supported in Chromium-based browsers — and Chrome in particular.

Perhaps confusingly, iOS does have a Low Data mode, though that is used by iOS itself to reduce background tasks using data, and it is not exposed to the browser to allow websites to take advantage of that (even for Chrome on iOS which is more a skin on top of Safari than the full Chrome browser).

Websites can act on the Save Data preference to give a lighter website to… well… save the user’s data! This is helpful for those on poor or expensive networks, so they don’t have to pay an exorbitant cost just to visit your website. This setting is used by users in poorer countries but is also used by those with a capped data plan that might be running out just before their monthly cap renews, or those traveling where roaming charges can be a lot more expensive than at home.

And Is It Used?

Again, I talked about this in that previous article, and the answer is a resounding yes! Approximately two-thirds of Indian mobile Chrome users of Smashing Magazine have this setting turned on, for example. Expanding that to look at the top 10 countries by volume of mobile users that support Save Data for this site, we see the following:

Country | % Data Saver
India | 63%
USA | 10%
Philippines | 49%
China | 0%
UK | 35%
Nigeria | 55%
Russia | 55%
Canada | 38%
Germany | 35%
Pakistan | 51%

Now, there are a few things to note about this. First of all, it’s, perhaps, no surprise to see high usage of this setting for what are often considered “poorer” countries — over 50% of mobile users having this setting enabled seems common. What’s, perhaps, more surprising is the relatively high usage in the likes of the UK, Germany, and France, where around a third of users have it turned on. In short, this is not a niche setting.

I’d love to know why China is so reluctant to use this, if any readers know. Weirdly, these users report as a range of browsers in our analytics, including the Android WebView, Chrome and Safari (despite Safari not supporting this!). Perhaps these are imitation phones or other customized builds that do not expose this setting to the end-users. If you have any other theories or information on this — I’d love to know, so please drop a message in the comments below.

However, the above table is not actually representative of total traffic, and that’s another point to note about this data. If we compare the top-10 countries that visit SmashingMagazine.com by number of users across four different segments, we see the following:

Rank | All users | Mobile users | Mobile Save Data support | Mobile Save Data on
1 | USA | USA | India | India
2 | India | India | USA | Philippines
3 | UK | UK | Philippines | Nigeria
4 | Canada | Germany | China | UK
5 | Germany | Philippines | UK | Russia
6 | France | Canada | Nigeria | USA
7 | Russia | China | Russia | Indonesia
8 | Australia | France | Canada | Pakistan
9 | Philippines | Nigeria | Germany | Brazil
10 | Netherlands | Russia | Pakistan | Canada

The all-users and mobile-users columns are not too dissimilar, though some of the “poorer” countries like the Philippines and Nigeria are higher up the table on mobile (desktop usage of this site seems higher in Western countries).

However, looking at those with Save Data support (the same as the first table I showed), it is a completely different view, with India overtaking the USA for the top spot, and the Philippines shooting right up to number three. And finally, looking at those with Save Data actually turned on, it’s an unrecognizable ordering compared to the first column.

Using signals like Save Data allows you to help those users that need help the most, compared to traditional analytics of looking at all users or even segmenting by device type.

I mentioned earlier that Save Data is only available in Chromium-based browsers, meaning we’re ignoring Safari users (a sizable proportion of mobile users), and Firefox. However, plenty of research (including the stats for our own site here, and others by the likes of Alex Russell) has shown that Android devices are the platform of choice in poorer countries with slower networks. This is hardly surprising given the cost difference between Android and iOS devices, but using the signals offered only by those devices doesn’t mean neglecting half of your user base; it means concentrating on the users that need the most help.

Additionally, as I mentioned in the previous article, the Core Web Vitals initiative being measured only in Chrome browsers (and not other Chromium browsers like Edge or Opera) is putting a spotlight on these users, while at the same time those are the users supporting this API and others to allow you to address them.

So, while I wish there wasn’t this inequality in this world, and while I wish all browsers would support these options better, I still believe that using these options to customize the delivery better is the right thing to do, and the fact that they are only available in Chromium-based browsers at the moment is not a reason to ignore these options.

How To Act Upon Save Data

How exactly websites use this information is entirely up to the website. In the past, Chrome used to perform changes to websites by proxying requests via its servers (similar to how Opera Mini works), but doing that is usually frowned upon these days. With the increase in the use of HTTPS, site content is more secure, in part to avoid any such interference (Chrome never performed these automatic optimizations on HTTPS sites, though as the browser it could in theory). Chrome will soon also be sunsetting this automatic altering of content on HTTP sites. So, now it’s down to websites to make changes as they see fit if they want to act upon this user signal.

Websites should still deliver the core experience of the website, but drop optional extras. For Smashing Magazine, that involved dropping some of our web fonts. For others, it might involve using smaller images or not loading videos by default. Of course, for web performance reasons you should always use the smallest images you can, but in these days of high-density mobile screens, many prefer to serve high-quality images to take advantage of those beautiful screens. If a user has indicated that their preference is to save data, you could use that as a signal to drop down a quality level there; the picture may not look quite as nice, but it still gets the message across.
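
As a rough sketch of what “dropping down a level” could look like on the client (the image URLs here are hypothetical; the same decision could equally be made server-side using the Save-Data request header):

const saveData = navigator.connection && navigator.connection.saveData;

const img = document.querySelector('.hero-image');
if (img) {
  img.src = saveData
    ? '/images/hero-small.jpg'   // smaller, more heavily compressed variant
    : '/images/hero-large.jpg';  // full-quality variant for everyone else
}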

Tim Vereecke gave a fantastic talk on some of the Data S(h)aver strategies he uses on his site for users with this Save Data preference, including showing fewer articles by default, loading fewer items when reaching the bottom of infinite-scroll pages, removing icon fonts, reducing the number of ads, not auto-playing videos, and loads more tips and tricks, some of which he’s summarised in an accompanying article.

One important point that Tim noted is using Save Data might not always improve performance. Some of the techniques he uses like loading less or turning off prefetching of likely future pages will result in data saving, but with the downside of loading taking a bit longer if users do want to see that content. In general, however, reducing data usually results in web performance gains.

Is Save Data The Only Option?

Save Data is a great API in my opinion, and I wish more sites used it, and more browsers supported it! The fact that the user has explicitly asked sites to send less data means doing that is acting upon their wishes.

The downside of Save Data, however, is that users have to know to enable this. While many Smashing Magazine readers may be more technical and may know about this option or may be comfortable delving into the settings of their browsers, others may not. Additionally, with the aforementioned change of Chrome removing the Save Data browser option, and perhaps switching to using the OS-level option, we may see some changes in its usage (for better or worse!).

So, what can we do to try to help users who don’t have this set? Well, there are a few more signals we can use, as they also might indicate users who might struggle with the full experience of the website. However, as we are making that decision for them (unlike Save Data which is an explicit user choice), we should tread carefully with any assumptions we make. We may even want to highlight to users that they are getting a different experience, and offer them a way of opting out of this. Perhaps this is a best practice even for those using Save Data, as perhaps they’re unaware or had forgotten that they turned this setting on, and so are getting a different experience.

In a similar vein, it’s also possible to offer a Save Data-like experience to all users (even in browsers and operating systems that don’t support it) with a front-end setting, and then perhaps saving this value to a cookie and acting upon that (another trick Tim mentioned in his talk).

For the remainder of this article, I’d like to look at alternatives to Save Data that you can also act upon to customize your sites. In my opinion, these should be used in addition to Save Data, to squeeze a little more on top.

Other User Preference Signals

First up, we will look at preferences that, like Save Data, a user can turn on and off. A new breed of user preference CSS media queries has been launched recently; they are being standardized in the Media Queries Level 5 draft specification, and many are already available in browsers. These allow web developers to change their websites based on various user preferences:

  • prefers-reduced-motion
    This indicates the user would prefer fewer motions, perhaps due to vestibular motion disorders. Adam Argyle has made a point of highlighting that reduced motion != no motion. Just tone it down a bit. If you were acting on the save data option, you wouldn’t hold back all data!
  • prefers-reduced-transparency
    To aid readability for those that find it difficult to distinguish content with translucent backgrounds.
  • prefers-contrast
    Similar to the above, this can be used as a request to increase the contrast between elements.
  • forced-colors
    This indicates the user agent is using a reduced color pallet, typically for accessibility reasons, such as Windows High Contrast mode.
  • prefers-color-scheme
    This can be set to light or dark to indicate the user's preferred color scheme.
  • prefers-reduced-data
    The CSS media query version of Save Data mentioned above.
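
These are CSS media queries, but they can also be read from JavaScript via matchMedia if script-driven behavior needs to be toned down too. A minimal sketch for prefers-reduced-motion (the reduced-motion class name is just an illustration):

const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

if (prefersReducedMotion.matches) {
  // Tone animations down rather than removing them entirely.
  document.documentElement.classList.add('reduced-motion');
}

// React if the user changes the setting while the page is open.
prefersReducedMotion.addEventListener('change', (event) => {
  document.documentElement.classList.toggle('reduced-motion', event.matches);
});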

Only some of these have an impact on web performance, which is my area of expertise and the original starting point for this article with Save Data. However, they are all important user preferences — particularly when considering the accessibility implications for motion sensitivity, and the vision issues covered by the transparency, contrast, and even color scheme options. For more information, check out a previous Smashing Magazine article deep-diving into prefers-reduced-motion — the oldest and most well supported of these options.

Network Signals

Returning to items that can optimize web performance, the Effective Connection Type API is a property of the Network Information API and can be queried in JavaScript with the following code (again, only in Chromium browsers for now):

navigator.connection.effectiveType;

This then returns one of four string values (4g, 3g, 2g, or slow-2g) — the theory being that you can reduce the network needs when the connection is slower and so give a faster experience even to those on slower networks. There are a few downsides to ECT. The main one is that the definitions of the 4 types are all fixed, and based on quite old network data. The result is that nearly all users now fall into the 4g category, a few into the 3g, and very few into the 2g or slow-2g categories.

Returning to our Indian mobile users, who we saw in the last article were getting much worse experiences, 84.2% are reported as 4g, 15.1% 3g, 0.4% 2g, and 0.3% slow-2g. It’s great that technology has advanced so that this is the case, but our dependency on it has grown too, and it does mean that its use as a differentiator of “slower” users is already limited and becoming more so as time goes on. Being able to identify the 16% of slowest users is not to be sniffed at, but it’s a far cry from the 63% of users asking us to Save Data in that region!

There are other options available in the navigator.connection API, but without the simplicity of a small number of categories:

navigator.connection.rtt;
navigator.connection.downlink;

Note: For privacy reasons, these return a rounded number, rather than a precise number, to avoid them being used as a fingerprinting vector. This is why we can’t have nice things. Still, for the non-tracking purposes, an imprecise number is all we need anyway.

The other downside of these APIs is that they are only available as a JavaScript API (where it’s thankfully very easy to use), or as a Client Hint HTTP Header (where it’s not as easy to use).

Client Hints HTTP Headers

The Save-Data HTTP header is a simple HTTP header sent with all requests when a user has this setting turned on. This makes it nice and easy for backends to use. However, we can’t get other information like ECT in similar always-on HTTP headers without severely bulking up every request for web browsing when the vast majority of websites will not use it. It would also introduce privacy risks by making more information available about our users than strictly needed.

Client Hints are a way to work around those limitations: none of this extra information is sent by default, and instead websites “opt in” to the information they will actually make use of. They do this by letting browsers know, with the Accept-CH HTTP header, which Client Hint headers the page will make use of. For example, in the response to the initial request, we could include this HTTP header:

accept-ch: ect, rtt, downlink

This can also be included in a meta element in the page contents:

<meta http-equiv="Accept-CH" content="ECT, RTT, Downlink">

This then means that any future requests to this website will include those Client Hint HTTP headers, as well as the usual HTTP headers:

downlink: 10
ect: 4g
rtt: 50

Important! If you make use of Client Hints and return different results for the same URL based on them, do remember to include the client hint headers you are altering content upon in your Vary header, so any caches are aware of this and won’t serve the cached page for future visits unless they also have the same client hint values set.
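
As a minimal sketch of the server side (assuming a Node.js Express backend; renderPage is a stand-in for your own templating, and any backend can do the equivalent), the three pieces fit together like this:

const express = require('express');
const app = express();

function renderPage({ lightweight }) {
  // Hypothetical: swap heavy embeds or images for lighter ones when needed.
  return lightweight
    ? '<html><body>Lightweight page</body></html>'
    : '<html><body>Full page</body></html>';
}

app.get('/', (req, res) => {
  res.set('Accept-CH', 'ECT, RTT, Downlink'); // opt in to these hints for future requests
  res.set('Vary', 'ECT');                     // cache responses separately per ECT value

  const ect = req.get('ECT') || '4g';         // hint is absent on the very first request
  const lightweight = ect === '2g' || ect === 'slow-2g';

  res.send(renderPage({ lightweight }));
});

app.listen(3000);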

You can view all the client hints available for your browser at https://browserleaks.com/client-hints (hint: use a Chromium-based browser to view this website or you won’t see much!). This website opts into all the known Client Hints to show the potential information leaked by your browser, but each site should only enable the hints they will use. Client Hints are also, by default, only sent on requests to the original origin and not to third-party requests loaded by the page (though this can be enabled through the use of the Permissions-Policy header).

The main downside of this two-step process, which I agree is absolutely necessary for the reasons given above, is that the very first request to a website does not get these client hints, and this is in all likelihood the one that would benefit most from savings based on them.

The BrowserLeaks demo above actually cheats, by loading that data in an iframe rather than in the main document, to get around this. I wouldn’t recommend that option for most sites, meaning you are either left with using the JavaScript APIs instead, only optimizing for non-first page visits, or using the Client Hint information only on subresource requests (media, CSS, or JavaScript files). That’s not to say using them on subresource requests is not powerful (it is particularly useful for image CDNs), but the fastest website is one that can start rendering all the critical content from the first response.

Device Capability Signals

Moving on from User and Network signals, we have the final category of device signals. These APIs explain the capabilities of the device, rather than the connection, including:

API | JavaScript API | Client Hint | Example output
Number of processors | navigator.hardwareConcurrency | N/A | 4
Device pixel ratio | devicePixelRatio | Sec-CH-DPR, DPR | 3
Device memory | navigator.deviceMemory | Sec-CH-Device-Memory, Device-Memory | 8
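
All three can be read from JavaScript like this (a small sketch; the 2 GB threshold is an illustrative heuristic, not something the APIs themselves define):

const cores = navigator.hardwareConcurrency || 1;
const dpr = window.devicePixelRatio || 1;
const memory = navigator.deviceMemory || 4; // Chromium-only, capped at 8 for privacy

// Example heuristic: treat devices with 2 GB of RAM or less as "low end".
const lowEndDevice = memory <= 2;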

I’m not entirely convinced the first is that useful, as nearly every device has multiple processors now and it’s usually the power of those cores that matters more than their number. However, the next two have a lot of potential for optimizing for.

DPR has long been used to serve responsive images, usually through srcset or media queries rather than the above APIs, but the JavaScript and Client Hint header options have been utilized less by websites (though many image CDNs support sending different images based on Client Hints). Utilizing them more could lead to valuable optimizations for sites — beyond the static media use cases we’ve typically seen up until now.

The one that I think could really be used as a performance indicator is Device Memory. Unlike the number of processors, or DPR, the amount of RAM a device has is often a great indicator as to whether it’s a “high end” device, or a cheaper, more limited device. I was encouraged to investigate how this correlated to Core Web Vitals by Gilberto Cocchi after my last article and the results are very interesting as shown in the graphs below. These were created with a customized version of the Web Vitals Report, altered to allow reporting on 4 dimensions.

Largest Contentful Paint (LCP) showed a clear correlation between poor LCP and low RAM, with the 1 GB and 2 GB RAM p75 scores being red and amber. Even though the higher RAM sizes both had green scores, there was still a clear and noticeable difference, particularly visible on the graph.

Whether this is directly caused by the lack of RAM, or that RAM is just a proxy measure of other factors (high end, versus low-end devices, device age, networks those devices are run on…etc.), doesn’t really matter at the end of the day. If it’s a good proxy that the experience is likely poorer for those users, then we can use that as a signal to optimize our site for them.

Cumulative Layout Shift (CLS) has some correlation, but even at the lowest memory is still showing green:

This is perhaps not so surprising, since CLS can’t really be countered by the power of devices or networks. If a shift is going to happen, the browser will notice — even if it happens so fast that the user barely noticed.

Interestingly, there’s much less correlation for First Input Delay (FID). Note also that FID is often not measured, so can result in breaks in the chart when there are few users in that category — as shown by the 1GB devices series which has few data points.

To be honest, I would have expected Device Memory to have a much bigger impact on FID (whether directly, or indirectly for the reasons as discussed in the LCP section above), and again perhaps reflects that this metric isn’t actually that difficult to pass for many sites, something the Chrome team is well aware of and are working on.

For privacy reasons, device memory is basically only reported as one of a capped, fixed set of floating-point numbers: 0.25, 0.5, 1, 2, 4, 8, so even if you have 32 GB of RAM that will be reported as 8. But again, that lack of precision is fine as we’re probably only interested in devices with 2 GB of RAM or less, based on the above stats — though the best advice would be to measure your own web visitors and base your decisions on that. I just hope over time, as technology advances, we’re not put into a similar situation as ECT where everything migrates to the top category, making the signal less useful. On the plus side, this should be easier to change just by increasing the upper capping amount.

Measure Your Users

The last section, on correlating Device Memory to Core Web Vitals, brings about an important topic: do not just take for granted that any of these options will prove useful for your site. Instead, measure your user population to see which of these options will be useful for your users.

This could be as simple as logging the values for these options in a Google Analytics Custom Dimension. That’s what we did here at Smashing for a number of them, and how we were able to create the graphs above to see the correlation, as we were then able to slice and dice these measures against other data in Google Analytics (including the Core Web Vitals, which we already log in Google Analytics using the web-vitals library).
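
As a sketch of how that logging might look (assuming web-vitals v3 and gtag.js; the field names are hypothetical and would need to be mapped to custom dimensions in your Google Analytics property):

import { onLCP, onCLS, onFID } from 'web-vitals';

const connection = navigator.connection || {};

function sendToAnalytics({ name, value, id }) {
  // gtag is the global provided by the standard Google Analytics snippet.
  gtag('event', name, {
    value: Math.round(name === 'CLS' ? value * 1000 : value),
    metric_id: id,
    save_data: connection.saveData ? 'on' : 'off',
    effective_connection_type: connection.effectiveType || 'unknown',
    device_memory: navigator.deviceMemory || 'unknown',
  });
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onFID(sendToAnalytics);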

Alternatively, if you already use one of the many RUM solutions out there, some or all of these may already be measured, and you may already have the data to help you start making decisions as to whether to use these options or not. And if your RUM library of choice is not tracking these metrics, then maybe suggest that they do, to benefit you and their other users.

Conclusion

I hope this article will convince you to consider these options for your own sites. Many of these options are easy to use if you know about them and can make a real difference to the users struggling the most. They also are not just for complicated web applications but can be used even on static article websites.

I’ve already mentioned that this site, smashingmagazine.com, makes use of the Save Data API to avoid loading web fonts. Additionally, it uses the instant.page library to prefetch articles on mouse hover — except for slow ECTs or when a user has specified the Save Data option.
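
That kind of gating is straightforward to express. A sketch of the idea (the script path is illustrative, not the site’s actual setup):

const connection = navigator.connection || {};
const slowConnection = ['slow-2g', '2g'].includes(connection.effectiveType);

// Only add the prefetching enhancement for users who haven't asked to save
// data and aren't on a slow connection.
if (!connection.saveData && !slowConnection) {
  const script = document.createElement('script');
  script.src = '/js/instantpage.min.js';
  script.type = 'module';
  document.body.appendChild(script);
}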

The Web Almanac (another site I work on) is another seemingly simple article website, where each chapter makes use of lots of graphs and other figures. These are initially loaded as lazy-loaded images and then upgraded to Google Sheet embeds, which have a handy hover effect to see the data points. The Google Sheet embeds are actually a little slow and resource-intensive, so this upgrade only happens for users that are likely to benefit from it: those on desktop viewport widths, when Save Data is not turned on, when we’re on a fast connection according to ECT, and when a high-resolution canvas is supported (not covered in this article, but old iPads claimed to support this when they did not).

I encourage you to think about which parts of your website you could limit for some users. Let us know in the comments how you’re using them.