Gutenberg 9.8 Brings Rounded Borders To the Group Block and Moves the Site Editor Canvas Into an Inline Frame

Gutenberg 9.8 launched yesterday with a few minor UI improvements. The development team added an initial implementation for border-radius support for the Group block that theme authors can opt into. They also moved the site editor into an iframe element to remove CSS conflicts from the global admin styles.

Those who have been testing Full Site Editing should be pleased that they will no longer need to deal with the seemingly never-ending creation of auto-drafts for templates and template parts. Good riddance. They would have inevitably caused user confusion in the long run. The change took around a month of discussion and work, but it reduced a complex and fragile process into a more stable system for the long term.

While the previous plugin release saw barely more than a handful of bug fixes, version 9.8 jumped back to over two dozen. A Gutenberg update without at least that many just does not feel right.

Minor UI Improvements

The latest version of the plugin improves the UI when working with the Spacer block. Previously, when a user selected the block, it appeared as a light gray rectangle. Now, it is semi-transparent, which allows whatever is in the background, such as a Cover block with a background image, to show through. This change should help users more easily make size adjustments in cases where viewing the background is necessary.

Semi-transparent Spacer block when selected.

While I hope the Spacer block will eventually die a slow and agonizing death as it is replaced by more appropriate margin and padding block options, this change does help in the interim.

In a follow-up to the UI improvements in Gutenberg 9.7, work on block variations continues. Variations use one block as the foundation for multiple versions of the same block. The most common example is the Embed block, which has YouTube, Vimeo, and other variations. Before 9.7, these variations shared the same generic icon, name, and description in the block inspector and navigation instead of showing variation-specific information.

Gutenberg 9.8 builds on the trend of using the variation’s data where it makes sense. The block switcher (transform) button in the editor toolbar now displays the variation’s icon.

Variation icon in use in the block switcher.

It is a small change, but it shows the development team’s continued devotion to polishing the editor interface.

Loading the Site Editor Canvas in an iframe

Gutenberg 9.8 separates the canvas area of the site editor into an iframe. This separation means that global admin styles do not bleed into or override styles within the editor itself. The good news is this is a first step toward doing the same in the post editor too.

This is a change I have been waiting for since the inception of the block editor. From a theme development and design standpoint, styling the editor to match the front end is rife with issues. It has meant nesting CSS selectors where it should have been unnecessary. It has meant adding a few !important rules to overwrite what seem like oddities in the core CSS. While styling the block editor has improved by leaps and bounds in the past couple of years, it can still often be a pain.

WordPress core committer Ella van Durpe listed the benefits of moving the canvas into an iframe:

  • There would be no admin CSS bleed at all. This is something we’ve been struggling with since the beginning.
  • There would be no need to simulate media queries, which is arguably technically more difficult than using an iframe.
  • Relative units like (r)em and vw/vh just work.
  • For a full site, a theme stylesheet can be just dropped in the editor without any adjustment. I think this is important as it makes the life of theme authors much easier.
  • It’s possible to have one selection per window, so one in the admin and one in the content. This is useful for e.g. the link UI where the selection in the content can be kept while the selection is also in an input element (for the URL). Maybe be useful in other cases.

While I find it tough to believe that theme stylesheets would just work without a hitch — does such a world exist? — they should work far better than in the past. There are likely items theme authors will need to contend with, but they should be minimal. Developers should keep a close eye on future development here.

Border Radius Support for the Group Block

As part of Gutenberg’s experimental feature set, the Group block now supports a border radius option. However, end-users will not automatically see it in the block inspector. This is an opt-in feature for themes at the moment. Presumably, it will be a part of the default set of options for several blocks in the future.

Setting the border-radius value for the Group block.

Theme authors who want to add support will need to drop the following code snippet into their experimental-theme.json file and edit the radius value:

"core/group" : {
        "styles" : {
                "border" : {
                        "radius" : "50px"
                }
        }
}

This will allow theme authors to set the default border-radius for the Group block. However, it will not hand over control to users. For that, themes will need to add the following snippet under the settings section of their experimental-theme.json file:

"border" : {
        "customRadius" : true
}
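
Putting the two together, a sketch of the relevant section of experimental-theme.json might look like this (the schema is experimental and subject to change; the block-context nesting shown here is an assumption based on the two snippets above, and the radius value is just an example):

"core/group" : {
        "settings" : {
                "border" : {
                        "customRadius" : true
                }
        },
        "styles" : {
                "border" : {
                        "radius" : "50px"
                }
        }
}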

I tested this with a modified version of the TT1 Blocks theme without issue. Mostly, I am looking forward to more styling options like this in future iterations of the plugin.

useStateInCustomProperties

In my recent “Custom Properties as State” post, one of the things I mentioned was that theoretically, UI libraries, like React and Vue, could automatically map the state they manage over to CSS Custom Properties so we could use that state right there if we wanted.

Someone should make a useStateWithCustomProperties hook or something to do that. #freeidea

Andrew Bloyce took me up on that idea.

It works just like I had hoped. The hook returns a component that is the Custom Property “boundary” and any state you pass it is mapped to those custom properties. Basic demo:
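
I have not dug into Andrew's source, but a minimal sketch of the idea might look something like this (names and details are hypothetical, not his actual code):

import { useState } from "react";

// Maps every key in state to a --custom-property on a wrapper element.
function useStateWithCustomProperties(initialState) {
  const [state, setState] = useState(initialState);

  // The returned component is the custom property "boundary": anything
  // rendered inside it can use the properties in its CSS.
  function Boundary({ children, ...props }) {
    const style = {};
    for (const [key, value] of Object.entries(state)) {
      style[`--${key}`] = value;
    }
    return <div style={style} {...props}>{children}</div>;
  }

  return [Boundary, state, setState];
}

// Usage: const [Boundary, state, setState] = useStateWithCustomProperties({ hue: 200 });
// ...and in CSS: .swatch { background: hsl(var(--hue), 90%, 50%); }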

This is clever and useful already, but I’m tellin’ ya, this will be extremely useful should the concept of higher level custom properties land. The idea is that you could flip one custom property and have a whole block of styling change, which is what we already enjoy with media queries, and you know how useful those are.



Hiding a part of a list

Hii :)

I have a list which is presented on one line, so I am using CSS 'float: left' to do this. The list has 3 entries. It should look like this:

<ol>
    <li> 1 </li>
    <li> text </li> 
    <li> 1.30 </li>
</ol>

However, the way it is laid out for me, the middle entry doesn't fit properly, so what I have done is set the overflow property to hidden so that hovering over it shows the full name. However, when I hover over the text, it then appears below the 3rd entry.

Is there a way to hide the 3rd entry when hovering over the second?

Thank youuu :)

What is the Best Marketing Platform For E-commerce Website?

I want to know expert opinions regarding the best marketing platforms for e-commerce websites. I am working as a Digital Marketing Executive at Expertek Cyber Solutions Inc. I have been assigned to market two websites, one in the towels industry and the other in the apparel industry, and both websites target US regions. So it would be a great favor to me if any expert e-commerce marketer could suggest a good, valuable online marketing platform like Google Ads, or some valuable backlinking websites, to enhance SEO keyword rankings.

how to end the tkinter program

I have a set of code for a GUI that runs using a dictionary, but I can't get the code to exit once the user presses the last button, named 'end'.

import sys
import tkinter as tk
from functools import partial
class Situation(tk.Frame):

    def __init__(self, master=None, story='', buttons=[], **kwargs):
        tk.Frame.__init__(self, master, **kwargs)

        story_lbl = tk.Label(self, text=story, justify=tk.LEFT, anchor=tk.NW, font=("Play", 10))
        story_lbl.pack()

        for btn_text, new_situation in buttons:
            btn = tk.Button(self, text=btn_text, command=partial(self.quit_, new_situation),padx=10, pady=10)
            btn.pack()

    def quit_(self, new_situation):
        self.destroy()
        load(new_situation)

def load(situation=None):
    frame = Situation(root, **SITUATIONS.get(situation))
    frame.pack()
SITUATIONS = {

    None:{
        'story':
"""
true or false

Tristan has 80+ subscribers?
""",
'buttons':[
    ('true','Correct'),
    ('false','incorrect')

    ],
    },
    'Correct':{
       'story':
"""
question 1=correct

question 2
tristan's youtube channel name is what?
""",
'buttons':[
    ('mangle200','correct1'),
    ('Tristan Lauzon','incorrect1'),
    ('Winter Wolf','incorrect1')
    ],
        },
    'incorrect':{
        'story':
"""
question 1 = incorrect
question 2
tristan's youtube channel name is what?

""",

'buttons':[
    ('mangle200','correct1.1'),
    ('Tristan Lauzon','incorrect1.1'),
    ('Winter Wolf','incorrect1.1')
    ],
        },
    'correct1':{
        'story':
"""
question 1 = correct
question 2 = correct
""",
'buttons':[
    ('end')
    ],
        },
    'correct1.1':{
        'story':
"""
question 1 = incorrect
question 2 = correct
""",
'buttons':[
    ('end')
    ],
        },
    'incorrect1':{
        'story':
"""
question 1 = correct
question 2 = incorrect

""",
'buttons':[
    ('end')
    ],
        },
    'incorrect1.1':{
        'story':
"""
question 1 = incorrect
question 2 = incorrect
""",
'buttons':[
    ('end')
    ],
        },
    }

def beginning():
    start_button.destroy()
    load() # load the first story

#WINDOW
root = tk.Tk()
root.geometry('500x500-500-300')
root.title('multiple choice quiz')
root.attributes('-fullscreen', True)
#START
start_button = tk.Button(root, text="START", command=beginning)
start_button.place(relx=.5, rely=.5, anchor='c')


#THE LOOP
root.mainloop()
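
For reference, here is a sketch of one possible fix. It assumes the final situations get real (text, target) tuples, e.g. 'buttons': [('end', 'quit')], so the unpacking in __init__ keeps working, and quit_() treats the 'quit' target as a signal to close the window:

    def quit_(self, new_situation):
        self.destroy()
        if new_situation == 'quit':   # the 'end' button was pressed
            root.destroy()            # ends root.mainloop() and closes the window
        else:
            load(new_situation)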

Collective #645

My Favorite Typefaces of 2020

After a decade, ILT’s annual Favorite Fonts list is back. The top-ten favorite typefaces is now joined by another 50 typefaces in the extended Honorable Mentions list.

Read it

Noise Planets

Atul Vinayak studies and replicates some generative line artwork by Tyler Hobbs using p5js.

Read it

Gutenberg Contributors Consider Implementing a Bot to Close Stale Issues

Gutenberg project contributors are considering implementing a stale bot to tame the repository’s overgrown issues queue, which currently has 2,733 open issues. Stale bots are usually employed to automatically close “stale” issues and PRs based on a predefined set of parameters for inactivity.

“The current recommendation is to set our policy to a 180-day of no activity, so if no comments or commits are on an issue or PR in 180 days, then the bot will post a comment to the issue alerting the user it will be closed in 7-days due to inactivity,” Marcus Kazmierczak proposed.

One important concern is getting the tone right for the automatically-generated message. When you’re employing bots on a widely used open source project, they had better be friendly. A chilly, indifferent bot can unwittingly turn away potential contributors with the wrong kind of messaging. Kazmierczak proposed the following message:

This is an auto-generated message to let you know that this issue has gone 180 days without any activity and meets the project’s definition of stale. This will be auto-closed if there is no new activity over the next 7 days. If the issue is still relevant and active, you can simply comment with a “bump” to keep it open, or add the “[Status] Not Stale” label. Thanks for keeping our repository healthy!

Participants in the discussion on the proposal are divided on the best approach. Daniel Llewellyn, one of the most vocal opponents of using a stale bot, contends that automatically closing issues sends the wrong message.

“If we care about users and that they trust that we will fix their problem then automatically closing their issue gives them the signal that we don’t,” Llewellyn said.

“If you don’t want to fix a problem then it is better for a human to explain why the problem won’t be fixed and personally close the issue. Automating this on the assumption that because nobody has commented in a while means it isn’t important is bad!”

Joy Reynolds agreed with this assessment, noting that closing issues through any means can be discouraging.

“I’ve had issues closed by humans for being stale, also, and it isn’t any better,” Reynolds said. “I’ve had issues closed because someone created a new issue on the same thing. This loses all the history and the watchers.

“I’ve also had an issue closed at Launchpad for being stale (and their system used only two weeks as a time frame). That served no purpose at all. It only makes people go away, frustrated.”

Kazmierczak reiterated in the comments that the bot can be configured to skip issues labeled as bugs and that issues and PRs can be bumped to reset the 6-month clock.

“The overall goal of the proposal is to improve feedback and responses to issues by ensuring what’s there is relevant,” Kazmierczak said.

Auto-closing issues is the most controversial part of the plan. The general consensus in the comments leans toward using the bot for labeling and triaging so that humans can close issues manually later.

“My preference would be for a bot to alert humans in a slack channel when a ticket is declared stale and become progressively more insistent until a human responds,” Peter Wilson said.

Milana Cap suggested using a bot to nudge the ticket author as a compromise between “being friendly and thoughtful to contributors while keeping maintainers sane.”

Whatever approach contributors land on, excluding tickets marked as bugs is going to be critical for making the stale bot productive. Otherwise, it becomes just a fancy way of kicking the can down the road, delaying the inevitable.

In a recent post titled “Github Stale Bots: A False Economy,” software developer Ben Winding wrote about why stale bots don’t deliver what maintainers are aiming to achieve. Based on his experience with the Angular repository’s bot, Winding summarized the stale bot’s effect on the issues queue:

  1. Reduced the metric of Open Issues in GitHub
  2. Made duplicate issues far more likely
  3. Increased friction of users reporting that the issue still exists
  4. Ultimately decreased the quality of the software, as the issues don’t accurately reflect reality

If the Gutenberg repository’s stale bot can be configured not to close bugs and used to maximize human involvement, it will be less likely to deter people from reporting issues. Feedback on the proposal is open until January 29, 2021. Kazmierczak is seeking input on the bot’s implementation, specifically its time threshold and messaging.

How to Play and Pause CSS Animations with CSS Custom Properties

Let’s have a look at CSS @keyframes animations, and specifically at how you can pause and otherwise control them. There is a CSS property specifically for this, one that can be controlled with JavaScript, but there is plenty of nuance to get into in the details. We’ll also look at my preferred way of setting this up, which gives lots of control. Hint: it involves CSS custom properties.

The importance of pausing animations

Recently, while working on the CSS-powered slideshow you’ll see later in this article, I was inspecting the animations in the Layers panel of DevTools. I noticed something interesting I’d never thought about before: animations not currently in the viewport were still running!

Maybe it’s not that unexpected. We know videos do that. Videos just go on until you pause them. But it made me wonder: do these playing animations still use the CPU/GPU? Do they consume unnecessary processing power, slowing down other parts of the page?

Inspecting frames in the Performance panel in DevTools didn’t shed any more light on this since I couldn’t see “offscreen”-frames. But, when I scrolled away from my “CSS Only Slideshow” at the first slide, then waited and scrolled back, it was at slide five. The animation hadn’t paused. Animations just run and run, until you pause them.

So I began to look into how, why and when animations should pause. Performance is an obvious reason, given the findings above. Another reason is control. Users not only love to have control, but they should have control. A couple of years ago, my wife had a really bad concussion. Since then, she has avoided webpages with too many animations, as they make her dizzy. As a result, I consider accessibility perhaps the most important reason for allowing animations to pause.

All together, this is important stuff. We’re talking specifically about CSS keyframe animations, but broadly, that means we’re talking about:

  1. Performance
  2. Control
  3. Accessibility

The basics of pausing an animation

The only way to truly pause an animation in CSS is to use the animation-play-state property with a paused value.

.paused {
  animation-play-state: paused;
}

In JavaScript, the property is “camelCased” as animationPlayState and set like this:

element.style.animationPlayState = 'paused';

We can create a toggle that plays and pauses the animation by reading the current value of animationPlayState:

const running = element.style.animationPlayState === 'running';

…and then setting it to the opposite value:

element.style.animationPlayState = running ? 'paused' : 'running';

Setting the duration

Another way to pause animations is to set animation-duration to 0s. The animation is actually running, but since it has no duration, you won’t see any action.
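
A minimal sketch of the idea (class name hypothetical):

.paused-via-duration {
  animation-duration: 0s;
}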

If we change the value to 3s instead, the animation moves again. It works, but has a major caveat: the animation is technically still running. It is merely toggling between its initial position and where it is next in the sequence.

Straight up removing the animation

We can remove the animation entirely and add it back via classes, but like animation-duration, this doesn’t actually pause the animation.

.remove-animation {
  animation: none !important;
}

Since true pausing is really what we’re after here, let’s stick with animation-play-state and look into other ways of using it.

Using data attributes and CSS custom properties

Let’s use a data-attribute as a selector in our CSS. We can call those whatever we want, so I’m going to use a [data-animation]-attribute on all the elements where I’d like to play/pause animations. That way, it can be distinguished from other animations:

<div data-animation></div>

That attribute is the selector, and the animation shorthand is the property where we’re setting everything. We’ll toss in a bunch of CSS custom properties (using Emmet abbreviations) as values:

[data-animation] {
  animation:
    var(--animn, none)
    var(--animdur, 1s)
    var(--animtf, linear)
    var(--animdel, 0s)
    var(--animic, infinite)
    var(--animdir, alternate)
    var(--animfm, none)
    var(--animps, running);
}

With that in place, any animation with this data-attribute will be perfectly ready to accept animations, and we can control individual aspects of the animation with custom properties. Some animations are going to have something in common (like duration, easing-type, etc.), so fallback values are set on the custom properties as well.

Why CSS custom properties? First of all, they can be read and set in both CSS and JavaScript. Secondly, they help significantly reduce the amount of CSS we need to write. And, since we can set them within @keyframes (at least in Chrome at the time of writing), they offer new and exciting ways to work with animations!

For the animations themselves, I’m using class selectors and updating the variables from the [data-animation]-selector:

<div class="circle a-slide" data-animation></div>

Why a class and a data-attribute? At this stage, the data-animation attribute might as well be a regular class, but we’re going to use it in more advanced ways later. Note that the .circle class name actually has nothing to do with the animation — it’s just a class for styling the element.

/* Animation classes */
.a-pulse {
  --animn: pulse;
}
.a-slide {
  --animdur: 3s;
  --animn: slide;
}

/* Keyframes */
@keyframes pulse {
  0% { transform: scale(1); }
  25% { transform: scale(.9); }
  50% { transform: scale(1); }
  75% { transform: scale(1.1); }
  100% { transform: scale(1); }
}
@keyframes slide {
  from { margin-left: 0%; }
  to { margin-left: 150px; }
}

We only need to update the values that will change, so if we use some common values in the fallback values for the data-animation selector, we only need to update the name of the animation’s custom property, --animn.

Example: Pausing with the checkbox hack

To pause all the animations using the ol’ checkbox hack, let’s create a checkbox before the animations:

<input type="checkbox" data-animation-pause />

And update the --animps property when checked:

[data-animation-pause]:checked ~ [data-animation] {
  --animps: paused;
}

That’s it! The animations toggle between played and paused when clicking the checkbox — no JavaScript required.

CSS-only slideshow

Let’s put some of these ideas to work!

I’ve played with the <details>-tag a lot recently. It’s the obvious candidate for accordions, but it can also be used for tooltips, toggle-tips, drop-downs (styled <select>-look-a-likes), mega-menus… you name it. It is the official HTML disclosure element, after all. Apart from the global attributes and global events that all HTML elements accept, <details> has a single open attribute, and a single toggle event. So, like the checkbox hack, it’s perfect for toggling state — but even simpler:

details[open] {
  --state: 1;
}
details:not([open]) {
  --state: 0;
}

I decided to do a slideshow, where the slides change automatically via a primary animation called autoplay, and each individual slide has its own unique secondary animation. The animation-play-state is controlled by the --animps-property. Each individual slide can have its own unique animation, defined in a --animn-property:

<figure style="--animn:kenburns-top;--index:0;">
  <img src="some-slide-image.jpg" />
  <figcaption>Caption</figcaption>
</figure>

The animation-play-state of the secondary animations are controlled by the --img-animps-property. I found a bunch of nice Ken Burns-esque animations at Animista and switched between them in the --animn-properties of the slides.

Pausing an animation from another animation

In order to prevent GPU overload, it would be ideal for the primary animation to pause any secondary animations. We noted it briefly earlier, but only Chrome (at the time of writing, and it is a bit shaky) can update a CSS Custom Property from an @keyframe animation — which you can see in the following example where the --bgc-property and --counter-properties are modified at different frames:
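
A simplified sketch of that kind of keyframe (property values hypothetical):

@keyframes color-and-count {
  0% {
    --bgc: lightblue;
    --counter: 0;
  }
  50% {
    --bgc: pink;
    --counter: 50;
  }
  100% {
    --bgc: lightgreen;
    --counter: 100;
  }
}

Since the custom properties are not registered with @property, they flip at each keyframe rather than interpolating between values.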

The initial state of the secondary animation, the --img-animps-property, needs to be paused, even if the primary animation is running:

details[open] ~ .c-mm__inner .c-mm__frame {
  --animps: running;
  --img-animps: paused;
}

Then, in the main animation @keyframes, the property is updated to running:

@keyframes autoplay {
  0.1% {
    --img-animps: running; /* START */
    opacity: 0;
    z-index: calc(var(--z) + var(--slides))
  }
  5% { opacity: 1 }
  50% { opacity: 1 }
  51% { --img-animps: paused } /* STOP! */
  100% {
    opacity: 0;
    z-index: var(--z)
  }
}

To make this work in browsers other than Chrome, the initial value needs to be running, as they cannot update a CSS custom property from a @keyframe.

Here’s the slideshow, with a “details hack” play/pause-button — no JavaScript required:

Enabling prefers-reduced-motion

Some people prefer no animations, or at least reduced motion. It might just be a personal preference, but can also be because of a medical condition. We talked about the importance of accessibility with animations at the very top of this post.

Both macOS and Windows have options that allow users to inform browsers that they prefer reduced motion on websites. This enables us to reach for the prefers-reduced-motion feature query, which Eric Bailey has written all about.

@media (prefers-reduced-motion) { ... }

Let’s use the [data-animation]-selector for reduced motion by giving it different values that are applied when prefers-reduced-motion is enabled:

  • alternate = run a different animation
  • once = set the animation-iteration-count to 1
  • slow = change the animation-duration-property
  • stop = set animation-play-state to paused

These are just suggestions and they can be anything you want, really.

<div class="circle a-slide" data-animation="alternate"></div>
<div class="circle a-slide" data-animation="once"></div>
<div class="circle a-slide" data-animation="slow"></div>
<div class="circle a-slide" data-animation="stop"></div>

And the updated media query:

@media (prefers-reduced-motion) {
  [data-animation="alternate"] {
   /* Change animation duration AND name */
    --animdur: 4s;
    --animn: opacity;
  }
  [data-animation="slow"] {
    /* Change animation duration */
    --animdur: 10s;
  }
  [data-animation="stop"] {
    /* Stop the animation */
    --animps: paused;
  }
}

If this is too generic, and you prefer having unique, alternate animations per animation class, group the selectors like this:

.a-slide[data-animation="alternate"] { /* etc. */ }

Here’s a Pen with a checkbox simulating prefers-reduced-motion. Scroll down within the Pen to see the behavior change for each circle:

Pausing with JavaScript

To re-create the “Pause all animations”-checkbox in JavaScript, iterate all the [data-animation]-elements and toggle the same --animps custom property:

<button id="js-toggle" type="button">Toggle Animations</button>

const animations = document.querySelectorAll('[data-animation]');
const jstoggle = document.getElementById('js-toggle');

jstoggle.addEventListener('click', () => {
  animations.forEach(animation => {
    const running = getComputedStyle(animation).getPropertyValue("--animps") || 'running';
    animation.style.setProperty('--animps', running === 'running' ? 'paused' : 'running');
  })
});

It’s exactly the same concept as the checkbox hack, using the same custom property: --animps, only set by JavaScript instead of CSS. If we want to support older browsers, we can toggle a class that updates the animation-play-state.

Using IntersectionObserver

To play and pause all [data-animation]-animations automatically — and thus not unnecessarily overloading the GPU — we can use an IntersectionObserver.

First, we need to make sure that no animations are running at all:

[data-animation] {
  /* Change 'running' to 'paused' */
  animation-play-state: var(--animps, paused);
}

Then, we’ll create the observer and trigger it when an element is 25% or 75% in viewport. If the latter is matched, the animation starts playing; otherwise it pauses.

By default, all elements with a [data-animation]-attribute will be observed, but if prefers-reduced-motion is enabled (set to “reduce”), the elements with [data-animation="stop"] will be ignored.

const IO = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const state = (entry.intersectionRatio >= 0.75) ? 'running' : 'paused';
      entry.target.style.setProperty('--animps', state);
    }
  });
}, {
  threshold: [0.25, 0.75]
});

const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");
const elements = mediaQuery?.matches
  ? document.querySelectorAll('[data-animation]:not([data-animation="stop"])')
  : document.querySelectorAll('[data-animation]');

elements.forEach(animation => {
  IO.observe(animation);
});

You have to play around with the threshold-values, and/or whether you need to unobserve some animations after they’ve triggered, etc. If you load new content or animations dynamically, you might need to re-write parts of the observer as well. It’s impossible to cover all scenarios, but using this as a foundation should get you started with auto-playing and pausing CSS animations!

Bonus: Adding <audio> to the slideshow with minimal JavaScript

Here’s an idea to add music to the slideshow we built. First, add an audio-tag:

<audio src="/asset/audio/slideshow.mp3" hidden loop></audio>

Then, in JavaScript:

const audio = document.querySelector('your-audio-selector');
const details = document.querySelector('your-details-selector');
details.addEventListener('toggle', () => {
  details.open ? audio.play() : audio.pause();
})

Pretty simple, huh?

I did a “Silent Movie” (with audio)-demo here, where you get to know my geeky past. 🙂



What if you could cut your hosting costs by 80%? Webiny Serverless CMS makes it possible.

Are you hosting one or more websites and using a headless CMS? Are you hosting your CMS on a virtual machine or a container, or using a SaaS solution? If so, then you’re paying for the uptime regardless of whether the server or service is serving requests. Essentially, you are paying for stuff you are not using. In this article, we’ll look at how you can change that and save up to 80% of your hosting cost along the way.

Serverless — what’s that about?

If you’re new to serverless: in short, serverless is a set of services you consume without worrying about the underlying infrastructure. There are services for compute, like AWS Lambda, which allows you to run Node.js code; services for storage, like S3; databases as a service, like DynamoDB; and many others.

The benefits of serverless are:

  1. You are billed based on your consumption
  2. There are no servers for you to manage
  3. Services scale automatically
  4. Services are more secure than your regular server

Servers are still there, but they are abstracted away — out of sight, out of mind.

Out of all the benefits, the first one plays a big role. Picture an API on a regular server or a virtual machine. If that server is not handling a new request every few seconds, there is a lot of idle time where the server is not doing anything, but you’re still paying for it.

With serverless, you pay for your consumption; if your API is not handling any requests at a given point in time, your cost is $0. To further back this case, research by Deloitte found that a larger system can save anywhere between 60-80% in infrastructure costs and up to 60% in management costs just by switching to serverless.

Although serverless sounds great, there is a downside to it. It’s quite complex and time-consuming to create new solutions from scratch, and existing solutions are not designed for such environments. This is where Webiny comes in.

Webiny Serverless CMS

To help you adopt serverless and build websites on top of this modern infrastructure, there is one solution you can use today, for free. Webiny Serverless CMS is an open source solution that comes with a few apps, including a GraphQL-based Headless CMS.

Some of its features:

  1. GraphQL API
  2. Content versioning and modeling through a UI
  3. Multi-tenancy & Multi-language support
  4. Powerful user access control
  5. Built-in image optimization and image editor
  6. Works with existing static page generators like Gatsby and others

It’s important to note that Webiny Serverless CMS is completely free and self-hosted — all you need is an AWS account.

The system is self-hosted on top of the AWS serverless offering, and your sites will benefit from it in the following ways:

  • High-availability and fault tolerance for your API
  • 99.999999999% (11 9’s) of data durability
  • Enterprise-grade secure and scalable ACL
  • Event-driven scalability — pay for what you use
  • Great performance using a global CDN
  • DDoS Protection of your APIs

All this is in the box and it takes less than 10 minutes to get up and running.

Comparing Webiny to other solutions on the market, this is what it looks like.

Get started with Webiny Serverless CMS and stop overpaying for your infrastructure.



How do I have download button point to mirrored site

My website has a mirror of the download directory in the U.K. On my download page I have 2 buttons, one for a U.S. download and one for a U.K. download. The current download page passes the filename to the download script. The download script (located in the download directory) for the U.S. download reads as follows:

<?php

$php_scripts = '../../php/';
require $php_scripts . 'PDO_Connection_Select.php';
require $php_scripts . 'GetUserIpAddr.php';

function mydloader($l_filename = NULL)
{
    $ip = GetUserIpAddr();
    if (!$pdo = PDOConnect("foxclone_data")) {
        exit;
    }
    if (isset($l_filename)) {
        header('Content-Type: application/octet-stream');
        header("Content-Disposition: attachment; filename={$l_filename}");
        header('Pragma: no-cache');
        header('Expires: 0');
        readfile($l_filename);

        $ext = pathinfo($l_filename, PATHINFO_EXTENSION);
        $stmt = $pdo->prepare("INSERT INTO download (address, filename, ip_address) VALUES (?, ?, inet_aton('$ip'))");
        $stmt->execute([$ip, $ext]);

        $test = $pdo->query("SELECT lookup.id FROM lookup WHERE inet_aton('$ip') >= lookup.ipstart AND inet_aton('$ip') <= lookup.ipend");
        $ref = $test->fetchColumn();
        $ref = intval($ref);

        $stmt = $pdo->prepare("UPDATE download SET lookup_id = '$ref' WHERE address = '$ip'");
        $stmt->execute();
    } else {
        echo "isset failed";
    }
}

mydloader($_GET["f"]);
exit;

How do I modify it to download from the mirrored download directory if they select the U.K. download button? I think the download page will need to point to a separate download script for the U.K. download, but I'm not sure how to code it.
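
One possible approach (a sketch only; the directory names and the region parameter are assumptions, not your actual layout) is to keep a single script and pass a region parameter that selects the directory before calling mydloader():

$mirrors = [
    'us' => './',
    'uk' => '../uk-mirror/',
];

$region   = $_GET['region'] ?? 'us';
$dir      = $mirrors[$region] ?? $mirrors['us'];
$filename = basename($_GET['f']);   // strip any path components

// readfile() inside mydloader() then serves the mirrored copy;
// Content-Disposition should still use the bare $filename.
mydloader($dir . $filename);

The U.S. button would then link to something like download.php?f=...&region=us and the U.K. button to download.php?f=...&region=uk (names hypothetical).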

How We Improved SmashingMag Performance

Every web performance story is similar, isn’t it? It always starts with the long-awaited website overhaul. A day when a project, fully polished and carefully optimized, gets launched, ranking high and soaring above performance scores in Lighthouse and WebPageTest. There is a celebration and a wholehearted sense of accomplishment prevailing in the air — beautifully reflected in retweets and comments and newsletters and Slack threads.

Yet as time passes by, the excitement slowly fades away, and urgent adjustments, much-needed features, and new business requirements creep in. And suddenly, before you know it, the code base gets a little bit overweight and fragmented, third-party scripts have to load just a little bit earlier, and shiny new dynamic content finds its way into the DOM through the backdoors of fourth-party scripts and their uninvited guests.

We’ve been there at Smashing as well. Not many people know it but we are a very small team of around 12 people, many of whom are working part-time and most of whom are usually wearing many different hats on a given day. While performance has been our goal for almost a decade now, we never really had a dedicated performance team.

After the latest redesign in late 2017, it was Ilya Pukhalski on the JavaScript side of things (part-time), Michael Riethmueller on the CSS side of things (a few hours a week), and yours truly, playing mind games with critical CSS and trying to juggle a few too many things.

As it happened, we lost track of performance in the busyness of day-to-day routine. We were designing and building things, setting up new products, refactoring the components, and publishing articles. So by late 2020, things got a bit out of control, with yellowish-red Lighthouse scores slowly showing up across the board. We had to fix that.

That’s Where We Were

Some of you might know that we are running on JAMStack, with all articles and pages stored as Markdown files, Sass files compiled into CSS, JavaScript split into chunks with Webpack, and Hugo building out static pages that we then serve directly from an Edge CDN. Back in 2017 we built the entire site with Preact, but then moved to React in 2019 — and use it along with a few APIs for search, comments, authentication and checkout.

The entire site is built with progressive enhancement in mind, meaning that you, dear reader, can read every Smashing article in its entirety without the need to boot the application at all. It’s not very surprising either — in the end, a published article doesn’t change much over the years, while dynamic pieces such as Membership authentication and checkout need the application to run.

The entire build for deploying around 2500 articles live takes around 6 mins at the moment. The build process on its own has become quite a beast over time as well, with critical CSS injects, Webpack’s code splitting, dynamic inserts of advertising and feature panels, RSS (re)generation, and eventual A/B testing on the edge.

In early 2020, we started the big refactoring of the CSS layout components. We never used CSS-in-JS or styled-components, but instead a good ol’ component-based system of Sass modules which would be compiled into CSS. Back in 2017, the entire layout was built with Flexbox, then rebuilt with CSS Grid and CSS Custom Properties in mid-2019. However, some pages needed special treatment due to new advertising spots and new product panels. So while the layout was working, it wasn’t working very well, and it was quite difficult to maintain.

Additionally, the header with the main navigation had to change to accommodate more items that we wanted to display dynamically. Plus, we wanted to refactor some frequently used components across the site, and the CSS used there needed some revision as well — the newsletter box being the most notable culprit. We started off by refactoring some components with utility-first CSS, but we never got to the point where it was used consistently across the entire site.

The larger issue was the large JavaScript bundle that — not very surprisingly — was blocking the main-thread for hundreds of milliseconds. A big JavaScript bundle might seem out of place on a magazine that merely publishes articles, but actually, there is plenty of scripting happening behind the scenes.

We have various states of components for authenticated and unauthenticated customers. Once you are signed in, we want to show all products in the final price, and as you add a book to the cart, we want to keep a cart accessible with a tap on a button — no matter what page you are on. Advertising needs to come in quickly without causing disruptive layout shifts, and the same goes for the native product panels that highlight our products. Plus a service worker that caches all static assets and serves them for repeat views, along with cached versions of articles that a reader has already visited.

So all of this scripting had to happen at some point, and it was draining on the reading experience even though the script was coming in quite late. Frankly, we were painstakingly working on the site and new components without keeping a close eye on performance (and we had a few other things to keep in mind for 2020). The turning point came unexpectedly. Harry Roberts ran his (excellent) Web Performance Masterclass as an online workshop with us, and throughout the entire workshop, he was using Smashing as an example by highlighting issues that we had and suggesting solutions to those issues alongside useful tools and guidelines.

Throughout the workshop, I was diligently taking notes and revisiting the codebase. At the time of the workshop, our Lighthouse scores were 60–68 on the homepage, and around 40-60 on article pages — and obviously worse on mobile. Once the workshop was over, we got to work.

Identifying The Bottlenecks

We often tend to rely on particular scores to get an understanding of how well we perform, yet too often single scores don’t provide a full picture. As David East eloquently noted in his article, web performance isn’t a single value; it’s a distribution. Even if a web experience is heavily and thoroughly optimized for all-around performance, it can’t be just fast. It might be fast to some visitors, but ultimately it will also be slower (or slow) to some others.

The reasons for it are numerous, but the most important one is a huge difference in network conditions and device hardware across the world. More often than not we can’t really influence those things, so we have to ensure that our experience accommodates them instead.

In essence, our job then is to increase the proportion of snappy experiences and decrease the proportion of sluggish experiences. But for that, we need to get a proper picture of what the distribution actually is. Now, analytics tools and performance monitoring tools will provide this data when needed, but we looked specifically into CrUX, the Chrome User Experience Report. CrUX generates an overview of performance distributions over time, with traffic collected from Chrome users. Much of this data relates to Core Web Vitals, which Google announced back in 2020, and which also contribute to and are exposed in Lighthouse.

We noticed that across the board, our performance regressed dramatically throughout the year, with particular drops around August and September. Once we saw these charts, we could look back into some of the PRs we’ve pushed live back then to study what has actually happened.

It didn’t take long to figure out that just around these times we launched a new navigation bar live. That navigation bar — used on all pages — relied on JavaScript to display navigation items in a menu on tap or on click, but the JavaScript bit of it was actually bundled within the app.js bundle. To improve Time To Interactive, we decided to extract the navigation script from the bundle and serve it inline.

Around the same time we switched from an (outdated) manually created critical CSS file to an automated system that was generating critical CSS for every template — homepage, article, product page, event, job board, and so on — and inlining critical CSS during build time. Yet we didn’t really realize how much heavier the automatically generated critical CSS was. We had to explore it in more detail.

And also around the same time, we were adjusting the web font loading, trying to push web fonts more aggressively with resource hints such as preload. This seemed to backfire on our performance efforts though, as web fonts were delaying the rendering of the content, being overprioritized next to the full CSS file.

Now, one of the common reasons for regression is the heavy cost of JavaScript, so we also looked into Webpack Bundle Analyzer and Simon Hearne’s request map to get a visual picture of our JavaScript dependencies. It looked quite healthy at the start.

A few requests were coming to the CDN, a cookie consent service Cookiebot, Google Analytics, plus our internal services for serving product panels and custom advertising. It didn’t appear like there were many bottlenecks — until we looked a bit more closely.

In performance work, it’s common to look at the performance of some critical pages — most likely the homepage and most likely a few article/product pages. However, while there is only one homepage, there might be plenty of various product pages, so we need to pick ones that are representative of our audience.

In fact, as we’re publishing quite a few code-heavy and design-heavy articles on SmashingMag, over the years we’ve accumulated literally thousands of articles that contained heavy GIFs, syntax-highlighted code snippets, CodePen embeds, video/audio embeds, and nested threads of never-ending comments.

When brought together, many of them were causing nothing short of an explosion in DOM size along with excessive main thread work — slowing down the experience on thousands of pages. Not to mention that with advertising in place, some DOM elements were injected late in the page’s lifecycle causing a cascade of style recalculations and repaints — also expensive tasks that can produce long tasks.

All of this wasn’t showing up in the map we generated for a quite lightweight article page in the chart above. So we picked the heaviest pages we had — the almighty homepage, the longest one, the one with many video embeds, and the one with many CodePen embeds — and decided to optimize them as much as we could. After all, if they are fast, then pages with a single CodePen embed should be faster, too.

With these pages in mind, the map looked a little bit differently. Note the huge thick line heading to the Vimeo player and Vimeo CDN, with 78 requests coming from a Smashing article.

To study the impact on the main thread, we took a deep dive into the Performance panel in DevTools. More specifically, we were looking for tasks that last longer than 50ms (highlighted with a red rectangle in the upper right corner) and tasks that contain style recalculations (purple bar). The first would indicate expensive JavaScript execution, while the latter would expose style invalidations caused by dynamic injections of content in the DOM and suboptimal CSS. This gave us some actionable pointers of where to start. For example, we quickly discovered that our web font loading had a significant repaint cost, while JavaScript chunks were still heavy enough to block the main thread.

As a baseline, we looked very closely at Core Web Vitals, trying to ensure that we were scoring well across all of them. We chose to focus specifically on slow mobile devices — with slow 3G, 400ms RTT and 400kbps transfer speed, just to be on the pessimistic side of things. It’s not surprising then that Lighthouse wasn’t very happy with our site either, providing solid red scores for the heaviest articles, and tirelessly complaining about unused JavaScript, CSS, offscreen images and their sizes.

Once we had some data in front of us, we could focus on optimizing the three heaviest article pages, with a focus on critical (and non-critical) CSS, JavaScript bundles, long tasks, web font loading, layout shifts and third-party embeds. Later we’d also revise the codebase to remove legacy code and use new modern browser features. It seemed like a lot of work ahead of us, and indeed we were quite busy for the months to come.

Improving The Order Of Assets In The <head>

Ironically, the very first thing we looked into wasn’t even closely related to all the tasks we’ve identified above. In the performance workshop, Harry spent a considerable amount of time explaining the order of assets in the <head> of each page, making a point that to deliver critical content quickly means being very strategic and attentive about how assets are ordered in the source code.

Now it shouldn’t come as a big revelation that critical CSS is beneficial for web performance. However, it did come as a bit of a surprise how much difference the order of all the other assets — resource hints, web font preloading, synchronous and asynchronous scripts, full CSS and metadata — has.

We turned the entire <head> upside down, placing critical CSS before all asynchronous scripts and all preloaded assets such as fonts, images etc. We broke down the assets that we’ll be preconnecting to or preloading by template and file type, so that critical images, syntax highlighting and video embeds will be requested early only for a certain type of articles and pages.

In general, we’ve carefully orchestrated the order in the <head>, reduced the number of preloaded assets that were competing for bandwidth, and focused on getting critical CSS right. If you’d like to dive deeper into some of the critical considerations with the <head> order, Harry highlights them in the article on CSS and Network Performance. This change alone brought us around 3–4 Lighthouse score points across the board.
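
As a rough sketch (file names and URLs hypothetical, details simplified), the reordered <head> follows this pattern:

<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>Page title</title>

  <!-- 1. Inlined critical CSS comes first, so rendering can start right away -->
  <style>/* critical CSS for this template */</style>

  <!-- 2. Preconnects and preloads follow, scoped to what the template needs -->
  <link rel="preconnect" href="https://cdn.example.com" />
  <link rel="preload" href="/fonts/elena-regular.woff2" as="font" type="font/woff2" crossorigin />

  <!-- 3. Asynchronous scripts -->
  <script src="/js/app.js" defer></script>

  <!-- 4. The full stylesheet and remaining metadata -->
  <link rel="stylesheet" href="/css/main.css" />
</head>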

Moving From Automated Critical CSS Back To Manual Critical CSS

Moving the <head> tags around was a simple part of the story though. A more difficult one was the generation and management of critical CSS files. Back in 2017, we manually handcrafted critical CSS for every template, by collecting all of the styles required to render the first 1000 pixels in height across all screen widths. This of course was a cumbersome and slightly uninspiring task, not to mention maintenance issues for taming a whole family of critical CSS files and a full CSS file.

So we looked into options for automating this process as a part of the build routine. There wasn’t really a shortage of tools available, so we tested a few and decided to run some experiments. We managed to get them up and running quite quickly. The output seemed to be good enough for an automated process, so after a few configuration tweaks, we plugged it in and pushed it to production. That happened around July–August last year, which is nicely visualized in the spike and performance drop in the CrUX data above. We kept going back and forth with the configuration, often having trouble with simple things like adding particular styles or removing others, e.g. cookie consent prompt styles that aren’t really included on a page unless the cookie script has initialized.

In October, we introduced some major layout changes to the site, and when looking into the critical CSS, we ran into exactly the same issues yet again — the generated outcome was quite verbose, and wasn’t quite what we wanted. So as an experiment in late October, we all bundled our strengths to revisit our critical CSS approach and study how much smaller a handcrafted critical CSS would be. We took a deep breath and spent days around the code coverage tool on key pages. We grouped CSS rules manually and removed duplicates and legacy code in both places — the critical CSS and the main CSS. It was a much-needed cleanup indeed, as many styles that were written back in 2017–2018 have become obsolete over the years.

As a result, we ended up with three handcrafted critical CSS files, and with three more files that are currently work in progress:

The files are inlined in the head of each template, and at the moment they are duplicated in the monolithic CSS bundle that contains everything ever used (or not really used anymore) on the site. At the moment, we are looking into breaking down the full CSS bundle into a few CSS packages, so a reader of the magazine wouldn’t download styles from the job board or book pages, but then when reaching those pages would get a quick render with critical CSS and get the rest of the CSS for that page asynchronously — only on that page.

Admittedly, handcrafted critical CSS files weren’t much smaller in size: we reduced the size of critical CSS files by around 14%. However, they included everything we needed in the right order from start to finish, without duplicates or overriding styles. This seemed to be a step in the right direction, and it gave us a Lighthouse boost of another 3–4 points. We were making progress.

Changing The Web Font Loading

With font-display at our fingertips, font loading seems like a problem of the past. Unfortunately, it isn’t quite right in our case. You, dear readers, seem to visit a number of articles on Smashing Magazine. You also frequently return to the site to read yet another article — perhaps a few hours or days later, or perhaps a week later. One of the issues that we had with font-display used across the site was that for readers who moved between articles a lot, we noticed plenty of flashes between the fallback font and the web font (which shouldn’t normally happen as fonts would be properly cached).

That didn’t feel like a decent user experience, so we looked into options. On Smashing, we are using two main typefaces — Mija for headings and Elena for body copy. Mija comes in two weights (Regular and Bold), while Elena comes in three weights (Regular, Italic, Bold). We dropped Elena’s Bold Italic years ago during the redesign just because we used it on just a few pages. We subset the other fonts by removing unused characters and Unicode ranges.

Our articles are mostly set in text, so we’ve discovered that most of the time on the site the Largest Contentful Paint is either the first paragraph of text in an article or the photo of the author. That means that we need to take extra care of ensuring that the first paragraph appears quickly in a fallback font, while gracefully changing over to the web font with minimal reflows.

Take a close look at the initial loading experience of the front page (slowed down three times):

The first one was occurring due to expensive layout recalculations caused by the change of the fonts (from fallback font to web font), causing over 290ms of extra work (on a fast laptop and a fast connection). By removing stage one from the font loading alone, we were able to gain around 80ms back. It wasn’t good enough though, because we were way beyond the 50ms budget. So we started digging deeper.

The main reason why recalculations happened was merely because of the huge differences between fallback fonts and web fonts. By matching the line-height and sizes for fallback fonts and web fonts, we were able to avoid many situations where a line of text would wrap onto a new line in the fallback font, but then get slightly smaller and fit on the previous line, causing a major change in the geometry of the entire page, and consequently massive layout shifts. We also played with letter-spacing and word-spacing, but it didn't produce good results.
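
One way to express this kind of matching in CSS is with font metric overrides on a fallback @font-face (the values here are hypothetical placeholders, not our production numbers):

@font-face {
  font-family: "Elena-fallback";
  src: local("Georgia");
  /* Tune the fallback so lines wrap the same way as in the web font */
  size-adjust: 98%;
  ascent-override: 95%;
  descent-override: 25%;
}

body {
  font-family: "Elena", "Elena-fallback", serif;
}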

With these changes, we were able to cut another 50-80ms, but we weren’t able to reduce it below 120ms without displaying the content in a fallback font and displaying the content in the web font afterwards. Obviously, it should massively affect only first-time visitors, as subsequent page views would be rendered with the fonts retrieved directly from the service worker’s cache, without costly reflows due to the font switch.

By the way, it’s quite important to note that in our case, we noticed that most Long Tasks weren’t caused by massive JavaScript, but instead by Layout Recalculations and parsing of the CSS, which meant that we needed to do a bit of CSS cleaning, especially watching out for situations when styles are overwritten. In some way, it was good news because we didn’t have to deal with complex JavaScript issues that much. However, it turned out not to be straightforward as we are still cleaning up the CSS this very day. We were able to remove two Long Tasks for good, but we still have a few outstanding ones and quite a way to go. Fortunately, most of the time we aren’t way above the magical 50ms threshold.

The much bigger issue was the JavaScript bundle we were serving, occupying the main thread for a whopping 580ms. Most of this time was spent in booting up app.js which contains React, Redux, Lodash, and a Webpack module loader. The only way to improve performance with this massive beast was to break it down into smaller pieces. So we looked into doing just that.

With Webpack, we split up the monolithic bundle into smaller chunks with code-splitting, about 30Kb per chunk. We did some package.json cleansing and version upgrades for all production dependencies, adjusted the browserslistrc setup to address the two latest browser versions, upgraded Webpack and Babel to the latest versions, moved to Terser for minification, and used ES2017 (+ browserslistrc) as a target for script compilation.

We also used BabelEsmPlugin to generate modern versions of existing dependencies. Finally, we added prefetch links to the header for all necessary script chunks and refactored the service worker, migrating to Workbox with Webpack (workbox-webpack-plugin).
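
A condensed sketch of the relevant Webpack options (not our actual configuration):

// webpack.config.js (sketch)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  target: 'browserslist', // picks up the browserslistrc setup
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxSize: 30 * 1024, // aim for chunks of roughly 30Kb
    },
    minimizer: [new TerserPlugin()],
  },
};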

Remember when we switched to the new navigation back in mid-2020, just to see a huge performance penalty as a result? The reason for it was quite simple. While in the past the navigation was just static plain HTML and a bit of CSS, with the new navigation, we needed a bit of JavaScript to act on opening and closing of the menu on mobile and on desktop. That was causing rage clicks when you would click on the navigation menu and nothing would happen, and of course, had a penalty cost in Time-To-Interactive scores in Lighthouse.

We removed the script from the bundle and extracted it as a separate script. Additionally, we did the same for other rarely used standalone scripts — for syntax highlighting, tables, video embeds, and code embeds — and removed them from the main bundle; instead, we load them granularly, only when needed.

However, what we didn’t notice for months was that although we had removed the navigation script from the bundle, it was loading only after the entire app.js bundle was evaluated, which wasn’t really helping Time-To-Interactive (see image above). We fixed it by preloading nav.js and deferring it so it executes in order of appearance in the DOM, and managed to save another 100ms with that operation alone. By the end, with everything in place, we were able to bring the task down to around 220ms.
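
The pattern itself is just two tags. A sketch, assuming a standalone nav.js (the path is hypothetical):

<!-- In <head>: start fetching the script early, without executing it. -->
<link rel="preload" href="/js/nav.js" as="script">

<!-- Later in the document: defer executes scripts in DOM order, after parsing. -->
<script src="/js/nav.js" defer></script>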

We managed to get some improvement in place, but still have quite a way to go, with further React and Webpack optimizations on our to-do list. At the moment we still have three major Long Tasks — font switch (120ms), app.js execution (220ms) and style recalculations due to the size of full CSS (140ms). For us, it means cleaning up and breaking up the monolithic CSS next.

It’s worth mentioning that these results are really best-case results. On a given article page, we might have a large number of code and video embeds, along with other third-party scripts and customers’ browser extensions, which would require a separate conversation.

Dealing With 3rd-Parties

Fortunately, our third-party scripts footprint (and the impact of their friends’ fourth-party scripts) wasn’t huge from the start. But as these third-party scripts accumulated, they would drive performance down significantly. This goes especially for video embedding scripts, but also for syntax highlighting, advertising scripts, promo panel scripts, and any external iframe embeds.

Obviously, we defer all of these scripts to start loading after the DOMContentLoaded event, but once they finally come on stage, they cause quite a bit of work on the main thread. This shows up especially on article pages, which make up the vast majority of content on the site.

The first thing we did was to allocate proper space for all assets that are injected into the DOM after the initial page render. That meant width and height attributes for all advertising images and styling for code snippets. We found out that because all the scripts were deferred, new styles were invalidating existing styles, causing massive layout shifts for every code snippet that was displayed. We fixed that by adding the necessary styles to the critical CSS on the article pages.
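
For instance, a late-injected ad creative stops shifting the layout once its slot carries explicit dimensions. A sketch with hypothetical markup:

<!-- The width/height pair lets the browser reserve the image's
     aspect ratio before the deferred ad script fills the slot. -->
<div class="ad-slot">
  <img src="/ads/placeholder.png" width="300" height="250" alt="Advertisement">
</div>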

We’ve re-established a strategy for optimizing images (preferably AVIF or WebP — still a work in progress, though). All images below the 1000px height threshold are natively lazy-loaded (with <img loading=lazy>), while the ones at the top are prioritized (<img loading=eager>). The same goes for all third-party embeds.
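
In markup, that split looks something like this (paths, dimensions, and alt text are illustrative):

<!-- Above the fold: fetch immediately. -->
<img src="/img/hero.jpg" loading="eager" width="1200" height="675" alt="Hero illustration">

<!-- Below the threshold: let the browser defer the request until needed. -->
<img src="/img/figure.jpg" loading="lazy" width="800" height="450" alt="Figure">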

We replaced some dynamic parts with static counterparts — e.g. while a note about an article saved for offline reading used to appear dynamically after the article was added to the service worker’s cache, it now appears statically, as we are, well, a bit optimistic and expect it to happen in all modern browsers.

As of the time of writing, we’re preparing facades for code embeds and video embeds as well. Plus, all offscreen images get the decoding=async attribute, so the browser has free rein over when and how it decodes them, asynchronously and in parallel.
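
A video facade can be as simple as a static thumbnail that swaps itself for the real iframe on demand. A minimal sketch, assuming a hypothetical .video-facade element (markup, class name, and dimensions are all illustrative):

<button class="video-facade" data-video-id="VIDEO_ID" aria-label="Play video">
  <img src="/img/video-thumb.jpg" width="640" height="360"
       loading="lazy" decoding="async" alt="Video preview">
</button>

<script>
  // Swap the lightweight facade for the heavy iframe only when clicked.
  document.querySelectorAll('.video-facade').forEach((btn) => {
    btn.addEventListener('click', () => {
      const iframe = document.createElement('iframe');
      iframe.width = '640';
      iframe.height = '360';
      iframe.src = 'https://www.youtube.com/embed/' + btn.dataset.videoId + '?autoplay=1';
      iframe.allow = 'autoplay';
      btn.replaceWith(iframe);
    });
  });
</script>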

To ensure that our images always include width and height attributes, we also modified Harry Roberts’ snippet and Tim Kadlec’s diagnostics CSS to highlight whenever an image isn’t served properly. It’s used in development and editing, but obviously not in production.
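
We won’t reproduce the exact snippet here, but its core idea fits in one rule: flag any image served without explicit dimensions during development.

/* Development-only: outline images missing explicit dimensions. */
img:not([width]):not([height]) {
  outline: 5px solid hotpink;
}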

One technique that we used frequently to track what exactly is happening as the page is being loaded was slow-motion loading.

First, we’ve added a simple line of code to the diagnostics CSS, which provides a noticeable outline for all elements on the page.

* {
  outline: 3px solid red;
}

Then we record a video of the page loading on both a slow and a fast connection, rewatch it at reduced playback speed, and scrub back and forth to identify where the massive layout shifts happen.

Here’s the recording of a page being loaded on a fast connection:

The reason for the poor score on mobile is clearly poor Time-To-Interactive and poor Total Blocking Time due to the booting of the app and the size of the full CSS file. So there is still some work to be done there.

As for the next steps, we are currently looking into further reducing the size of the CSS, specifically breaking it down into modules similarly to JavaScript, and loading some parts of the CSS (e.g. checkout, job board, books/eBooks) only when needed.

We are also exploring options for further bundling experimentation on mobile to reduce the performance impact of app.js, although it seems to be non-trivial at the moment. Finally, we’ll be looking into alternatives to our cookie prompt solution, rebuilding our containers with CSS clamp(), replacing the padding-bottom ratio technique with aspect-ratio, and looking into serving as many images as possible in AVIF.
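
To illustrate that last swap, here are the two ways of reserving a 16:9 box: the old ratio hack and its aspect-ratio replacement (the class name is hypothetical):

/* Before: the padding-bottom ratio hack. */
.embed-wrapper {
  position: relative;
  height: 0;
  padding-bottom: 56.25%; /* 9 / 16 = 0.5625 */
}
.embed-wrapper iframe {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
}

/* After: the same reserved space, declared directly. */
.embed-wrapper {
  aspect-ratio: 16 / 9;
}
.embed-wrapper iframe {
  width: 100%;
  height: 100%;
}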

That’s It, Folks!

Hopefully, this little case study will be useful to you, and perhaps there are one or two techniques that you might be able to apply to your project right away. In the end, performance is the sum of all the fine little details that, when added up, make or break your customer’s experience.

While we are very committed to getting better at performance, we also work on improving accessibility and the content of the site. So if you spot anything that’s not quite right or anything that we could do to further improve Smashing Magazine, please let us know in the comments to this article.

Finally, if you’d like to stay updated on articles like this one, please subscribe to our email newsletter for friendly web tips, goodies, tools and articles, and a seasonal selection of Smashing cats.

How to Add a Request to Callback Form in WordPress

Do you want to add a request to callback form on your WordPress site?

A request to callback form allows users to provide you with their phone number, so you can call them at their convenience. This helps you capture more leads, improve conversions, and grow your business.

In this article, we’ll show you how to easily add a request to callback form in WordPress with bonus tips on managing the calls like a pro.


Why Add a Callback Request Form in WordPress

When visitors are interested in your product or service, they may want to connect with you to get more information. While some people prefer live chat or email, others prefer talking over the phone.

Normally, you would add a ‘Click to call’ button that allows users to dial your business phone number.

WP Call Button

However, not all businesses can afford a 24/7 phone service with sales or support staff answering all the calls. A request to callback form helps you fix this problem.

Instead of talking to a representative right away, customers can leave their phone numbers along with other information, including the best time to call them. After that, you can call them back during office hours.

This allows you to offer better customer support, capture more leads, and convert more visitors into customers.

That being said, let’s take a look at how to easily add a callback request form in WordPress.

How to Add a Request to Callback Form in WordPress

The simplest way to add a callback request form to your WordPress site is by using WPForms.

It is the best WordPress form builder plugin on the market and allows you to easily add any kind of form to your WordPress website. Over 4 million websites are using WPForms to create better forms.

First, you need to install and activate the WPForms plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, you need to visit the WPForms » Settings page to enter your license key. You can find this information under your account on the WPForms website.

Entering your WPForms license key on your site

After that, you can go to the WPForms » Add New page to create your first form.

Add a new form in WPForms

First, you need to provide a title for your form and then select the Simple Contact Form template.

Create new form in WPForms

This will load the form editor, where you will see form fields in the left column and a live preview of your form on the right. Simply click on any field in the left column to add it to your form.

We suggest using Name, Email, Phone, and Date/Time fields in your callback request form.

Add fields in WPForms

You can edit any form field by clicking on it, and WPForms will automatically switch the tab to Field Options.

Viewing and changing the Advanced Options for the Date/Time field

From here, you can change the field label and description. You can also change the format, field size, and placeholder text.

Once you’re happy with the form, click on the Save button to store your changes.

Save request to callback form in WPForms

Next, you need to add the callback request form to your website.

WPForms makes it super easy to add forms to any post or page on your website.

Simply edit the post or page where you want to display the form. On the post edit screen, click on the add new block button (+) and then add the WPForms block to your content area.

Creating a WPForms block and selecting your form from the dropdown list

Next, you need to select the form you created earlier from the dropdown menu. WPForms will automatically load a preview of your form in the content area.

You can now Save or Publish your changes and preview your post or page to see the callback request form in action.

Request to callback forms on WPForms

Note: If you are using the old classic editor on your WordPress site, then you can add the form by clicking on the ‘Add Form’ button above the post editor.

Adding a form to the page using the classic WordPress editor
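
WPForms also provides a shortcode, which is handy for widgets or theme templates. Replace the ID below with your own form’s ID from the WPForms » All Forms page (the ‘1234’ is just a placeholder):

[wpforms id="1234"]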

Customizing Your Callback Request Form Settings

Now that you’ve added a callback request form to your site, you can edit it at any time and customize it to your preferences.

Simply go to the WPForms » All Forms page and click on the form you created earlier. This opens the form builder interface where you can edit your form, add or remove fields, change labels, and more.

You can now switch to the Settings tab from the left column. From here you can change your form settings like form name, description, button label, notifications, and more.

WPForms General Settings

Configure Notifications for Request to Callback Form

By default, WPForms uses your site’s administrator email to notify you when a user submits the form.

You can send notifications to any other email address if you want or set up confirmation notification emails for the users as well.

Simply switch to the Notifications tab under form settings and you will see the default notification settings. You can change them to add a different email address to notify or change the notification message.

Changing the details of the notification emails

You can also click on the ‘Add New Notification’ button to create your own custom notifications. This comes in handy when you also want to notify the user that you have received their callback request.

WPForms will ask you to provide a name for the new notification and then show you the notification settings. You can click on the smart tags to insert values submitted by the user in the form fields, e.g. Name or Email.

Using smart tags to create custom notifications
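
For instance, to send the confirmation to the address the visitor typed into your form, you can point the notification’s Send To field at that form field with a smart tag. A sketch with hypothetical field IDs (yours will differ):

Send To Email Address: {field_id="1"}
Email Subject: We received your callback request
Message: Hi {field_id="0"}, we will call you at {field_id="2"} during office hours.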

Once you are finished, don’t forget to click on the ‘Save’ button to store your form settings.

We have written a detailed guide on how to send notifications to multiple recipients in WPForms.

Customize the Form Submission Message for Visitors

When visitors enter their details and submit the form, a default message is displayed. You can change this to show a custom success message or redirect users to any post or page on your website.

Simply switch to the Confirmations tab under form settings.

Change confirmation message or type

Under confirmation type, you can choose what happens when a user submits the form. You can edit the success message or redirect them to any page on your website.

Once you are finished, don’t forget to click on the ‘Save’ button to store your settings.

You can now go ahead and test your form by filling it out. Based on your confirmation settings, WPForms will then show you the success message or redirect you.

Form submitted successfully

How to View ‘Request to callback’ Form Submissions in WPForms

The best thing about using WPForms is that it automatically saves all form submissions to your WordPress database. This means you can easily view form entries even if you don’t receive email notifications.

Simply go to the WPForms » Entries page inside the WordPress admin area and click on the form you created earlier.

Managing your form entries in WPForms

On the next page, you’ll see a list of entries submitted by your users, along with their details. To organize your entries, use the star to highlight the ones that are important, and use the green circle to mark an entry as ‘Read’.

Star and circle feature to highlight and read in WPForms

Next, you can view, edit, and delete the details of each entry. You can also add a note to individual form entries, which helps you keep track of leads and create notes for follow-up requests.

Add a note in WPForms entries

Promoting The Callback Request Form on Your Website

Now that you have created a callback request form for your website, you may want to promote it so that users can easily find and take advantage of this service.

This is where OptinMonster comes in. It is the best lead generation tool on the market and allows you to easily convert website visitors into leads and paying customers.

It works really well with WPForms and you can embed your forms into popups, slide-ins, fullscreen popups, and more. See our tutorial on how to create a contact form popup in WordPress.

Promoting your callback service with OptinMonster

OptinMonster comes with powerful display rules, which allow you to display the callback form to users when it is most effective.

For example, you can show the callback form only to users in a specific region or country, or only when users are viewing a specific page on your site.

Finding a Reliable Business Phone Service

Many businesses use their landline or mobile numbers to conduct business and answer customer calls. However, this is not the most effective solution.

Conventional phone services lack advanced call management features, which are essential for businesses to offer a better customer experience.

This is why we use Nextiva in our business. They are the best business phone service provider on the market and allow you to manage your business calls using VoIP (Voice over Internet Protocol).

You can use it with any device, including your mobile phone, a desk phone, laptop, or tablet. You can choose a number from any region and share it with different team members. It also includes smart features such as call forwarding, voice messages, automated responses, ringtones, and more.

The voice quality is better too and it is way cheaper than traditional phone services.

We hope this article helped you learn how to easily add a request to callback form in WordPress. You may also want to see our guide on how to track user journey on lead forms, and our ultimate WordPress conversion tracking guide to grow your business.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.
